WorldWideScience

Sample records for neural spiking activity

  1. A Granger causality measure for point process models of ensemble neural spiking activity.

    Directory of Open Access Journals (Sweden)

    Sanggyun Kim

    2011-03-01

    Full Text Available The ability to identify directional interactions that occur among multiple neurons in the brain is crucial to an understanding of how groups of neurons cooperate in order to generate specific brain functions. However, an optimal method of assessing these interactions has not been established. Granger causality has proven to be an effective method for the analysis of the directional interactions between multiple sets of continuous-valued data, but cannot be applied to neural spike train recordings due to their discrete nature. This paper proposes a point process framework that enables Granger causality to be applied to point process data such as neural spike trains. The proposed framework uses the point process likelihood function to relate a neuron's spiking probability to possible covariates, such as its own spiking history and the concurrent activity of simultaneously recorded neurons. Granger causality is assessed based on the relative reduction of the point process likelihood of one neuron obtained excluding one of its covariates compared to the likelihood obtained using all of its covariates. The method was tested on simulated data, and then applied to neural activity recorded from the primary motor cortex (MI) of a Felis catus subject. The interactions present in the simulated data were predicted with a high degree of accuracy, and when applied to the real neural data, the proposed method identified causal relationships between many of the recorded neurons. This paper proposes a novel method that successfully applies Granger causality to point process data, and has the potential to provide unique physiological insights when applied to neural spike trains.

  2. A Granger causality measure for point process models of ensemble neural spiking activity.

    Science.gov (United States)

    Kim, Sanggyun; Putrino, David; Ghosh, Soumya; Brown, Emery N

    2011-03-01

    The ability to identify directional interactions that occur among multiple neurons in the brain is crucial to an understanding of how groups of neurons cooperate in order to generate specific brain functions. However, an optimal method of assessing these interactions has not been established. Granger causality has proven to be an effective method for the analysis of the directional interactions between multiple sets of continuous-valued data, but cannot be applied to neural spike train recordings due to their discrete nature. This paper proposes a point process framework that enables Granger causality to be applied to point process data such as neural spike trains. The proposed framework uses the point process likelihood function to relate a neuron's spiking probability to possible covariates, such as its own spiking history and the concurrent activity of simultaneously recorded neurons. Granger causality is assessed based on the relative reduction of the point process likelihood of one neuron obtained excluding one of its covariates compared to the likelihood obtained using all of its covariates. The method was tested on simulated data, and then applied to neural activity recorded from the primary motor cortex (MI) of a Felis catus subject. The interactions present in the simulated data were predicted with a high degree of accuracy, and when applied to the real neural data, the proposed method identified causal relationships between many of the recorded neurons. This paper proposes a novel method that successfully applies Granger causality to point process data, and has the potential to provide unique physiological insights when applied to neural spike trains.
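
    A minimal sketch of the likelihood-comparison idea described above, using a discrete-time Poisson GLM as the point-process model: the "Granger" influence of neuron A on neuron B is taken as the drop in B's point-process log-likelihood when A's spiking history is removed from the covariates. The synthetic spike trains, the 5-bin history length, and the statsmodels dependency are illustrative assumptions, not details from the paper.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    T = 20000                                     # number of 1 ms bins (illustrative)

    # Synthetic example: neuron A fires at random; neuron B is excited by A's recent spikes.
    spikes_A = rng.random(T) < 0.02
    drive = np.convolve(spikes_A, np.ones(5), mode="full")[:T]
    spikes_B = rng.random(T) < np.clip(0.005 + 0.05 * drive, 0.0, 0.5)

    def history(x, lags):
        """Design matrix holding the last `lags` bins of x (spiking-history covariates)."""
        return np.column_stack([np.roll(x, k) for k in range(1, lags + 1)])[lags:].astype(float)

    lags = 5
    y = spikes_B[lags:].astype(float)
    X_full = sm.add_constant(np.hstack([history(spikes_B, lags), history(spikes_A, lags)]))
    X_red = sm.add_constant(history(spikes_B, lags))          # A's history excluded

    ll_full = sm.GLM(y, X_full, family=sm.families.Poisson()).fit().llf
    ll_red = sm.GLM(y, X_red, family=sm.families.Poisson()).fit().llf

    # Granger-style measure: log-likelihood lost when A's history is removed from B's model.
    print(f"Granger measure A -> B: {ll_full - ll_red:.1f} (larger = stronger directed influence)")
    ```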

  3. Auto-deleting brain machine interface: Error detection using spiking neural activity in the motor cortex.

    Science.gov (United States)

    Even-Chen, Nir; Stavisky, Sergey D; Kao, Jonathan C; Ryu, Stephen I; Shenoy, Krishna V

    2015-01-01

    Brain machine interfaces (BMIs) aim to assist people with paralysis by increasing their independence and ability to communicate, e.g., by using a cursor-based virtual keyboard. Current BMI clinical trials are hampered by modest performance that causes selection of wrong characters (errors) and thus reduces achieved typing rate. If it were possible to detect these errors without explicit knowledge of the task goal, this could be used to automatically "undo" wrong selections or even prevent upcoming wrong selections. We decoded imminent or recent errors during closed-loop BMI control from intracortical spiking neural activity. In our experiment, a non-human primate controlled a neurally-driven BMI cursor to acquire targets on a grid, which simulates a virtual keyboard. In offline analyses of this closed-loop BMI control data, we identified motor cortical neural signals indicative of task error occurrence. We were able to detect task outcomes (97% accuracy) and even predict upcoming task outcomes (86% accuracy) using neural activity alone. This novel strategy may help increase the performance and clinical viability of BMIs.

  4. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia

    Science.gov (United States)

    Kim, Sung-Phil; Simeral, John D.; Hochberg, Leigh R.; Donoghue, John P.; Black, Michael J.

    2008-12-01

    Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. Disclosure. JPD is the Chief Scientific Officer and a director of Cyberkinetics Neurotechnology Systems (CYKN); he holds stock and receives compensation. JDS has been a consultant for CYKN. LRH receives clinical trial support from CYKN.
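
    The abstract above contrasts conventional linear filters with a Kalman filter for velocity decoding. The sketch below shows the standard Kalman recursions on synthetic data, with the state and observation matrices fit by least squares; the linear tuning model, the bin size, and all parameters are assumptions for illustration, not the study's actual decoder.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic training data: 2-D cursor velocity and binned firing rates of 30 units.
    T, n_units = 2000, 30
    vel = np.cumsum(rng.normal(0.0, 0.05, size=(T, 2)), axis=0)
    tuning = rng.normal(0.0, 1.0, size=(n_units, 2))
    rates = vel @ tuning.T + rng.normal(0.0, 0.5, size=(T, n_units))

    # Least-squares fits of the state model x_t = A x_{t-1} + w and observation model z_t = H x_t + q.
    A = np.linalg.lstsq(vel[:-1], vel[1:], rcond=None)[0].T
    W = np.cov((vel[1:] - vel[:-1] @ A.T).T)
    H = np.linalg.lstsq(vel, rates, rcond=None)[0].T
    Q = np.cov((rates - vel @ H.T).T)

    def kalman_decode(z, A, W, H, Q):
        """Standard Kalman recursions: predict the velocity state, then correct with observed rates."""
        x, P = np.zeros(2), np.eye(2)
        out = []
        for zt in z:
            x, P = A @ x, A @ P @ A.T + W                         # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)          # Kalman gain
            x, P = x + K @ (zt - H @ x), (np.eye(2) - K @ H) @ P  # update
            out.append(x)
        return np.array(out)

    decoded = kalman_decode(rates, A, W, H, Q)
    print("correlation with true x-velocity:", np.corrcoef(decoded[:, 0], vel[:, 0])[0, 1])
    ```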

  5. The effects of dynamical synapses on firing rate activity: a spiking neural network model.

    Science.gov (United States)

    Khalil, Radwa; Moftah, Marie Z; Moustafa, Ahmed A

    2017-11-01

    Accumulating evidence relates the fine-tuning of synaptic maturation and regulation of neural network activity to several key factors, including GABA A signaling and a lateral spread length between neighboring neurons (i.e., local connectivity). Furthermore, a number of studies consider short-term synaptic plasticity (STP) as an essential element in the instant modification of synaptic efficacy in the neuronal network and in modulating responses to sustained ranges of external Poisson input frequency (IF). Nevertheless, evaluating the firing activity in response to the dynamical interaction between STP (triggered by ranges of IF) and these key parameters in vitro remains elusive. Therefore, we designed a spiking neural network (SNN) model in which we incorporated the following parameters: local density of arbor essences and a lateral spread length between neighboring neurons. We also created several network scenarios based on these key parameters. Then, we implemented two classes of STP: (1) short-term synaptic depression (STD) and (2) short-term synaptic facilitation (STF). Each class has two differential forms based on the parametric value of its synaptic time constant (either for depressing or facilitating synapses). Lastly, we compared the neural firing responses before and after the treatment with STP. We found that dynamical synapses (STP) have a critical differential role on evaluating and modulating the firing rate activity in each network scenario. Moreover, we investigated the impact of changing the balance between excitation (E) and inhibition (I) on stabilizing this firing activity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
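
    For concreteness, the sketch below implements the widely used Tsodyks-Markram form of short-term depression and facilitation, which is one common way to realize the STD/STF classes described above; the time constants and the release parameter U are illustrative, not the values used in the study.

    ```python
    import numpy as np

    def stp_efficacies(spike_times, tau_d=0.2, tau_f=0.05, U=0.5, dt=0.001, T=2.0):
        """Tsodyks-Markram dynamics: x = available resources (depression), u = utilization
        (facilitation). Each presynaptic spike releases u*x, the efficacy of that spike."""
        n = int(T / dt)
        spikes = np.zeros(n, dtype=bool)
        spikes[(np.asarray(spike_times) / dt).astype(int)] = True
        x, u = 1.0, U
        efficacy = np.zeros(n)
        for t in range(n):
            x += dt * (1.0 - x) / tau_d          # resources recover toward 1
            u += dt * (U - u) / tau_f            # utilization relaxes back toward U
            if spikes[t]:
                u += U * (1.0 - u)               # facilitation: utilization jumps on each spike
                efficacy[t] = u * x              # effective strength of this spike
                x -= u * x                       # depression: released resources are consumed
        return efficacy[spikes]

    train = np.arange(0.05, 1.0, 0.05)           # a regular 20 Hz presynaptic train
    print("depressing  :", np.round(stp_efficacies(train, tau_d=0.5, tau_f=0.05, U=0.5), 3))
    print("facilitating:", np.round(stp_efficacies(train, tau_d=0.05, tau_f=0.5, U=0.1), 3))
    ```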

  6. A customizable stochastic state point process filter (SSPPF) for neural spiking activity.

    Science.gov (United States)

    Xin, Yao; Li, Will X Y; Min, Biao; Han, Yan; Cheung, Ray C C

    2013-01-01

    Stochastic State Point Process Filter (SSPPF) is effective for adaptive signal processing. In particular, it has been successfully applied to neural signal coding/decoding in recent years. Recent work has proven its efficiency in non-parametric coefficient tracking in modeling of the mammalian nervous system. However, the existing SSPPF has only been realized on commercial software platforms, which limits its computational capability. In this paper, the first hardware architecture of SSPPF has been designed and successfully implemented on a field-programmable gate array (FPGA), providing a more efficient means for coefficient tracking in a well-established generalized Laguerre-Volterra model for mammalian hippocampal spiking activity research. By exploring the intrinsic parallelism of the FPGA, the proposed architecture is able to process matrices or vectors of arbitrary size, and is efficiently scalable. Experimental results show its superior performance compared to the software implementation, while maintaining the numerical precision. This architecture can also be potentially utilized in future hippocampal cognitive neural prosthesis design.
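
    The sketch below shows the usual SSPPF recursion for a log-linear conditional intensity (predict the coefficient covariance, then correct it with the point-process innovation). It is a software illustration of the algorithm class only, not the paper's FPGA architecture or its generalized Laguerre-Volterra model; the state-noise level and the simulated data are assumptions.

    ```python
    import numpy as np

    def ssppf(spikes, X, dt=0.001, q=1e-5):
        """Stochastic state point process filter for a log-linear intensity
        lambda_t = exp(theta_t . x_t), tracking theta_t bin by bin.
        (Log-linear case: the Hessian of log-lambda w.r.t. theta vanishes,
        so only the first information term appears in the covariance update.)"""
        T, d = X.shape
        theta = np.zeros(d)
        W = 0.1 * np.eye(d)                      # posterior covariance of the coefficients
        Q = q * np.eye(d)                        # random-walk state noise
        trace = np.zeros((T, d))
        for t in range(T):
            W = W + Q                            # one-step covariance prediction
            x = X[t]
            lam = np.exp(theta @ x)
            W = np.linalg.inv(np.linalg.inv(W) + np.outer(x, x) * lam * dt)
            theta = theta + W @ x * (spikes[t] - lam * dt)   # innovation: observed - expected
            trace[t] = theta
        return trace

    # Illustrative use: track a drifting tuning coefficient from simulated spikes.
    rng = np.random.default_rng(2)
    T = 5000
    X = np.column_stack([np.ones(T), rng.standard_normal(T)])
    theta_true = np.column_stack([np.full(T, 4.0), np.linspace(0.0, 1.0, T)])
    p = np.exp(np.sum(theta_true * X, axis=1)) * 0.001
    spikes = (rng.random(T) < p).astype(float)
    print(ssppf(spikes, X)[-1])                  # rough estimate of the final [baseline, gain] ~ [4, 1]
    ```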

  7. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  8. Implementing Signature Neural Networks with Spiking Neurons

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm—i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data—to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the

  9. An overview of Bayesian methods for neural spike train analysis.

    Science.gov (United States)

    Chen, Zhe

    2013-01-01

    Neural spike train analysis is an important task in computational neuroscience which aims to understand neural mechanisms and gain insights into neural circuits. With the advancement of multielectrode recording and imaging technologies, it has become increasingly demanding to develop statistical tools for analyzing large neuronal ensemble spike activity. Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both single neuron and population levels. On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation. On the application side, the topics include spike sorting, tuning curve estimation, neural encoding and decoding, deconvolution of spike trains from calcium imaging signals, and inference of neuronal functional connectivity and synchrony. Some research challenges and opportunities for neural spike train analysis are discussed.

  10. An Overview of Bayesian Methods for Neural Spike Train Analysis

    Directory of Open Access Journals (Sweden)

    Zhe Chen

    2013-01-01

    Full Text Available Neural spike train analysis is an important task in computational neuroscience which aims to understand neural mechanisms and gain insights into neural circuits. With the advancement of multielectrode recording and imaging technologies, it has become increasingly demanding to develop statistical tools for analyzing large neuronal ensemble spike activity. Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both single neuron and population levels. On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation. On the application side, the topics include spike sorting, tuning curve estimation, neural encoding and decoding, deconvolution of spike trains from calcium imaging signals, and inference of neuronal functional connectivity and synchrony. Some research challenges and opportunities for neural spike train analysis are discussed.

  11. Multiple-color optical activation, silencing, and desynchronization of neural activity, with single-spike temporal resolution.

    Directory of Open Access Journals (Sweden)

    Xue Han

    Full Text Available The quest to determine how precise neural activity patterns mediate computation, behavior, and pathology would be greatly aided by a set of tools for reliably activating and inactivating genetically targeted neurons, in a temporally precise and rapidly reversible fashion. Having earlier adapted a light-activated cation channel, channelrhodopsin-2 (ChR2), for allowing neurons to be stimulated by blue light, we searched for a complementary tool that would enable optical neuronal inhibition, driven by light of a second color. Here we report that targeting the codon-optimized form of the light-driven chloride pump halorhodopsin from the archaebacterium Natronomonas pharaonis (hereafter abbreviated Halo) to genetically-specified neurons enables them to be silenced reliably, and reversibly, by millisecond-timescale pulses of yellow light. We show that trains of yellow and blue light pulses can drive high-fidelity sequences of hyperpolarizations and depolarizations in neurons simultaneously expressing yellow light-driven Halo and blue light-driven ChR2, allowing for the first time manipulations of neural synchrony without perturbation of other parameters such as spiking rates. The Halo/ChR2 system thus constitutes a powerful toolbox for multichannel photoinhibition and photostimulation of virally or transgenically targeted neural circuits without need for exogenous chemicals, enabling systematic analysis and engineering of the brain, and quantitative bioengineering of excitable cells.

  12. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated...

  13. Decoding Lower Limb Muscle Activity and Kinematics from Cortical Neural Spike Trains during Monkey Performing Stand and Squat Movements

    Science.gov (United States)

    Ma, Xuan; Ma, Chaolin; Huang, Jian; Zhang, Peng; Xu, Jiang; He, Jiping

    2017-01-01

    Extensive literatures have shown approaches for decoding upper limb kinematics or muscle activity using multichannel cortical spike recordings toward brain machine interface (BMI) applications. However, similar topics regarding lower limb remain relatively scarce. We previously reported a system for training monkeys to perform visually guided stand and squat tasks. The current study, as a follow-up extension, investigates whether lower limb kinematics and muscle activity characterized by electromyography (EMG) signals during monkey performing stand/squat movements can be accurately decoded from neural spike trains in primary motor cortex (M1). Two monkeys were used in this study. Subdermal intramuscular EMG electrodes were implanted to 8 right leg/thigh muscles. With ample data collected from neurons from a large brain area, we performed a spike triggered average (SpTA) analysis and got a series of density contours which revealed the spatial distributions of different muscle-innervating neurons corresponding to each given muscle. Based on the guidance of these results, we identified the locations optimal for chronic electrode implantation and subsequently carried on chronic neural data recordings. A recursive Bayesian estimation framework was proposed for decoding EMG signals together with kinematics from M1 spike trains. Two specific algorithms were implemented: a standard Kalman filter and an unscented Kalman filter. For the latter one, an artificial neural network was incorporated to deal with the nonlinearity in neural tuning. High correlation coefficient and signal to noise ratio between the predicted and the actual data were achieved for both EMG signals and kinematics on both monkeys. Higher decoding accuracy and faster convergence rate could be achieved with the unscented Kalman filter. These results demonstrate that lower limb EMG signals and kinematics during monkey stand/squat can be accurately decoded from a group of M1 neurons with the proposed

  14. Vectorized algorithms for spiking neural network simulation.

    Science.gov (United States)

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
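
    A toy NumPy illustration of the vectorization idea: the state variables of all neurons live in arrays, and each time step is a handful of vector operations instead of a Python loop over neurons. The leaky integrate-and-fire model, the all-to-all coupling shortcut, and all constants are assumptions for illustration and are not taken from Brian's internals.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N, dt, T = 4000, 1e-4, 0.5                    # 4000 LIF neurons, 0.1 ms steps, 0.5 s
    tau, v_rest, v_thresh, v_reset = 0.02, 0.0, 1.0, 0.0
    w = 0.4 / N                                   # weak all-to-all coupling (illustrative)

    v = rng.random(N)                             # one state vector for the whole population
    spike_count = 0
    for step in range(int(T / dt)):
        I = 1.2 + 0.5 * rng.standard_normal(N)    # noisy external drive
        v += dt / tau * (v_rest - v + I)          # single vectorized membrane update
        fired = v >= v_thresh                     # boolean vector of neurons that spiked
        spike_count += fired.sum()
        v[fired] = v_reset                        # vectorized reset
        v += w * fired.sum()                      # spike propagation (all-to-all scalar shortcut)

    print("mean firing rate: %.1f Hz" % (spike_count / N / T))
    ```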

  15. Spiking neural P systems with multiple channels.

    Science.gov (United States)

    Peng, Hong; Yang, Jinyu; Wang, Jun; Wang, Tao; Sun, Zhang; Song, Xiaoxiao; Luo, Xiaohui; Huang, Xiangnian

    2017-11-01

    Spiking neural P systems (SNP systems, in short) are a class of distributed parallel computing systems inspired from the neurophysiological behavior of biological spiking neurons. In this paper, we investigate a new variant of SNP systems in which each neuron has one or more synaptic channels, called spiking neural P systems with multiple channels (SNP-MC systems, in short). The spiking rules with channel label are introduced to handle the firing mechanism of neurons, where the channel labels indicate synaptic channels of transmitting the generated spikes. The computation power of SNP-MC systems is investigated. Specifically, we prove that SNP-MC systems are Turing universal as both number generating and number accepting devices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
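
    A hedged sketch of the core trick of treating the spike nonlinearity as differentiable: the forward pass is a hard threshold, while the backward pass substitutes a smooth pseudo-derivative. This uses a generic fast-sigmoid surrogate in PyTorch and is not the authors' exact formulation; the toy network, threshold, and target rate are assumptions.

    ```python
    import torch

    class SurrogateSpike(torch.autograd.Function):
        """Forward: hard threshold (spike if the argument is positive).
        Backward: gradient of a smooth fast-sigmoid surrogate instead of the true
        zero-almost-everywhere derivative of the step function."""
        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v > 0).float()

        @staticmethod
        def backward(ctx, grad_output):
            (v,) = ctx.saved_tensors
            surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
            return grad_output * surrogate

    spike = SurrogateSpike.apply

    # Toy usage: one leaky-integrator layer unrolled over 20 time steps, trained end to end.
    w = torch.randn(5, 3, requires_grad=True)
    x = torch.rand(20, 5)
    v = torch.zeros(3)
    outputs = []
    for t in range(20):
        v = 0.9 * v + x[t] @ w        # leaky membrane integration
        s = spike(v - 1.0)            # spike when the membrane crosses the threshold of 1
        v = v - s                     # soft reset by subtraction
        outputs.append(s)
    loss = (torch.stack(outputs).mean() - 0.2) ** 2   # push the layer toward a 20% firing rate
    loss.backward()                                   # gradients flow through the surrogate
    print(w.grad.abs().sum())
    ```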

  17. Delayed afterdepolarization and spontaneous secondary spiking in a simple model of neural activity

    Science.gov (United States)

    Klinshov, V. V.; Nekorkin, V. I.

    2012-03-01

    In this paper we suggest a new dynamical model of neuron excitability. It is based on the classical FitzHugh-Nagumo model in which we introduce the third variable for additional ionic current. By using the method of fast and slow motions we study the afterdepolarization, spontaneous secondary spiking and tonic spiking effects. We build regions in the parameter space that correspond to different dynamical regimes. The obtained results may be important for different problems of neuroscience, e.g. for the problem of working memory.
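
    An illustrative three-variable FitzHugh-Nagumo-type sketch in the spirit of the abstract: a slow additional current z outlasts the spike and feeds back into the voltage, producing a depolarizing afterpotential (and, for stronger feedback, secondary or tonic spiking). The specific equations and parameters below are illustrative choices, not the authors' model.

    ```python
    import numpy as np

    def fhn3(T=200.0, dt=0.01, eps=0.08, mu=0.005, k=1.0):
        """Fast-slow sketch: v (voltage), w (recovery), z (slow extra inward current).
        z builds up while the neuron is depolarized and decays slowly afterwards,
        so it can lift v above rest after a spike (a depolarizing afterpotential)."""
        n = int(T / dt)
        v, w, z = -1.2, -0.6, 0.0
        vs = np.zeros(n)
        for i in range(n):
            pulse = 1.0 if 10.0 <= i * dt < 12.0 else 0.0        # brief stimulus
            gate = 1.0 / (1.0 + np.exp(-10.0 * v))               # activation of the slow current
            dv = v - v ** 3 / 3.0 - w + k * z + pulse
            dw = eps * (v + 0.7 - 0.8 * w)
            dz = mu * (gate - z)
            v, w, z = v + dt * dv, w + dt * dw, z + dt * dz
            vs[i] = v
        return vs

    trace = fhn3()
    # A value above the resting level (about -1.2) after the spike indicates the afterpotential.
    print("post-spike maximum of v:", trace[4000:].max())
    ```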

  18. iSpike: a spiking neural interface for the iCub robot.

    Science.gov (United States)

    Gamez, D; Fidjeland, A K; Lazdins, E

    2012-06-01

    This paper presents iSpike: a C++ library that interfaces between spiking neural network simulators and the iCub humanoid robot. It uses a biologically inspired approach to convert the robot's sensory information into spikes that are passed to the neural network simulator, and it decodes output spikes from the network into motor signals that are sent to control the robot. Applications of iSpike range from embodied models of the brain to the development of intelligent robots using biologically inspired spiking neural networks. iSpike is an open source library that is available for free download under the terms of the GPL.

  19. Inferring oscillatory modulation in neural spike trains.

    Directory of Open Access Journals (Sweden)

    Kensuke Arai

    2017-10-01

    Full Text Available Oscillations are observed at various frequency bands in continuous-valued neural recordings like the electroencephalogram (EEG) and local field potential (LFP) in bulk brain matter, and analysis of spike-field coherence reveals that spiking of single neurons often occurs at certain phases of the global oscillation. Oscillatory modulation has been examined in relation to continuous-valued oscillatory signals, and independently from the spike train alone, but behavior or stimulus triggered firing-rate modulation, spiking sparseness, presence of slow modulation not locked to stimuli and irregular oscillations with large variability in oscillatory periods, present challenges to searching for temporal structures present in the spike train. In order to study oscillatory modulation in real data collected under a variety of experimental conditions, we describe a flexible point-process framework we call the Latent Oscillatory Spike Train (LOST) model to decompose the instantaneous firing rate in biologically and behaviorally relevant factors: spiking refractoriness, event-locked firing rate non-stationarity, and trial-to-trial variability accounted for by baseline offset and a stochastic oscillatory modulation. We also extend the LOST model to accommodate changes in the modulatory structure over the duration of the experiment, and thereby discover trial-to-trial variability in the spike-field coherence of a rat primary motor cortical neuron to the LFP theta rhythm. Because LOST incorporates a latent stochastic auto-regressive term, LOST is able to detect oscillations when the firing rate is low, the modulation is weak, and when the modulating oscillation has a broad spectral peak.

  20. Inferring oscillatory modulation in neural spike trains.

    Science.gov (United States)

    Arai, Kensuke; Kass, Robert E

    2017-10-01

    Oscillations are observed at various frequency bands in continuous-valued neural recordings like the electroencephalogram (EEG) and local field potential (LFP) in bulk brain matter, and analysis of spike-field coherence reveals that spiking of single neurons often occurs at certain phases of the global oscillation. Oscillatory modulation has been examined in relation to continuous-valued oscillatory signals, and independently from the spike train alone, but behavior or stimulus triggered firing-rate modulation, spiking sparseness, presence of slow modulation not locked to stimuli and irregular oscillations with large variability in oscillatory periods, present challenges to searching for temporal structures present in the spike train. In order to study oscillatory modulation in real data collected under a variety of experimental conditions, we describe a flexible point-process framework we call the Latent Oscillatory Spike Train (LOST) model to decompose the instantaneous firing rate in biologically and behaviorally relevant factors: spiking refractoriness, event-locked firing rate non-stationarity, and trial-to-trial variability accounted for by baseline offset and a stochastic oscillatory modulation. We also extend the LOST model to accommodate changes in the modulatory structure over the duration of the experiment, and thereby discover trial-to-trial variability in the spike-field coherence of a rat primary motor cortical neuron to the LFP theta rhythm. Because LOST incorporates a latent stochastic auto-regressive term, LOST is able to detect oscillations when the firing rate is low, the modulation is weak, and when the modulating oscillation has a broad spectral peak.

  1. Spike Neural Models Part II: Abstract Neural Models

    OpenAIRE

    Johnson, Melissa G.; Chartier, Sylvain

    2018-01-01

    Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNN) though, not all that complexity is required. Therefore simple, abstract models are often used. These models save time, use less computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate and Fire (LIF) model whic...
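
    The two abstract models named in the tutorial, in minimal form; the parameter values are the commonly quoted regular-spiking defaults and the constant input current is an arbitrary illustrative choice, not taken from the tutorial itself.

    ```python
    import numpy as np

    def lif(I, dt=1.0, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
        """Leaky Integrate-and-Fire: integrate a leaky membrane, spike and reset at threshold."""
        v, spikes = v_rest, []
        for t, i_t in enumerate(I):
            v += dt / tau * (v_rest - v + i_t)
            if v >= v_thresh:
                spikes.append(t)
                v = v_reset
        return spikes

    def izhikevich(I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
        """Izhikevich model (regular-spiking parameters): quadratic voltage dynamics plus
        a recovery variable u; realistic spike trains at low computational cost."""
        v, u, spikes = -65.0, -65.0 * b, []
        for t, i_t in enumerate(I):
            v += dt * (0.04 * v * v + 5 * v + 140 - u + i_t)
            u += dt * a * (b * v - u)
            if v >= 30.0:
                spikes.append(t)
                v, u = c, u + d
        return spikes

    I = np.full(500, 20.0)                       # 500 ms of constant input current (arbitrary units)
    print("LIF spikes       :", len(lif(I)))
    print("Izhikevich spikes:", len(izhikevich(I)))
    ```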

  2. Phase diagram of spiking neural networks.

    Science.gov (United States)

    Seyed-Allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, often it is assumed that every two neurons of the network are connected with a probability of 2%, 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments, observations, and trial and error, but here, I take a different perspective, inspired by evolution: I systematically simulate many networks, each with a different set of parameters, and then I try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, dominant frequency of population activities, total duration of activities, maximum population rate, and the occurrence time of that maximum rate. The results are organized in a phase diagram. This phase diagram gives an insight into the space of parameters - excitatory to inhibitory ratio, sparseness of connections and synaptic weights. This phase diagram can be used to decide the parameters of a model. The phase diagrams show that networks which are configured according to the common values have a good dynamic range in response to an impulse and their dynamic range is robust with respect to synaptic weights, and for some synaptic weights they oscillate at α or β frequencies, independent of external stimuli.
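
    A small sketch of how the "common values" quoted above (2% connection probability, 80% excitatory / 20% inhibitory) translate into a connectivity matrix for such a simulation; the weight magnitudes and the network size are illustrative assumptions.

    ```python
    import numpy as np

    def build_network(n=1000, p_connect=0.02, frac_excitatory=0.8,
                      w_exc=0.5, w_inh=-2.0, seed=0):
        """Random spiking-network connectivity with the 'common values':
        2% connection probability, 80/20 excitatory/inhibitory split."""
        rng = np.random.default_rng(seed)
        n_exc = int(frac_excitatory * n)
        sign = np.where(np.arange(n) < n_exc, w_exc, w_inh)   # row i = presynaptic neuron i
        mask = rng.random((n, n)) < p_connect
        np.fill_diagonal(mask, False)                         # no self-connections
        return mask * sign[:, None]

    W = build_network()
    print("mean out-degree:", W.astype(bool).sum(axis=1).mean())       # ~ 0.02 * 1000 = 20
    print("total E vs. I input:", W[W > 0].sum(), W[W < 0].sum())
    ```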

  3. Spiking Neural P Systems with Communication on Request.

    Science.gov (United States)

    Pan, Linqiang; Păun, Gheorghe; Zhang, Gexiang; Neri, Ferrante

    2017-12-01

    Spiking Neural P Systems are Neural System models characterized by the fact that each neuron mimics a biological cell and the communication between neurons is based on spikes. In the Spiking Neural P systems investigated so far, the application of evolution rules depends on the contents of a neuron (checked by means of a regular expression). In these P systems, a specified number of spikes are consumed and a specified number of spikes are produced, and then sent to each of the neurons linked by a synapse to the evolving neuron. In the present work, a novel communication strategy among neurons of Spiking Neural P Systems is proposed. In the resulting models, called Spiking Neural P Systems with Communication on Request, the spikes are requested from neighboring neurons, depending on the contents of the neuron (still checked by means of a regular expression). Unlike the traditional Spiking Neural P systems, no spikes are consumed or created: the spikes are only moved along synapses and replicated (when two or more neurons request the contents of the same neuron). The Spiking Neural P Systems with Communication on Request are proved to be computationally universal, that is, equivalent with Turing machines as long as two types of spikes are used. Following this work, further research questions are listed as open problems.

  4. Spiking modular neural networks: A neural network modeling approach for hydrological processes

    National Research Council Canada - National Science Library

    Kamban Parasuraman; Amin Elshorbagy; Sean K. Carey

    2006-01-01

    .... In this study, a novel neural network model called the spiking modular neural networks (SMNNs) is proposed. An SMNN consists of an input layer, a spiking layer, and an associator neural network layer...

  5. Effects of Spike Anticipation on the Spiking Dynamics of Neural Networks

    Directory of Open Access Journals (Sweden)

    Daniel de Santos-Sierra

    2015-11-01

    Full Text Available Synchronization is one of the central phenomena involved in information processing in living systems. It is known that the nervous system requires the coordinated activity of both local and distant neural populations. Such an interplay allows different information modalities to be merged into a whole, supporting high-level mental skills such as understanding, memory, abstraction, etc. Though the biological processes underlying synchronization in the brain are not fully understood, a variety of mechanisms supporting different types of synchronization have been reported at both the theoretical and experimental levels. One of the more intriguing of these phenomena is anticipating synchronization, which has been recently reported in a pair of unidirectionally coupled artificial neurons under simple conditions [Pyragas], where the slave neuron is able to anticipate in time the behaviour of the master one. In this paper we explore the effect of spike anticipation on the information processing performed by a neural network at the functional and structural levels. We show that the introduction of intermediary neurons in the network enhances spike anticipation and analyse how these variations in spike anticipation can significantly change the firing regime of the neural network according to its functional and structural properties. In addition we show that the interspike interval (ISI), one of the main features of the neural response associated with information coding, can be closely related to spike anticipation by each spike, and how synaptic plasticity can be modulated through that relationship. This study has been performed through numerical simulation of a coupled system of Hindmarsh-Rose neurons.

  6. Phase Diagram of Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamed Seyed-Allaei

    2015-03-01

    Full Text Available In computer simulations of spiking neural networks, often it is assumed that every two neurons of the network are connected with a probability of 2%, 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments and observations, but here, I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then I try to figure out what makes the common values desirable by nature. Networks which are configured according to the common values have the best dynamic range in response to an impulse and their dynamic range is more robust with respect to synaptic weights. In fact, evolution has favored networks with the best dynamic range. I present a phase diagram that shows the dynamic ranges of different networks of different parameters. This phase diagram gives an insight into the space of parameters -- excitatory to inhibitory ratio, sparseness of connections and synaptic weights. It may serve as a guideline to decide about the values of parameters in a simulation of a spiking neural network.

  7. Evolving Spiking Neural Networks for Control of Artificial Creatures

    Directory of Open Access Journals (Sweden)

    Arash Ahmadi

    2013-10-01

    Full Text Available To understand and analyze the behavior of complicated and intelligent organisms, scientists apply bio-inspired concepts including evolution and learning to mathematical models and analyses. Researchers utilize these perceptions in different applications, searching for improved methods and approaches for modern computational systems. This paper presents a genetic algorithm based evolution framework in which Spiking Neural Networks (SNNs) of artificial creatures are evolved for a higher chance of survival in a virtual environment. The artificial creatures are composed of randomly connected Izhikevich spiking reservoir neural networks using population activity rate coding. Inspired by biological neurons, the neuronal connections are considered with different axonal conduction delays. Simulation results show that the evolutionary algorithm has the capability to find or synthesize artificial creatures which can survive in the environment successfully.

  8. A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks.

    Science.gov (United States)

    Xu, Yan; Zeng, Xiaoqin; Han, Lixin; Yang, Jing

    2013-07-01

    We use a supervised multi-spike learning algorithm for spiking neural networks (SNNs) with temporal encoding to simulate the learning mechanism of biological neurons in which the SNN output spike trains are encoded by firing times. We first analyze why existing gradient-descent-based learning methods for SNNs have difficulty in achieving multi-spike learning. We then propose a new multi-spike learning method for SNNs based on gradient descent that solves the problems of error function construction and interference among multiple output spikes during learning. The method could be widely applied to single spiking neurons to learn desired output spike trains and to multilayer SNNs to solve classification problems. By overcoming learning interference among multiple spikes, our method has high learning accuracy when there are a relatively large number of output spikes in need of learning. We also develop an output encoding strategy with respect to multiple spikes for classification problems. This effectively improves the classification accuracy of multi-spike learning compared to that of single-spike learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); S.M. Bohte (Sander)

    2016-01-01

    textabstractBiological neurons communicate with a sparing exchange of pulses - spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks, using only so very few spikes to communicate. Building on

  10. A stimulus-dependent spike threshold is an optimal neural coder

    Directory of Open Access Journals (Sweden)

    Douglas L Jones

    2015-06-01

    Full Text Available A neural code based on sequences of spikes can consume a significant portion of the brain’s energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code.
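
    A compact sketch of the encode/decode loop described above: a leaky low-pass reconstruction of the spike train plays the role of the internal decoder, and a spike is emitted whenever the coding error reaches the threshold. The filter time constant, the threshold, and the test signal are illustrative, not fitted to the fish or rat data.

    ```python
    import numpy as np

    def threshold_coder(signal, dt=1e-3, tau=0.1, theta=0.05):
        """Emit a spike whenever the coding error (signal minus the low-pass-filtered
        spike train) reaches theta; the leaky filter acts as the internal decoder."""
        decoded = 0.0
        spikes, recon = [], np.zeros_like(signal)
        for t, s in enumerate(signal):
            decoded -= dt / tau * decoded          # leaky decay of the reconstruction
            if s - decoded >= theta:               # coding error hits the spike threshold
                spikes.append(t)
                decoded += theta                   # each spike adds a fixed quantum to the decoder
            recon[t] = decoded
        return np.array(spikes), recon

    t = np.arange(0.0, 1.0, 1e-3)
    signal = 0.5 + 0.3 * np.sin(2 * np.pi * 3.0 * t)    # slowly varying positive signal
    spikes, recon = threshold_coder(signal)
    print("spikes:", len(spikes), " mean |error|:", np.abs(signal - recon).mean())
    ```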

  11. Robust spike classification based on frequency domain neural waveform features.

    Science.gov (United States)

    Yang, Chenhui; Yuan, Yuan; Si, Jennie

    2013-12-01

    We introduce a new spike classification algorithm based on frequency domain features of the spike snippets. The goal for the algorithm is to provide high classification accuracy, low false misclassification, ease of implementation, robustness to signal degradation, and objectivity in classification outcomes. In this paper, we propose a spike classification algorithm based on frequency domain features (CFDF). It makes use of frequency domain contents of the recorded neural waveforms for spike classification. The self-organizing map (SOM) is used as a tool to determine the cluster number intuitively and directly by viewing the SOM output map. After that, spike classification can be easily performed using clustering algorithms such as the k-Means. In conjunction with our previously developed multiscale correlation of wavelet coefficient (MCWC) spike detection algorithm, we show that the MCWC and CFDF detection and classification system is robust when tested on several sets of artificial and real neural waveforms. The CFDF is comparable to or outperforms some popular automatic spike classification algorithms with artificial and real neural data. The detection and classification of neural action potentials or neural spikes is an important step in single-unit-based neuroscientific studies and applications. After the detection of neural snippets potentially containing neural spikes, a robust classification algorithm is applied for the analysis of the snippets to (1) extract similar waveforms into one class for them to be considered coming from one unit, and to (2) remove noise snippets if they do not contain any features of an action potential. Usually, a snippet is a small 2 or 3 ms segment of the recorded waveform, and differences in neural action potentials can be subtle from one unit to another. Therefore, a robust, high performance classification system like the CFDF is necessary. In addition, the proposed algorithm does not require any assumptions on statistical
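
    A minimal sketch of the frequency-domain classification idea: take the FFT magnitude of each snippet as the feature vector and cluster. For brevity it uses k-means with a fixed cluster count instead of the SOM-guided step described above, and the synthetic snippets and scikit-learn dependency are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)

    # Synthetic snippets: two spike shapes of different widths plus noise (2 ms at 16 kHz = 32 samples).
    t = np.arange(32)
    shape_a = -np.exp(-((t - 10) ** 2) / 8.0)
    shape_b = -0.8 * np.exp(-((t - 10) ** 2) / 30.0)
    snippets = np.vstack([shape_a + 0.1 * rng.standard_normal((100, 32)),
                          shape_b + 0.1 * rng.standard_normal((100, 32))])

    # Frequency-domain features: magnitude of the one-sided FFT of each snippet.
    features = np.abs(np.fft.rfft(snippets, axis=1))

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    print("cluster sizes:", np.bincount(labels))        # ideally close to [100, 100]
    ```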

  12. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding

    National Research Council Canada - National Science Library

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale...

  13. Enhancement of Spike-Timing-Dependent Plasticity in Spiking Neural Systems with Noise.

    Science.gov (United States)

    Nobukawa, Sou; Nishimura, Haruhiko

    2016-08-01

    Synaptic plasticity is widely recognized to support adaptable information processing in the brain. Spike-timing-dependent plasticity, one subtype of plasticity, can lead to synchronous spike propagation with temporal spiking coding information. Recently, it was reported that in a noisy environment, like the actual brain, the spike-timing-dependent plasticity may be made efficient by the effect of stochastic resonance. In the stochastic resonance, the presence of noise helps a nonlinear system in amplifying a weak (under barrier) signal. However, previous studies have ignored the full variety of spiking patterns and many relevant factors in neural dynamics. Thus, in order to prove the physiological possibility for the enhancement of spike-timing-dependent plasticity by stochastic resonance, it is necessary to demonstrate that this stochastic resonance arises in realistic cortical neural systems. In this study, we evaluate this stochastic resonance phenomenon in the realistic cortical neural system described by the Izhikevich neuron model and compare the characteristics of typical spiking patterns of regular spiking, intrinsically bursting and chattering experimentally observed in the cortex.
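
    For reference, the sketch below shows the standard pair-based STDP window (exponential potentiation and depression) that such studies build on, applied to jittered pre/post spike pairings as a stand-in for a noisy environment; the amplitudes, time constants, and jitter level are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
        """Pair-based STDP window: potentiate when the presynaptic spike precedes the
        postsynaptic one (delta_t = t_post - t_pre > 0), depress otherwise."""
        delta_t = np.asarray(delta_t, dtype=float)
        return np.where(delta_t > 0,
                        a_plus * np.exp(-delta_t / tau_plus),
                        -a_minus * np.exp(delta_t / tau_minus))

    # Net weight change accumulated over noisy, jittered pairings (pre ~5 ms before post on average).
    rng = np.random.default_rng(5)
    dts = 0.005 + 0.003 * rng.standard_normal(1000)      # jitter mimics a noisy environment
    print("net weight change:", stdp_dw(dts).sum())
    ```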

  14. Spiking Neural Networks based on OxRAM Synapses for Real-time Unsupervised Spike Sorting

    Directory of Open Access Journals (Sweden)

    Thilo Werner

    2016-11-01

    Full Text Available In this paper, we present an alternative approach to perform spike sorting of complex brain signals based on spiking neural networks (SNN). The proposed architecture is suitable for hardware implementation by using RRAM technology for the implementation of synapses whose low latency (< 1 μs) enables real-time spike sorting. This offers promising advantages over conventional spike sorting techniques for brain-computer interface and neural prosthesis applications. Moreover, the ultralow power consumption of the RRAM synapses of the spiking neural network (nW range) may enable the design of autonomous implantable devices for rehabilitation purposes. We demonstrate an original methodology to use Oxide-based RRAM (OxRAM) as easy-to-program and low-power (< 75 pJ) synapses. Synaptic weights are modulated through the application of an online learning strategy inspired by biological Spike Timing Dependent Plasticity. Real spiking data have been recorded both intra- and extracellularly from an in-vitro preparation of the Crayfish sensory-motor system and used for validation of the proposed OxRAM based SNN. This artificial SNN is able to identify, learn, recognize and distinguish between different spike shapes in the input signal with a recognition rate of about 90% without any supervision.

  15. Neuronal spike sorting based on radial basis function neural networks

    Directory of Open Access Journals (Sweden)

    Taghavi Kani M

    2011-02-01

    Full Text Available Background: Studying the behavior of a society of neurons, extracting the communication mechanisms of brain with other tissues, finding treatment for some nervous system diseases and designing neuroprosthetic devices, require an algorithm to sort neural spikes automatically. However, sorting neural spikes is a challenging task because of the low signal to noise ratio (SNR) of the spikes. The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes that are emitted from a specific region of the nervous system. Methods: The spike sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. Then, we chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF) neural network to sort the spikes. In most spike sorting devices, these signals are not linearly discriminative. In order to solve this problem, the aforesaid RBF neural network was used. Results: After the learning process, our proposed algorithm classified any arbitrary spike. The obtained results showed that even though the proposed Radial Basis Spike Sorter (RBSS) reached the same error as the previous methods, the computational costs were much lower compared to other algorithms. Moreover, the competitive points of the proposed algorithm were its good speed and low computational complexity. Conclusion: Regarding the results of this study, the proposed algorithm seems to serve the purpose of procedures that require real-time processing and spike sorting.

  16. Sparse Data Analysis Strategy for Neural Spike Classification

    Directory of Open Access Journals (Sweden)

    Vincent Vigneron

    2014-01-01

    Full Text Available Many of the multichannel extracellular recordings of neural activity consist of attempting to sort spikes on the basis of shared characteristics with some feature detection techniques. Then spikes can be sorted into distinct clusters. There are in general two main statistical issues: firstly, spike sorting can result in well-sorted units, but by no means can one be sure that one is dealing with single units, due to the number of neurons adjacent to the recording electrode. Secondly, the waveform dimensionality is reduced to a small subset of discriminating features. This dimension-reduction effort was introduced as an aid to visualization and manual clustering, but also to reduce the computational complexity in automatic classification. We introduce a metric based on common neighbourhood to introduce sparsity in the dataset and separate data into more homogeneous subgroups. The approach is particularly well suited for clustering when the individual clusters are elongated (that is, nonspherical). In addition, it does not need to select the number of clusters, it is very efficient for visualizing clusters in a dataset, it is robust to noise, it can handle imbalanced data, and it is fully automatic and deterministic.

  17. Financial time series prediction using spiking neural networks.

    Directory of Open Access Journals (Sweden)

    David Reid

    Full Text Available In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.

  18. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan

    2015-04-01

    Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.

  19. Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework.

    Science.gov (United States)

    Li, Lin; Park, Il Memming; Brockmeier, Austin; Chen, Badong; Seth, Sohan; Francis, Joseph T; Sanchez, Justin C; Príncipe, José C

    2013-07-01

    The precise control of spiking in a population of neurons via applied electrical stimulation is a challenge due to the sparseness of spiking responses and neural system plasticity. We pose neural stimulation as a system control problem where the system input is a multidimensional time-varying signal representing the stimulation, and the output is a set of spike trains; the goal is to drive the output such that the elicited population spiking activity is as close as possible to some desired activity, where closeness is defined by a cost function. If the neural system can be described by a time-invariant (homogeneous) model, then offline procedures can be used to derive the control procedure; however, for arbitrary neural systems this is not tractable. Furthermore, standard control methodologies are not suited to directly operate on spike trains that represent both the target and elicited system response. In this paper, we propose a multiple-input multiple-output (MIMO) adaptive inverse control scheme that operates on spike trains in a reproducing kernel Hilbert space (RKHS). The control scheme uses an inverse controller to approximate the inverse of the neural circuit. The proposed control system takes advantage of the precise timing of the neural events by using a Schoenberg kernel defined directly in the space of spike trains. The Schoenberg kernel maps the spike train to an RKHS and allows linear algorithm to control the nonlinear neural system without the danger of converging to local minima. During operation, the adaptation of the controller minimizes a difference defined in the spike train RKHS between the system and the target response and keeps the inverse controller close to the inverse of the current neural circuit, which enables adapting to neural perturbations. The results on a realistic synthetic neural circuit show that the inverse controller based on the Schoenberg kernel outperforms the decoding accuracy of other models based on the conventional rate

  20. Testing of information condensation in a model reverberating spiking neural network.

    Science.gov (United States)

    Vidybida, Alexander

    2011-06-01

    Information about the external world is delivered to the brain in the form of spike trains structured in time. During further processing in higher areas, information is subjected to a certain condensation process, which results in the formation of abstract conceptual images of the external world, apparently represented as certain uniform spiking activity partially independent of the details of the input spike trains. A possible physical mechanism of condensation at the level of an individual neuron was discussed recently. In a reverberating spiking neural network, due to this mechanism the dynamics should settle down to the same uniform/periodic activity in response to a set of various inputs. Since the same periodic activity may correspond to different input spike trains, we interpret this as a possible candidate for an information condensation mechanism in a network. Our purpose is to test this possibility in a network model consisting of five fully connected neurons, particularly the influence of the geometric size of the network on its ability to condense information. Dynamics of 20 spiking neural networks of different geometric sizes are modelled by means of computer simulation. Each network was propelled into reverberating dynamics by applying various initial input spike trains. We run the dynamics until the activity becomes periodic. Shannon's formula is used to calculate the amount of information in any input spike train and in any periodic state found. As a result, we obtain an explicit estimate of the degree of information condensation in the networks, and conclude that it depends strongly on the net's geometric size.

  1. Linking structure and activity in nonlinear spiking networks.

    Directory of Open Access Journals (Sweden)

    Gabriel Koch Ocker

    2017-06-01

    Full Text Available Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks' spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities, including those of different cell types, combine with connectivity to shape population activity and function.

  2. Efficiently passing messages in distributed spiking neural network simulation.

    Science.gov (United States)

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
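
    As a minimal sketch of the kind of exchange being benchmarked (using a generic mpi4py collective, not the specific MVAPICH mechanisms compared in the paper), each rank below reports the indices of the neurons it fired during the current timestep and gathers the global list; the neuron count and firing probability are arbitrary.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      def exchange_spikes(local_spike_ids):
          # gather the spike indices fired on every rank this timestep;
          # allgather of Python lists is the simplest (not the fastest) option
          all_spikes = comm.allgather(list(local_spike_ids))
          return [sid for per_rank in all_spikes for sid in per_rank]

      n_local = 1000                                         # neurons owned by this rank
      local_fired = np.where(np.random.rand(n_local) < 0.01)[0] + rank * n_local
      global_fired = exchange_spikes(local_fired)

    The script only does useful work under an MPI launcher, e.g. mpirun -n 4 python script.py.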

  3. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Lars Buesing

    2011-11-01

    Full Text Available The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.

  4. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.

  5. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduces training efficiency significantly. For training the hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as also demonstrated by the comprehensive experimental results in this paper.

  6. Dual roles for spike signaling in cortical neural populations

    Directory of Open Access Journals (Sweden)

    Dana eBallard

    2011-06-01

    Full Text Available A prominent feature of signaling in cortical neurons is that of randomness in the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate repeatedly have been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ frequency range multi-cell action potential correlations, together with spike timing dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models, such as orientation tuning, post-stimulus histograms and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding.
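
    As a toy illustration of the Poisson statistics referred to above (not the authors' model), drawing exponentially distributed inter-spike intervals produces a homogeneous Poisson train whose interval histogram is exponential and whose spike-count Fano factor is close to one; the rate and duration below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      rate, duration = 20.0, 200.0                       # Hz, seconds (arbitrary)
      isis = rng.exponential(1.0 / rate, size=int(rate * duration * 2))
      spikes = np.cumsum(isis)
      spikes = spikes[spikes < duration]

      counts, _ = np.histogram(spikes, bins=np.arange(0.0, duration + 1.0, 1.0))
      print("mean rate (Hz):", spikes.size / duration)
      print("Fano factor over 1 s windows:", counts.var() / counts.mean())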

  7. Spiking neural network-based control chart pattern recognition

    Directory of Open Access Journals (Sweden)

    Medhat H.A. Awadalla

    2012-03-01

    Full Text Available Due to increasing competition, consumers have become more critical in choosing products, and product quality has become more important. Statistical Process Control (SPC) is usually used to improve the quality of products. Control charting plays the most important role in SPC. Control charts help to monitor the behavior of the process to determine whether it is stable or not. Unnatural patterns in control charts mean that there are some unnatural causes for variations in SPC. Spiking neural networks (SNNs) are the third generation of artificial neural networks that consider time as an important feature for information representation and processing. In this paper, a spiking neural network architecture is proposed to be used for control charts pattern recognition (CCPR). Furthermore, enhancements to the SpikeProp learning algorithm are proposed. These enhancements provide additional learning rules for the synaptic delays, time constants and for the neuron thresholds. Simulated experiments have been conducted and the achieved results show a remarkable improvement in the overall performance compared with artificial neural networks.

  8. SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.

    Science.gov (United States)

    Jimenez-Romero, Cristian; Johnson, Jeffrey

    2017-01-01

    The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-and-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent and (4) programming the appropriate interface in the robot or agent to use the neural controller. The accomplishment of the above-mentioned tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using the multi-agent simulation and the programming environment Netlogo (educational software that simplifies the study and experimentation of complex systems). The engine proposed and implemented in Netlogo for the simulation of a functional model of SNN is a simplification of integrate and fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.

  9. Spike detection from noisy neural data in linear-probe recordings.

    Science.gov (United States)

    Takekawa, Takashi; Ota, Keisuke; Murayama, Masanori; Fukai, Tomoki

    2014-06-01

    Simultaneous recordings of multiple neuron activities with multi-channel extracellular electrodes are widely used for studying information processing by the brain's neural circuits. In this method, the recorded signals containing the spike events of a number of adjacent or distant neurons must be correctly sorted into spike trains of individual neurons, and a variety of methods have been proposed for this spike sorting. However, spike sorting is computationally difficult because the recorded signals are often contaminated by biological noise. Here, we propose a novel method for spike detection, which is the first stage of spike sorting and hence crucially determines overall sorting performance. Our method utilizes a model of extracellular recording data that takes into account variations in spike waveforms, such as the widths and amplitudes of spikes, by detecting the peaks of band-pass-filtered data. We show that the new method significantly improves the cost-performance of multi-channel electrode recordings by increasing the number of cleanly sorted neurons. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
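
    The sketch below shows the generic first stage the abstract refers to, namely band-pass filtering followed by peak detection against a robust noise estimate; it is a simplified stand-in rather than the proposed model-based detector, and the filter band, threshold factor and random input trace are assumptions.

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      def detect_spikes(raw, fs, band=(300.0, 3000.0), k=4.0):
          # band-pass the raw trace, estimate the noise level robustly,
          # then detect negative-going peaks that exceed k times the noise
          b, a = butter(3, [band[0] / (fs / 2.0), band[1] / (fs / 2.0)], btype="band")
          x = filtfilt(b, a, raw)
          sigma = np.median(np.abs(x)) / 0.6745
          peaks, _ = find_peaks(-x, height=k * sigma)
          return peaks / fs, x

      fs = 20000.0
      raw = np.random.randn(int(fs))                     # stand-in for one second of recording
      spike_times, filtered = detect_spikes(raw, fs)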

  10. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.

    Science.gov (United States)

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one of which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), while the other relies on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.

  11. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently, there has been continuously increasing interest in building up computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). The biologically inspired self-organized neural networks with neural plasticity can enhance the capability of computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multi-neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different forms of neural plasticity rules and understanding how structures and dynamics of neural networks shape the computational performance. In this paper, we propose a novel approach to develop the models of LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
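
    A schematic sketch of the two plasticity rules in combination (assumed standard forms, not the paper's exact update equations): pair-based STDP adjusts an excitatory weight from the relative spike timing, while an intrinsic-plasticity step nudges each neuron's threshold toward a target firing rate; all constants are placeholders.

      import numpy as np

      def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20e-3, w_max=1.0):
          # dt = t_post - t_pre in seconds; potentiate if the presynaptic spike precedes the postsynaptic one
          if dt > 0:
              w += a_plus * np.exp(-dt / tau)
          else:
              w -= a_minus * np.exp(dt / tau)
          return float(np.clip(w, 0.0, w_max))

      def ip_update(threshold, rate, target_rate=5.0, eta=1e-3):
          # raise the threshold when the neuron fires above target, lower it otherwise
          return threshold + eta * (rate - target_rate)

      w = stdp_update(0.5, dt=8e-3)          # pre leads post by 8 ms -> potentiation
      theta = ip_update(1.0, rate=7.2)       # firing above the 5 Hz target -> higher threshold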

  12. Classification of epileptiform and wicket spike of EEG pattern using backpropagation neural network

    Science.gov (United States)

    Puspita, Juni Wijayanti; Jaya, Agus Indra; Gunadharma, Suryani

    2017-03-01

    Epilepsy is characterized by recurrent seizures that result from permanent brain abnormalities. One of the tools to support the diagnosis of epilepsy is the electroencephalograph (EEG), which records the electrical activity of the brain. Abnormal EEG patterns in epilepsy patients consist of Spike and Sharp waves. Besides these two waves, there is a normal pattern that is sometimes misinterpreted as epileptiform by the electroencephalographer (EEGer), namely the Wicket Spike. The main difference between the three waves lies in their time duration, which is related to their frequency. In this study, we propose a method to classify an EEG wave into the Sharp wave, Spike wave or Wicket spike group using a Backpropagation Neural Network based on the frequency and amplitude of each wave. The results show that the proposed method can classify the three groups of waves with good accuracy.
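
    A minimal sketch of the classification step, assuming each wave has already been reduced to a (frequency, amplitude) feature pair and using scikit-learn's backpropagation-trained multilayer perceptron; the class means and spreads below are arbitrary placeholders, not clinical values.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      means = {"Sharp": (8.0, 70.0), "Spike": (25.0, 90.0), "Wicket": (9.0, 50.0)}   # placeholders
      X = np.vstack([rng.normal(m, (1.5, 10.0), size=(100, 2)) for m in means.values()])
      y = np.repeat(list(means.keys()), 100)

      clf = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
      clf.fit(X, y)
      print(clf.predict([[24.0, 85.0]]))     # expected: 'Spike'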

  13. A 41 μW real-time adaptive neural spike classifier

    NARCIS (Netherlands)

    Zjajo, A.; van Leuken, T.G.R.M.

    2016-01-01

    Robust, power- and area-efficient spike classifier, capable of accurate identification of the neural spikes even for low SNR, is a prerequisite for the real-time, implantable, closed-loop brain-machine interface. In this paper, we propose an easily-scalable, 128-channel, programmable, neural spike

  14. Fixed latency on-chip interconnect for hardware spiking neural network architectures

    NARCIS (Netherlands)

    Pande, Sandeep; Morgan, Fearghal; Smit, Gerardus Johannes Maria; Bruintjes, Tom; Rutgers, J.H.; Cawley, Seamus; Harkin, Jim; McDaid, Liam

    Information in a Spiking Neural Network (SNN) is encoded as the relative timing between spikes. Distortion in spike timings can impact the accuracy of SNN operation by modifying the precise firing time of neurons within the SNN. Maintaining the integrity of spike timings is crucial for reliable

  15. Simultaneous Measurement of Neural Spike Recordings and Multi-Photon Calcium Imaging in Neuroblastoma Cells

    Directory of Open Access Journals (Sweden)

    Jeehyun Kim

    2012-11-01

    Full Text Available This paper proposes the design and implementation of a micro-electrode array (MEA) for neuroblastoma cell culturing. It also explains the implementation of a multi-photon microscope (MPM) customized for neuroblastoma cell excitation and imaging under ambient light. Electrical signals and fluorescence images were simultaneously acquired from the neuroblastoma cells on the MEA. MPM calcium images of the cultured neuroblastoma cells on the MEA are presented, and the neural activity was also acquired through the MEA recording. A calcium green-1 (CG-1) dextran conjugate of 10,000 D molecular weight was used in this experiment for calcium imaging. This study also evaluated the calcium oscillations and neural spike recording of neuroblastoma cells in an epileptic condition. Based on our observation of neural spikes in neuroblastoma cells with our proposed imaging modality, we report that neuroblastoma cells can be an important model for epileptic activity studies.

  16. Spike neural models (part I: The Hodgkin-Huxley model

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2017-05-01

    Full Text Available Artificial neural networks, or ANNs, have evolved considerably since their inception back in the 1940s. But no matter the changes, one of the most important components of neural networks is still the node, which represents the neuron. Within spiking neural networks, the node is especially important because it contains the functions and properties of neurons that are necessary for their network. One important aspect of neurons is the ionic flow which produces action potentials, or spikes. Forces of diffusion and electrostatic pressure work together with the physical properties of the cell to move ions around, changing the cell membrane potential, which ultimately produces the action potential. This tutorial reviews the Hodgkin-Huxley model and shows how it simulates the ionic flow of the giant squid axon via four differential equations. The model is implemented in Matlab using Euler's Method to approximate the differential equations. By using Euler's method, an extra parameter is created, the time step. This new parameter needs to be carefully considered or the results of the node may be impaired.
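
    A compact Python transcription of the approach the tutorial describes (the tutorial itself uses Matlab): the four Hodgkin-Huxley equations integrated with Euler's method, using the standard squid-axon parameters referenced to a -65 mV resting potential; the input current, duration and time step are illustrative choices.

      import numpy as np

      def hh_euler(i_ext=10.0, t_total=50.0, dt=0.01):
          g_na, g_k, g_l = 120.0, 36.0, 0.3            # maximal conductances (mS/cm^2)
          e_na, e_k, e_l = 50.0, -77.0, -54.387        # reversal potentials (mV)
          c_m = 1.0                                    # membrane capacitance (uF/cm^2)

          a_m = lambda v: 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
          b_m = lambda v: 4.0 * np.exp(-(v + 65.0) / 18.0)
          a_h = lambda v: 0.07 * np.exp(-(v + 65.0) / 20.0)
          b_h = lambda v: 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
          a_n = lambda v: 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
          b_n = lambda v: 0.125 * np.exp(-(v + 65.0) / 80.0)

          v, m, h, n = -65.0, 0.05, 0.6, 0.32
          trace = np.empty(int(t_total / dt))
          for i in range(trace.size):
              i_na = g_na * m**3 * h * (v - e_na)
              i_k = g_k * n**4 * (v - e_k)
              i_l = g_l * (v - e_l)
              # Euler step: the time step must stay small or the solution blows up
              v += dt * (i_ext - i_na - i_k - i_l) / c_m
              m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
              h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
              n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
              trace[i] = v
          return trace

      v_trace = hh_euler()      # a constant 10 uA/cm^2 input produces repetitive spiking

    Halving or doubling dt and comparing the resulting traces is a quick way to see the sensitivity to the time step that the tutorial warns about.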

  17. Topological analysis of chaos in neural spike train bursts

    Energy Technology Data Exchange (ETDEWEB)

    Gilmore, R. (Department of Physics and Atmospheric Science, Drexel University, Philadelphia, Pennsylvania 19104 (United States)); Pei, X.; Moss, F. (Center for Neurodynamics, University of Missouri, St. Louis, Missouri 63121 (United States))

    1999-09-01

    We show how a topological model which describes the stretching and squeezing mechanisms responsible for creating chaotic behavior can be extracted from the neural spike train data. The mechanism we have identified is the same one ("gâteau roulé," or jelly-roll) which has previously been identified in the Duffing oscillator [Gilmore and McCallum, Phys. Rev. E 51, 935 (1995)] and in a YAG laser [Boulant et al., Phys. Rev. E 55, 5082 (1997)]. © 1999 American Institute of Physics.

  18. Biological modelling of a computational spiking neural network with neuronal avalanches

    Science.gov (United States)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.

  19. Different propagation speeds of recalled sequences in plastic spiking neural networks

    Science.gov (United States)

    Huang, Xuhui; Zheng, Zhigang; Hu, Gang; Wu, Si; Rasch, Malte J.

    2015-03-01

    Neural networks can generate spatiotemporal patterns of spike activity. Sequential activity learning and retrieval have been observed in many brain areas and are crucial, e.g., for coding episodic memory in the hippocampus or for generating temporal patterns during song production in birds. In a recent study, a sequential activity pattern was directly entrained onto the neural activity of the primary visual cortex (V1) of rats and subsequently successfully recalled by a local and transient trigger. It was observed that the speed of activity propagation in coordinates of the retinotopically organized neural tissue was constant during retrieval regardless of how the speed of the light stimulation sweeping across the visual field during training was varied. It is well known that spike-timing dependent plasticity (STDP) is a potential mechanism for embedding temporal sequences into neural network activity. How training and retrieval speeds relate to each other and how network and learning parameters influence retrieval speeds, however, is not well described. We here theoretically analyze sequential activity learning and retrieval in a recurrent neural network with realistic synaptic short-term dynamics and STDP. Testing multiple STDP rules, we confirm that sequence learning can be achieved by STDP. However, we found that a multiplicative nearest-neighbor (NN) weight update rule generated weight distributions and recall activities that best matched the experiments in V1. Using network simulations and mean-field analysis, we further investigated the learning mechanisms and the influence of network parameters on recall speeds. Our analysis suggests that a multiplicative STDP rule with dominant NN spike interaction might be implemented in V1 since recall speed was almost constant in an NMDA-dominant regime. Interestingly, in an AMPA-dominant regime, neural circuits might exhibit recall speeds that instead follow the change in stimulus speeds. This prediction could be tested in

  20. Event-driven processing for hardware-efficient neural spike sorting.

    Science.gov (United States)

    Liu, Yan; L Pereira, João; Constandinou, Timothy

    2017-10-05

    The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope for large-scale integration of neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can here provide a new efficient means for hardware implementation that is completely activity dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. We first compare signals (using synthetic neural datasets) that are encoded using this technique against conventional sampling. It is observed that considerably lower data rates are achievable when utilising 7 bits or less to represent the signals, whilst maintaining the signal fidelity. We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. The proposed method is implemented in a low power FPGA platform to demonstrate the hardware viability. Results obtained using both MATLAB and reconfigurable logic (FPGA) hardware indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods, whilst also requiring relatively low hardware cost. Creative Commons Attribution license.
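
    As a rough sketch of the encoding idea (one common variant, not necessarily the exact scheme used in the paper), the function below emits a signed event whenever the input moves a full quantisation step away from the last emitted level; the step size and test signal are arbitrary.

      import numpy as np

      def level_crossing_encode(x, delta):
          # emit (sample_index, +1/-1) events as the signal crosses successive levels
          events = []
          last = x[0]
          for i, v in enumerate(x[1:], start=1):
              while v - last >= delta:
                  last += delta
                  events.append((i, +1))
              while last - v >= delta:
                  last -= delta
                  events.append((i, -1))
          return events

      t = np.linspace(0.0, 1.0, 20000)
      signal = np.sin(2.0 * np.pi * 5.0 * t) + 0.05 * np.random.randn(t.size)
      events = level_crossing_encode(signal, delta=0.1)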

  1. Serial correlation in neural spike trains: experimental evidence, stochastic modeling, and single neuron variability.

    Science.gov (United States)

    Farkhooi, Farzad; Strube-Bloss, Martin F; Nawrot, Martin P

    2009-02-01

    The activity of spiking neurons is frequently described by renewal point process models that assume the statistical independence and identical distribution of the intervals between action potentials. However, the assumption of independent intervals must be questioned for many different types of neurons. We review experimental studies that reported the feature of a negative serial correlation of neighboring intervals, commonly observed in neurons in the sensory periphery as well as in central neurons, notably in the mammalian cortex. In our experiments we observed the same short-lived negative serial dependence of intervals in the spontaneous activity of mushroom body extrinsic neurons in the honeybee. To model serial interval correlations of arbitrary lags, we suggest a family of autoregressive point processes. Its marginal interval distribution is described by the generalized gamma model, which includes as special cases the log-normal and gamma distributions, which have been widely used to characterize regular spiking neurons. In numeric simulations we investigated how serial correlation affects the variance of the neural spike count. We show that the experimentally confirmed negative correlation reduces single-neuron variability, as quantified by the Fano factor, by up to 50%, which favors the transmission of a rate code. We argue that the feature of a negative serial correlation is likely to be common to the class of spike-frequency-adapting neurons and that it might have been largely overlooked in extracellular single-unit recordings due to spike sorting errors.
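
    An illustrative check of the effect described above (not the paper's analysis): the snippet computes the lag-1 serial correlation of inter-spike intervals and compares the spike-count variance of the correlated train with that of its shuffled, renewal-like surrogate; the way negative dependence is imposed here is a crude placeholder.

      import numpy as np

      def lag1_corr(x):
          return float(np.corrcoef(x[:-1], x[1:])[0, 1])

      def count_variance(isis, window=1.0):
          spikes = np.cumsum(isis)
          counts, _ = np.histogram(spikes, np.arange(0.0, spikes[-1], window))
          return float(np.var(counts))

      rng = np.random.default_rng(0)
      base = rng.gamma(2.0, 0.05, size=20000)
      neg = base.copy()
      neg[1::2] = 2.0 * base.mean() - base[:-1:2]    # crude anti-correlation of neighbouring intervals
      neg = np.clip(neg, 1e-3, None)

      print("lag-1 ISI correlation:", lag1_corr(neg))
      print("count variance (correlated):", count_variance(neg))
      print("count variance (shuffled):  ", count_variance(rng.permutation(neg)))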

  2. Serial correlation in neural spike trains: Experimental evidence, stochastic modeling, and single neuron variability

    Science.gov (United States)

    Farkhooi, Farzad; Strube-Bloss, Martin F.; Nawrot, Martin P.

    2009-02-01

    The activity of spiking neurons is frequently described by renewal point process models that assume the statistical independence and identical distribution of the intervals between action potentials. However, the assumption of independent intervals must be questioned for many different types of neurons. We review experimental studies that reported the feature of a negative serial correlation of neighboring intervals, commonly observed in neurons in the sensory periphery as well as in central neurons, notably in the mammalian cortex. In our experiments we observed the same short-lived negative serial dependence of intervals in the spontaneous activity of mushroom body extrinsic neurons in the honeybee. To model serial interval correlations of arbitrary lags, we suggest a family of autoregressive point processes. Its marginal interval distribution is described by the generalized gamma model, which includes as special cases the log-normal and gamma distributions, which have been widely used to characterize regular spiking neurons. In numeric simulations we investigated how serial correlation affects the variance of the neural spike count. We show that the experimentally confirmed negative correlation reduces single-neuron variability, as quantified by the Fano factor, by up to 50%, which favors the transmission of a rate code. We argue that the feature of a negative serial correlation is likely to be common to the class of spike-frequency-adapting neurons and that it might have been largely overlooked in extracellular single-unit recordings due to spike sorting errors.

  3. Brian: a simulator for spiking neural networks in Python

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2008-11-01

    Full Text Available Brian is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.
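
    A minimal example in the same spirit, written for the current Brian2 API (which differs in detail from the release described above): a small group of leaky integrate-and-fire neurons driven by a constant input, with the model given as a plain equation string.

      from brian2 import NeuronGroup, SpikeMonitor, run, ms

      eqs = 'dv/dt = (1.1 - v) / (10*ms) : 1'
      group = NeuronGroup(100, eqs, threshold='v > 1', reset='v = 0', method='exact')
      monitor = SpikeMonitor(group)
      run(100 * ms)
      print(monitor.num_spikes)

    Because the model is an ordinary equation string, swapping in a different neuron model amounts to editing one line, which is the flexibility the abstract emphasises.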

  4. SPANNER: A Self-Repairing Spiking Neural Network Hardware Architecture.

    Science.gov (United States)

    Liu, Junxiu; Harkin, Jim; Maguire, Liam P; McDaid, Liam J; Wade, John J

    2017-03-06

    Recent research has shown that the astrocyte, a type of glial cell, underpins a self-repair mechanism in the human brain, in which spiking neurons provide direct and indirect feedback to presynaptic terminals. This feedback modulates the synaptic transmission probability of release (PR). When synaptic faults occur, the neuron becomes silent or nearly silent due to the low PR of its synapses; the PRs of the remaining healthy synapses are then increased by the indirect feedback from the astrocyte cell. In this paper, a novel hardware architecture of Self-rePAiring spiking Neural NEtwoRk (SPANNER) is proposed, which mimics this self-repairing capability in the human brain. This paper demonstrates that the hardware can self-detect and self-repair synaptic faults without the conventional components for fault detection and repair. Experimental results show that SPANNER can maintain the system performance with fault densities of up to 40%, and more importantly SPANNER has only a 20% performance degradation when the self-repairing architecture is significantly damaged at a fault density of 80%.

  5. Knowledge extraction from evolving spiking neural networks with rank order population coding.

    Science.gov (United States)

    Soltic, Snjezana; Kasabov, Nikola

    2010-12-01

    This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionally small amount of research is centered on the issue of knowledge extraction from spiking neural networks which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that a high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.

  6. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size.

    Directory of Open Access Journals (Sweden)

    Tilo Schwalger

    2017-04-01

    Full Text Available Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.

  7. A case for spiking neural network simulation based on configurable multiple-FPGA systems.

    Science.gov (United States)

    Yang, Shufan; Wu, Qiang; Li, Renfa

    2011-09-01

    Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Simulation of spiking neural networks in software is unable to rapidly generate output spikes for large-scale neural networks. An alternative approach, hardware implementation of such systems, provides the possibility to generate independent spikes precisely and to output spike waves simultaneously in real time, under the premise that the spiking neural network can take full advantage of the inherent parallelism of hardware. We introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation in this work. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software so that it might allow neuroscientists to put together sophisticated computation experiments of their own model. A feed-forward hierarchy network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity of visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger scale models based on this framework can be used to replicate the actual architecture in visual cortex, leading to more detailed predictions and insights into visual perception phenomena.

  8. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.

    Science.gov (United States)

    Walter, Florian; Röhrbein, Florian; Knoll, Alois

    2015-12-01

    The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike Von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview over selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Neural cache: a low-power online digital spike-sorting architecture.

    Science.gov (United States)

    Peng, Chung-Ching; Sabharwal, Pawan; Bashirullah, Rizwan

    2008-01-01

    Transmitting large amounts of data sensed by multi-electrode array devices is widely considered to be a challenging problem in designing implantable neural recording systems. Spike sorting is an important step to reducing the data bandwidth before wireless data transmission. The feasibility of spike sorting algorithms in scaled CMOS technologies, which typically operate on low frequency neural spikes, is determined largely by its power consumption, a dominant portion of which is leakage power. Our goal is to explore energy saving architectures for online spike sorting without sacrificing performance. In the face of non-stationary neural data, training is not always a feasible option. We present a low-power architecture for real-time online spike sorting that does not require any training period and has the capability to quickly respond to the changing spike shapes.

  10. Weighted spiking neural P systems with structural plasticity working in sequential mode based on maximum spike number

    Science.gov (United States)

    Sun, Mingming; Qu, Jianhua

    2017-10-01

    Spiking neural P systems (SNP systems, in short) are a group of parallel and distributed computing devices inspired by the function and structure of spiking neurons. Recently, a new variant of SNP systems, called SNP systems with structural plasticity (SNPSP systems, in short), was proposed. In SNPSP systems, neurons can use plasticity rules to create and delete synapses. In this work, we consider restrictions that impose sequentiality on SNPSP systems: (1) the neuron with the maximum number of spikes is chosen to fire; (2) weighted synapses are used. Specifically, we investigate the computational power of weighted SNPSP systems working in the sequential mode based on maximum spike number (WSNPSPM systems, in short) and prove that SNPSP systems with these new restrictions are universal as generating devices.

  11. Spikes not slots: noise in neural populations limits working memory.

    Science.gov (United States)

    Bays, Paul M

    2015-08-01

    This opinion article argues that noise (randomness) in neural activity is the limiting factor in visual working memory (WM), determining how accurately we can maintain stable internal representations of external stimuli. Sharing of a fixed amount of neural activity between items in memory explains why WM can be successfully described as a continuous resource. This contrasts with the popular conception of WM as comprising a limited number of memory slots, each holding a representation of one stimulus - I argue that this view is challenged by computational theory and the latest neurophysiological evidence. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Informational lesions: optical perturbation of spike timing and neural synchrony via microbial opsin gene fusions

    Directory of Open Access Journals (Sweden)

    Xue Han

    2009-08-01

    Full Text Available Synchronous neural activity occurs throughout the brain in association with normal and pathological brain functions. Despite theoretical work exploring how such neural coordination might facilitate neural computation and be corrupted in disease states, it has proven difficult to test experimentally the causal role of synchrony in such phenomena. Attempts to manipulate neural synchrony often alter other features of neural activity such as firing rate. Here we evaluate a single gene which encodes for the blue-light gated cation channel channelrhodopsin-2 and the yellow-light driven chloride pump halorhodopsin from N. pharaonis, linked by a ‘self-cleaving’ 2A peptide. This fusion enables proportional expression of both opsins, sensitizing neurons to being bi-directionally controlled with blue and yellow light, facilitating proportional optical spike insertion and deletion upon delivery of trains of precisely-timed blue and yellow light pulses. Such approaches may enable more detailed explorations of the causal role of specific features of the neural code.

  13. Spatio-temporal conditional inference and hypothesis tests for neural ensemble spiking precision

    Science.gov (United States)

    Harrison, Matthew T.; Amarasingham, Asohan; Truccolo, Wilson

    2014-01-01

    The collective dynamics of neural ensembles create complex spike patterns with many spatial and temporal scales. Understanding the statistical structure of these patterns can help resolve fundamental questions about neural computation and neural dynamics. Spatio-temporal conditional inference (STCI) is introduced here as a semiparametric statistical framework for investigating the nature of precise spiking patterns from collections of neurons that is robust to arbitrarily complex and nonstationary coarse spiking dynamics. The main idea is to focus statistical modeling and inference, not on the full distribution of the data, but rather on families of conditional distributions of precise spiking given different types of coarse spiking. The framework is then used to develop families of hypothesis tests for probing the spatio-temporal precision of spiking patterns. Relationships among different conditional distributions are used to improve multiple hypothesis testing adjustments and to design novel Monte Carlo spike resampling algorithms. Of special note are algorithms that can locally jitter spike times while still preserving the instantaneous peri-stimulus time histogram (PSTH) or the instantaneous total spike count from a group of recorded neurons. The framework can also be used to test whether first-order maximum entropy models with possibly random and time-varying parameters can account for observed patterns of spiking. STCI provides a detailed example of the generic principle of conditional inference, which may be applicable in other areas of neurostatistical analysis. PMID:25380339
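
    A sketch of the simplest member of this family of resampling schemes (fixed-window interval jitter; the framework in the paper covers richer conditioning, including PSTH- and count-preserving variants): each spike is redrawn uniformly within its own window, preserving every neuron's coarse, window-resolution rate profile while destroying finer timing.

      import numpy as np

      def interval_jitter(spike_times, window, rng=None):
          # redraw each spike uniformly within the jitter window that contains it
          if rng is None:
              rng = np.random.default_rng()
          bins = np.floor(np.asarray(spike_times, dtype=float) / window)
          return np.sort(bins * window + rng.uniform(0.0, window, size=bins.size))

      surrogate = interval_jitter([0.012, 0.013, 0.250, 0.251], window=0.025)

    Repeating the resampling many times yields a surrogate distribution against which the observed fine-timescale statistic can be compared.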

  14. A Low Noise Amplifier for Neural Spike Recording Interfaces

    Directory of Open Access Journals (Sweden)

    Jesus Ruiz-Amaya

    2015-09-01

    Full Text Available This paper presents a Low Noise Amplifier (LNA for neural spike recording applications. The proposed topology, based on a capacitive feedback network using a two-stage OTA, efficiently solves the triple trade-off between power, area and noise. Additionally, this work introduces a novel transistor-level synthesis methodology for LNAs tailored for the minimization of their noise efficiency factor under area and noise constraints. The proposed LNA has been implemented in a 130 nm CMOS technology and occupies 0.053 mm-sq. Experimental results show that the LNA offers a noise efficiency factor of 2.16 and an input referred noise of 3.8 μVrms for 1.2 V power supply. It provides a gain of 46 dB over a nominal bandwidth of 192 Hz–7.4 kHz and consumes 1.92 μW. The performance of the proposed LNA has been validated through in vivo experiments with animal models.

  15. A Low Noise Amplifier for Neural Spike Recording Interfaces.

    Science.gov (United States)

    Ruiz-Amaya, Jesus; Rodriguez-Perez, Alberto; Delgado-Restituto, Manuel

    2015-09-30

    This paper presents a Low Noise Amplifier (LNA) for neural spike recording applications. The proposed topology, based on a capacitive feedback network using a two-stage OTA, efficiently solves the triple trade-off between power, area and noise. Additionally, this work introduces a novel transistor-level synthesis methodology for LNAs tailored for the minimization of their noise efficiency factor under area and noise constraints. The proposed LNA has been implemented in a 130 nm CMOS technology and occupies 0.053 mm-sq. Experimental results show that the LNA offers a noise efficiency factor of 2.16 and an input referred noise of 3.8 μVrms for 1.2 V power supply. It provides a gain of 46 dB over a nominal bandwidth of 192 Hz-7.4 kHz and consumes 1.92 μW. The performance of the proposed LNA has been validated through in vivo experiments with animal models.

  16. Macroscopic phase-resetting curves for spiking neural networks

    Science.gov (United States)

    Dumont, Grégory; Ermentrout, G. Bard; Gutkin, Boris

    2017-10-01

    The study of brain rhythms is an open-ended, and challenging, subject of interest in neuroscience. One of the best tools for the understanding of oscillations at the single neuron level is the phase-resetting curve (PRC). Synchronization in networks of neurons, effects of noise on the rhythms, effects of transient stimuli on the ongoing rhythmic activity, and many other features can be understood by the PRC. However, most macroscopic brain rhythms are generated by large populations of neurons, and so far it has been unclear how the PRC formulation can be extended to these more common rhythms. In this paper, we describe a framework to determine a macroscopic PRC (mPRC) for a network of spiking excitatory and inhibitory neurons that generate a macroscopic rhythm. We take advantage of a thermodynamic approach combined with a reduction method to simplify the network description to a small number of ordinary differential equations. From this simplified but exact reduction, we can compute the mPRC via the standard adjoint method. Our theoretical findings are illustrated with and supported by numerical simulations of the full spiking network. Notably our mPRC framework allows us to predict the difference between effects of transient inputs to the excitatory versus the inhibitory neurons in the network.

  17. Learning Pitch with STDP: A Computational Model of Place and Temporal Pitch Perception Using Spiking Neural Networks.

    Directory of Open Access Journals (Sweden)

    Nafise Erfanian Saeedi

    2016-04-01

    Full Text Available Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons' action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.

  18. Spiking and LFP activity in PRR during symbolically instructed reaches.

    Science.gov (United States)

    Hwang, Eun Jung; Andersen, Richard A

    2012-02-01

    The spiking activity in the parietal reach region (PRR) represents the spatial goal of an impending reach when the reach is directed toward or away from a visual object. The local field potentials (LFPs) in this region also represent the reach goal when the reach is directed to a visual object. Thus PRR is a candidate area for reading out a patient's intended reach goals for neural prosthetic applications. For natural behaviors, reach goals are not always based on the location of a visual object, e.g., playing the piano following sheet music or moving following verbal directions. So far it has not been directly tested whether and how PRR represents reach goals in such cognitive, nonlocational conditions, and knowing the encoding properties in various task conditions would help in designing a reach goal decoder for prosthetic applications. To address this issue, we examined the macaque PRR under two reach conditions: reach goal determined by the stimulus location (direct) or shape (symbolic). For the same goal, the spiking activity near reach onset was indistinguishable between the two tasks, and thus a reach goal decoder trained with spiking activity in one task performed perfectly in the other. In contrast, the LFP activity at 20-40 Hz showed small but significantly enhanced reach goal tuning in the symbolic task, but its spatial preference remained the same. Consequently, a decoder trained with LFP activity performed worse in the other task than in the same task. These results suggest that LFP decoders in PRR should take into account the task context (e.g., locational vs. nonlocational) to be accurate, while spike decoders can robustly provide reach goal information regardless of the task context in various prosthetic applications.
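
    A schematic version of the cross-task decoding test (synthetic data and a generic linear decoder, not the study's recordings or decoder): a classifier is trained on spike counts from one task and evaluated on the other; the unit count, tuning curves and Poisson firing are all assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n_trials, n_units, n_goals = 200, 32, 4
      goals = rng.integers(0, n_goals, n_trials)
      tuning = rng.uniform(2.0, 10.0, size=(n_goals, n_units))   # hypothetical goal tuning per unit
      counts_direct = rng.poisson(tuning[goals])                  # spike counts in the "direct" task
      counts_symbolic = rng.poisson(tuning[goals])                # spike counts in the "symbolic" task

      decoder = LogisticRegression(max_iter=1000).fit(counts_direct, goals)
      print("cross-task accuracy:", decoder.score(counts_symbolic, goals))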

  19. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    National Research Council Canada - National Science Library

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs...

  20. An FPGA Implementation of a Polychronous Spiking Neural Network with Delay Adaptation

    Science.gov (United States)

    Wang, Runchun; Cohen, Gregory; Stiefel, Klaus M.; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, André

    2013-01-01

    We present an FPGA implementation of a re-configurable, polychronous spiking neural network with a large capacity for spatial-temporal patterns. The proposed neural network generates delay paths de novo, so that only connections that actually appear in the training patterns will be created. This allows the proposed network to use all the axons (variables) to store information. Spike Timing Dependent Delay Plasticity is used to fine-tune and add dynamics to the network. We use a time multiplexing approach allowing us to achieve 4096 (4k) neurons and up to 1.15 million programmable delay axons on a Virtex 6 FPGA. Test results show that the proposed neural network is capable of successfully recalling more than 95% of all spikes for 96% of the stored patterns. The tests also show that the neural network is robust to noise from random input spikes. PMID:23408739

  1. Effect of Heterogeneity on Decorrelation Mechanisms in Spiking Neural Networks: A Neuromorphic-Hardware Study

    Directory of Open Access Journals (Sweden)

    Thomas Pfeil

    2016-05-01

    Full Text Available High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad
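
    As an illustrative aside, the quantity whose suppression is studied above, pairwise correlations caused by shared input, can be measured as sketched below: spike trains are binned into counts and Pearson correlations are computed across all pairs. The shared-input model, bin size, and statistics are synthetic stand-ins, not the Spikey hardware analysis.

        # Illustrative measurement of pairwise spike-count correlations, the
        # quantity whose suppression by inhibitory feedback is studied above.
        # Spike trains here are synthetic; bin size and rates are arbitrary.
        import numpy as np

        rng = np.random.default_rng(2)
        n_neurons, duration, bin_size = 20, 10.0, 0.05          # seconds
        n_bins = int(duration / bin_size)

        # Shared-input model: common fluctuating drive plus private noise.
        common = rng.normal(0, 1, n_bins)
        rates = np.clip(2 + 1.5 * common + rng.normal(0, 1, (n_neurons, n_bins)), 0.1, None)
        counts = rng.poisson(rates)

        corr = np.corrcoef(counts)
        upper = corr[np.triu_indices(n_neurons, k=1)]
        print("mean pairwise spike-count correlation:", upper.mean().round(3))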

  2. Stochastic spike synchronization in a small-world neural network with spike-timing-dependent plasticity.

    Science.gov (United States)

    Kim, Sang-Yoon; Lim, Woochang

    2018-01-01

    We consider the Watts-Strogatz small-world network (SWN) consisting of subthreshold neurons that exhibit noise-induced spiking. This neuronal network has adaptive dynamic synaptic strengths governed by spike-timing-dependent plasticity (STDP). In previous works without STDP, stochastic spike synchronization (SSS) between the noise-induced spikings of subthreshold neurons was found to occur in a range of intermediate noise intensities. Here, we investigate the effect of additive STDP on the SSS by varying the noise intensity. A "Matthew" effect in synaptic plasticity is found to occur due to a positive feedback process. As a result, good synchronization gets better via long-term potentiation of synaptic strengths, while bad synchronization gets worse via long-term depression. The emergence of long-term potentiation and long-term depression of synaptic strengths is investigated in detail via microscopic studies based on the pair-correlations between the pre- and post-synaptic IISRs (instantaneous individual spike rates) as well as the distributions of time delays between the pre- and post-synaptic spike times. Furthermore, the effects of multiplicative STDP (which depends on states) on the SSS are studied and discussed in comparison with the case of additive STDP (independent of states). These effects of STDP on the SSS in the SWN are also compared with those in the regular lattice and the random graph. Copyright © 2017 Elsevier Ltd. All rights reserved.
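
    As an illustrative aside, the distinction drawn above between additive (state-independent) and multiplicative (state-dependent) STDP can be sketched as two update rules for a single spike-time difference. The amplitudes and time constant below are illustrative, not the paper's parameters.

        # Sketch of additive (weight-independent) vs. multiplicative
        # (weight-dependent) STDP updates for a spike-time difference
        # dt = t_post - t_pre. Constants are illustrative.
        import numpy as np

        A_plus, A_minus, tau = 0.01, 0.012, 20.0   # ms
        w_max = 1.0

        def stdp_additive(w, dt):
            if dt > 0:   # pre before post -> potentiation
                return min(w + A_plus * np.exp(-dt / tau), w_max)
            else:        # post before pre -> depression
                return max(w - A_minus * np.exp(dt / tau), 0.0)

        def stdp_multiplicative(w, dt):
            if dt > 0:   # update scaled by the distance to the bounds
                return w + A_plus * (w_max - w) * np.exp(-dt / tau)
            else:
                return w - A_minus * w * np.exp(dt / tau)

        for dt in (+5.0, -5.0):
            print(dt, round(stdp_additive(0.5, dt), 4), round(stdp_multiplicative(0.5, dt), 4))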

  3. Spotting neural spike patterns using an adversary background model.

    Science.gov (United States)

    Gat, I; Tishby, N

    2001-12-01

    The detection of a specific stochastic pattern embedded in an unknown background noise is a difficult pattern recognition problem, encountered in many applications such as word spotting in speech. A similar problem emerges when trying to detect a multineural spike pattern in a single electrical recording, embedded in the complex cortical activity of a behaving animal. Solving this problem is crucial for the identification of neuronal code words with specific meaning. The technical difficulty of this detection is due to the lack of a good statistical model for the background activity, which rapidly changes with the recording conditions and activity of the animal. This work introduces the use of an adversary background model. This model assumes that the background "knows" the pattern sought, up to its first-order statistics, and this "knowledge" creates a background composed of all the permutations of our pattern. We show that this background model is tightly connected to the type-based information-theoretic approach. Furthermore, we show that computing the likelihood ratio amounts to decomposing the log-likelihood distribution according to the types of the empirical counts. We demonstrate the application of this method for the detection of reward patterns in the basal ganglia of behaving monkeys, yielding some unexpected biological results.

  4. Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data

    Science.gov (United States)

    Deng, Xinyi

    2016-08-01

    population spiking data. Lastly, we proposed a general three-step paradigm that allows us to relate behavioral outcomes of various tasks to simultaneously recorded neural activity across multiple brain areas, which is a step towards closed-loop therapies for psychological diseases using real-time neural stimulation. These methods are suitable for real-time implementation for content-based feedback experiments.

  5. An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data

    Directory of Open Access Journals (Sweden)

    Evangelos Stromatias

    2017-06-01

    Full Text Available This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.

  6. An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data.

    Science.gov (United States)

    Stromatias, Evangelos; Soto, Miguel; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabé

    2017-01-01

    This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.
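
    As an illustrative aside, the core idea above, accumulating spike-count histograms from the output events of prior SNN layers and training a classifier on them with stochastic gradient descent, can be sketched as follows. The event statistics, feature layer, and classifier settings below are synthetic assumptions (using scikit-learn's SGDClassifier), not the authors' code or data.

        # Sketch of the histogram-plus-SGD idea: spike events from a (here
        # synthetic) SNN feature layer are binned into per-sample count
        # histograms, which are then classified with stochastic gradient descent.
        import numpy as np
        from sklearn.linear_model import SGDClassifier

        rng = np.random.default_rng(3)
        n_features, n_classes, n_samples = 64, 4, 400

        # Each class drives a different subset of feature neurons more strongly.
        class_rates = 1 + 4 * (rng.random((n_classes, n_features)) > 0.7)

        def sample(label):
            # Poisson spike counts per feature neuron play the role of the histogram.
            return rng.poisson(class_rates[label])

        y = rng.integers(0, n_classes, n_samples)
        X = np.array([sample(lbl) for lbl in y], dtype=float)

        clf = SGDClassifier(max_iter=1000, tol=1e-3).fit(X[:300], y[:300])
        print("held-out accuracy:", clf.score(X[300:], y[300:]))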

  7. On the Computational Power of Spiking Neural P Systems with Self-Organization

    Science.gov (United States)

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-06-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing computable natural numbers. Moreover, with 87 neurons the system can compute any Turing computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun.

  8. A stochastic-field description of finite-size spiking neural networks.

    Science.gov (United States)

    Dumont, Grégory; Payeur, Alexandre; Longtin, André

    2017-08-01

    Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity, i.e., the density of active neurons per unit time, is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics.

  9. Synthesis of neural networks for spatio-temporal spike pattern recognition and processing

    Directory of Open Access Journals (Sweden)

    Jonathan C Tapson

    2013-08-01

    Full Text Available The advent of large scale neural computational platforms has highlighted the lack of algorithms for synthesis of neural structures to perform predefined cognitive tasks. The Neural Engineering Framework offers one such synthesis, but it is most effective for a spike rate representation of neural information, and it requires a large number of neurons to implement simple functions. We describe a neural network synthesis method that generates synaptic connectivity for neurons which process time-encoded neural signals, and which makes very sparse use of neurons. The method allows the user to specify arbitrary neuronal characteristics, such as axonal and dendritic delays and synaptic transfer functions, and then solves for the optimal input-output relationship using computed dendritic weights. The method may be used for batch or online learning and has an extremely fast optimization process. We demonstrate its use in generating a network to recognize speech which is sparsely encoded as spike times.
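
    As an illustrative aside, the general approach described above, solving directly for dendritic weights that reproduce a target output from time-encoded inputs, can be sketched as a least-squares problem: input spike trains are filtered through fixed synaptic kernels at chosen delays, and the weights are obtained in one step. The kernels, delays, and target below are illustrative assumptions, not the paper's exact method.

        # Sketch of synthesizing dendritic weights by least squares: input spike
        # trains are filtered through fixed synaptic kernels at chosen delays,
        # and the weights that best reproduce a target output are computed directly.
        import numpy as np

        rng = np.random.default_rng(4)
        dt, T = 1e-3, 1.0
        t = np.arange(0, T, dt)

        def psp_kernel(tau=0.01):
            k = np.arange(0, 10 * tau, dt)
            return (k / tau) * np.exp(1 - k / tau)       # alpha-shaped PSP

        def filtered_spikes(spike_times, delay):
            s = np.zeros_like(t)
            idx = ((np.asarray(spike_times) + delay) / dt).astype(int)
            s[idx[idx < len(t)]] = 1.0
            return np.convolve(s, psp_kernel())[: len(t)]

        # Basis: a few input spike trains, each seen through two dendritic delays.
        inputs = [np.sort(rng.uniform(0, T, 20)) for _ in range(5)]
        basis = np.column_stack([filtered_spikes(st, d) for st in inputs for d in (0.0, 0.02)])

        target = np.sin(2 * np.pi * 3 * t)               # desired analog output
        weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
        print("reconstruction error:", np.linalg.norm(basis @ weights - target).round(3))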

  10. Unsupervised discrimination of patterns in spiking neural networks with excitatory and inhibitory synaptic plasticity.

    Science.gov (United States)

    Srinivasa, Narayan; Cho, Youngkwan

    2014-01-01

    A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP enables this learning to be stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be read out by the sink layer neurons. This activity is used for adaptation of synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to compute if the network can discriminate between old patterns already presented in an initial training session. The DI is also used to compute if the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns, both old and new. This result holds as long as inhibitory synapses employ STDP to continuously enable current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity.

  11. Unsupervised Discrimination of Patterns in Spiking Neural Networks with Excitatory and Inhibitory Synaptic Plasticity

    Directory of Open Access Journals (Sweden)

    Narayan eSrinivasa

    2014-12-01

    Full Text Available A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP enables this learning to be stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be read out by the sink layer neurons. This activity is used for adaptation of synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to compute if the network can discriminate between old patterns already presented in an initial training session. The DI is also used to compute if the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns, both old and new. This result holds as long as inhibitory synapses employ STDP to continuously enable current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity.

  12. A Hardware-Efficient Scalable Spike Sorting Neural Signal Processor Module for Implantable High-Channel-Count Brain Machine Interfaces.

    Science.gov (United States)

    Yang, Yuning; Boling, Sam; Mason, Andrew J

    2017-08-01

    Next-generation brain machine interfaces demand a high-channel-count neural recording system to wirelessly monitor the activities of thousands of neurons. A hardware-efficient neural signal processor (NSP) is greatly desirable to ease the data bandwidth bottleneck for a fully implantable wireless neural recording system. This paper demonstrates a complete multichannel spike sorting NSP module that incorporates all of the necessary spike detector, feature extractor, and spike classifier blocks. To meet high-channel-count and implantability demands, each block was designed to be highly hardware efficient and scalable while sharing resources efficiently among multiple channels. To process multiple channels in parallel, scalability analysis was performed, and the utilization of each block was optimized according to its input data statistics and the power, area and/or speed of each block. Based on this analysis, a prototype 32-channel spike sorting NSP scalable module was designed and tested on an FPGA using synthesized datasets over a wide range of signal to noise ratios. The design was mapped to 130 nm CMOS, achieving 0.75 μW power consumption and 0.023 mm² area per channel based on post-synthesis simulation results, which permits scaling of the digital processing to 690 channels on a 4×4 mm² electrode array.
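
    As an illustrative aside, the three blocks named above (spike detector, feature extractor, spike classifier) correspond to a standard software spike-sorting chain, sketched below with an amplitude threshold, PCA features, and k-means clustering on a synthetic trace. This uses numpy and scikit-learn and is not the paper's hardware-efficient implementation; spike shapes, rates, and the threshold rule are assumptions.

        # Software sketch of the detector / feature-extractor / classifier chain
        # (threshold detection, PCA features, k-means), applied to a synthetic trace.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(5)
        fs, dur = 30000, 2.0
        trace = rng.normal(0, 1, int(fs * dur))

        # Inject two hypothetical spike shapes at random times.
        shape1 = -8 * np.exp(-np.arange(30) / 3.0)
        shape2 = -6 * np.exp(-np.arange(30) / 2.0)
        for shape in (shape1, shape2):
            for t0 in rng.integers(0, len(trace) - 60, 120):
                trace[t0:t0 + 30] += shape

        # 1) Detection: negative threshold crossings.
        thr = -4 * np.std(trace)
        crossings = np.flatnonzero((trace[1:] < thr) & (trace[:-1] >= thr))
        snippets = np.array([trace[c - 10:c + 20] for c in crossings
                             if 10 <= c < len(trace) - 20])

        # 2) Feature extraction and 3) classification.
        feats = PCA(n_components=3).fit_transform(snippets)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
        print(len(snippets), "snippets sorted into clusters of sizes", np.bincount(labels))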

  13. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule.

    Science.gov (United States)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-11-01

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure for the burst-based self-organized neural network (BSON) is much shorter than for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network that self-organizes from bursting dynamics has high efficiency in information processing.
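
    As an illustrative aside, the two small-world measures quoted above (clustering coefficient, average shortest path length) can be computed as sketched below with networkx. The Watts-Strogatz graph here is only an illustrative stand-in for the clustered network obtained after STDP-based self-organization.

        # Sketch of the two small-world measures quoted above, computed with
        # networkx on an illustrative graph standing in for the self-organized network.
        import networkx as nx

        G = nx.connected_watts_strogatz_graph(n=200, k=8, p=0.1, seed=0)

        print("clustering coefficient  :", round(nx.average_clustering(G), 3))
        print("avg shortest path length:", round(nx.average_shortest_path_length(G), 3))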

  14. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin, E-mail: xmli@cqu.edu.cn [Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044 (China); College of Automation, Chongqing University, Chongqing 400044 (China)

    2015-11-15

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure for the burst-based self-organized neural network (BSON) is much shorter than for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network that self-organizes from bursting dynamics has high efficiency in information processing.

  15. Efficient sequential Bayesian inference method for real-time detection and sorting of overlapped neural spikes.

    Science.gov (United States)

    Haga, Tatsuya; Fukayama, Osamu; Takayama, Yuzo; Hoshino, Takayuki; Mabuchi, Kunihiko

    2013-09-30

    Overlapping of extracellularly recorded neural spike waveforms causes the original spike waveforms to become hidden and merged, confounding the real-time detection and sorting of these spikes. Methods proposed for solving this problem include using a multi-trode or placing a restriction on the complexity of overlaps. In this paper, we propose a rapid sequential method for the robust detection and sorting of arbitrarily overlapped spikes recorded with arbitrary types of electrodes. In our method, the probabilities of possible spike trains, including those that are overlapping, are evaluated by sequential Bayesian inference based on probabilistic models of spike-train generation and extracellular voltage recording. To reduce the high computational cost inherent in an exhaustive evaluation, candidates with low probabilities are considered as impossible candidates and are abolished at each sampling time to limit the number of candidates in the next evaluation. In addition, the data from a few subsequent sampling times are considered and used to calculate the "look-ahead probability", resulting in improved calculation efficiency due to a more rapid elimination of candidates. Together, these measures sufficiently reduce the computational time to enable real-time calculation without impairing performance. We assessed the performance of our method using simulated neural signals and actual neural signals recorded in primary cortical neurons cultured on a multi-electrode array. Our results demonstrated that our computational method could be applied in real-time with a delay of less than 10 ms. The estimation accuracy was higher than that of a conventional spike sorting method, particularly for signals with multiple overlapping spikes. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Spiking neural circuits with dendritic stimulus processors : encoding, decoding, and identification in reproducing kernel Hilbert spaces.

    Science.gov (United States)

    Lazar, Aurel A; Slutskiy, Yevgeniy B

    2015-02-01

    We present a multi-input multi-output neural circuit architecture for nonlinear processing and encoding of stimuli in the spike domain. In this architecture a bank of dendritic stimulus processors implements nonlinear transformations of multiple temporal or spatio-temporal signals such as spike trains or auditory and visual stimuli in the analog domain. Dendritic stimulus processors may act on both individual stimuli and on groups of stimuli, thereby executing complex computations that arise as a result of interactions between concurrently received signals. The results of the analog-domain computations are then encoded into a multi-dimensional spike train by a population of spiking neurons modeled as nonlinear dynamical systems. We investigate general conditions under which such circuits faithfully represent stimuli and demonstrate algorithms for (i) stimulus recovery, or decoding, and (ii) identification of dendritic stimulus processors from the observed spikes. Taken together, our results demonstrate a fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. This duality result enabled us to derive lower bounds on the number of experiments to be performed and the total number of spikes that need to be recorded for identifying a neural circuit.

  17. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    Science.gov (United States)

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  18. Structured chaos shapes spike-response noise entropy in balanced neural networks

    Directory of Open Access Journals (Sweden)

    Guillaume eLajoie

    2014-10-01

    Full Text Available Large networks of sparsely coupled, excitatory and inhibitory cells occur throughout the brain. For many models of these networks, a striking feature is that their dynamics are chaotic and thus, are sensitive to small perturbations. How does this chaos manifest in the neural code? Specifically, how variable are the spike patterns that such a network produces in response to an input signal? To answer this, we derive a bound for a general measure of variability: spike-train entropy. This leads to important insights on the variability of multi-cell spike pattern distributions in large recurrent networks of spiking neurons responding to fluctuating inputs. The analysis is based on results from random dynamical systems theory and is complemented by detailed numerical simulations. We find that the spike pattern entropy is an order of magnitude lower than what would be extrapolated from single cells. This holds despite the fact that network coupling becomes vanishingly sparse as network size grows, a phenomenon that depends on "extensive chaos", as previously discovered for balanced networks without stimulus drive. Moreover, we show how spike pattern entropy is controlled by temporal features of the inputs. Our findings provide insight into how neural networks may encode stimuli in the presence of inherently chaotic dynamics.
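
    As an illustrative aside, the quantity bounded above, spike-pattern (word) entropy, can be estimated empirically as sketched below: responses are binarized into words across repeated trials and a plug-in entropy is computed, alongside the independent-cell extrapolation it is compared against. The data here are synthetic and the paper's analytical bound is not reproduced.

        # Sketch of estimating spike-pattern (word) entropy from repeated trials.
        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(6)
        n_cells, n_trials = 8, 5000
        p_fire = rng.uniform(0.05, 0.3, n_cells)          # per-cell firing probability

        words = (rng.random((n_trials, n_cells)) < p_fire).astype(int)
        counts = Counter(map(tuple, words))
        p = np.array(list(counts.values())) / n_trials
        entropy_bits = -(p * np.log2(p)).sum()

        # Extrapolation from single cells, assuming independence.
        indep_bits = -(p_fire * np.log2(p_fire) + (1 - p_fire) * np.log2(1 - p_fire)).sum()
        print("word entropy: %.2f bits  (independent-cell estimate: %.2f bits)"
              % (entropy_bits, indep_bits))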

  19. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning.

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Kwan Chan, Pak; Tin, Chung

    2018-02-01

    Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  20. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung

    2018-02-01

    Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  1. Exact computation of the Maximum Entropy Potential of spiking neural networks models

    CERN Document Server

    Cofre, Rodrigo

    2014-01-01

    Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models.
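
    As an illustrative aside, the Maximum Entropy characterization mentioned above is, in its pairwise form, an Ising-like model fitted so that its means and pairwise correlations match the data. The sketch below fits such a model by moment matching for a handful of neurons, enumerating all states exactly; the data are synthetic and the paper's analytical mapping to neuro-mimetic models is not reproduced.

        # Sketch of fitting a pairwise Maximum Entropy (Ising-like) model to binary
        # spike patterns by matching means and pairwise correlations. Exact state
        # enumeration is only feasible for a few neurons.
        import itertools
        import numpy as np

        rng = np.random.default_rng(7)
        n = 5
        data = (rng.random((4000, n)) < 0.25).astype(float)
        data_mean = data.mean(0)
        data_corr = (data.T @ data) / len(data)

        states = np.array(list(itertools.product([0.0, 1.0], repeat=n)))
        h = np.zeros(n)
        J = np.zeros((n, n))

        for _ in range(2000):
            energy = states @ h + np.einsum("si,ij,sj->s", states, J, states)
            p = np.exp(energy - energy.max())
            p /= p.sum()
            model_mean = p @ states
            model_corr = np.einsum("s,si,sj->ij", p, states, states)
            h += 0.1 * (data_mean - model_mean)                 # match firing rates
            J += 0.1 * np.triu(data_corr - model_corr, k=1)     # match pairwise correlations

        print("mean-rate mismatch    :", np.abs(data_mean - model_mean).max().round(4))
        print("pairwise-corr mismatch:", np.abs(np.triu(data_corr - model_corr, 1)).max().round(4))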

  2. Hybrid Spintronic-CMOS Spiking Neural Network with On-Chip Learning: Devices, Circuits, and Systems

    Science.gov (United States)

    Sengupta, Abhronil; Banerjee, Aparajita; Roy, Kaushik

    2016-12-01

    Over the past decade, spiking neural networks (SNNs) have emerged as one of the popular architectures to emulate the brain. In SNNs, information is temporally encoded and communication between neurons is accomplished by means of spikes. In such networks, spike-timing-dependent plasticity mechanisms require the online programing of synapses based on the temporal information of spikes transmitted by spiking neurons. In this work, we propose a spintronic synapse with decoupled spike-transmission and programing-current paths. The spintronic synapse consists of a ferromagnet-heavy-metal heterostructure where the programing current through the heavy metal generates spin-orbit torque to modulate the device conductance. Low programing energy and fast programing times demonstrate the efficacy of the proposed device as a nanoelectronic synapse. We perform a simulation study based on an experimentally benchmarked device-simulation framework to demonstrate the interfacing of such spintronic synapses with CMOS neurons and learning circuits operating in the transistor subthreshold region to form a network of spiking neurons that can be utilized for pattern-recognition problems.

  3. Changes in complex spike activity during classical conditioning

    Directory of Open Access Journals (Sweden)

    Anders eRasmussen

    2014-08-01

    Full Text Available The cerebellar cortex is necessary for adaptively timed conditioned responses (CRs) in eyeblink conditioning. During conditioning, Purkinje cells acquire pause responses or Purkinje cell CRs to the conditioned stimuli (CS), resulting in disinhibition of the cerebellar nuclei (CN), allowing them to activate motor nuclei that control eyeblinks. This disinhibition also causes inhibition of the inferior olive (IO), via the nucleo-olivary pathway (N-O). Activation of the IO, which relays the unconditional stimulus (US) to the cortex, elicits characteristic complex spikes in Purkinje cells. Although Purkinje cell activity, as well as stimulation of the CN, is known to influence IO activity, much remains to be learned about the way that learned changes in simple spike firing affect the IO. In the present study, we analyzed changes in simple and complex spike firing, in extracellular Purkinje cell records, from the C3 zone, in decerebrate ferrets undergoing training in a conditioning paradigm. In agreement with the N-O feedback hypothesis, acquisition resulted in a gradual decrease in complex spike activity during the conditioned stimulus, with a delay that is consistent with the long N-O latency. Also supporting the feedback hypothesis, training with a short interstimulus interval (ISI), which does not lead to acquisition of a Purkinje cell CR, did not cause a suppression of complex spike activity. In contrast, observations that extinction did not lead to a recovery in complex spike activity and the irregular patterns of simple and complex spike activity after the conditioned stimulus are less conclusive.

  4. Refinement and Pattern Formation in Neural Circuits by the Interaction of Traveling Waves with Spike-Timing Dependent Plasticity

    Science.gov (United States)

    Bennett, James E. M.; Bair, Wyeth

    2015-01-01

    Traveling waves in the developing brain are a prominent source of highly correlated spiking activity that may instruct the refinement of neural circuits. A candidate mechanism for mediating such refinement is spike-timing dependent plasticity (STDP), which translates correlated activity patterns into changes in synaptic strength. To assess the potential of these phenomena to build useful structure in developing neural circuits, we examined the interaction of wave activity with STDP rules in simple, biologically plausible models of spiking neurons. We derive an expression for the synaptic strength dynamics showing that, by mapping the time dependence of STDP into spatial interactions, traveling waves can build periodic synaptic connectivity patterns into feedforward circuits with a broad class of experimentally observed STDP rules. The spatial scale of the connectivity patterns increases with wave speed and STDP time constants. We verify these results with simulations and demonstrate their robustness to likely sources of noise. We show how this pattern formation ability, which is analogous to solutions of reaction-diffusion systems that have been widely applied to biological pattern formation, can be harnessed to instruct the refinement of postsynaptic receptive fields. Our results hold for rich, complex wave patterns in two dimensions and over several orders of magnitude in wave speeds and STDP time constants, and they provide predictions that can be tested under existing experimental paradigms. Our model generalizes across brain areas and STDP rules, allowing broad application to the ubiquitous occurrence of traveling waves and to wave-like activity patterns induced by moving stimuli. PMID:26308406

  5. Closed-loop optical neural stimulation based on a 32-channel low-noise recording system with online spike sorting

    Science.gov (United States)

    Nguyen, T. K. T.; Navratilova, Z.; Cabral, H.; Wang, L.; Gielen, G.; Battaglia, F. P.; Bartic, C.

    2014-08-01

    Objective. Closed-loop operation of neuro-electronic systems is desirable for both scientific and clinical (neuroprosthesis) applications. Integrating optical stimulation with recording capability further enhances the selectivity of neural stimulation. We have developed a system enabling the local delivery of optical stimuli and the simultaneous electrical measuring of the neural activities in a closed-loop approach. Approach. The signal analysis is performed online through the implementation of a template matching algorithm. The system performance is demonstrated with the recorded data and in awake rats. Main results. Specifically, the neural activities are simultaneously recorded, detected, classified online (through spike sorting) from 32 channels, and used to trigger a light emitting diode light source using generated TTL signals. Significance. A total processing time of 8 ms is achieved, suitable for optogenetic studies of brain mechanisms online.
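
    As an illustrative aside, the online template-matching step described above can be sketched as follows: incoming windows are compared against stored unit templates, and a match below an error threshold emits the label that would gate the light-source trigger. The templates, threshold, refractory skip, and synthetic data below are assumptions, not the 32-channel system's implementation.

        # Sketch of online template matching for spike classification and
        # closed-loop triggering. Templates, threshold and data are illustrative.
        import numpy as np

        rng = np.random.default_rng(8)
        win = 24
        templates = {
            "unit_A": -5.0 * np.exp(-np.arange(win) / 5.0),                                 # monophasic
            "unit_B": 4.0 * np.sin(np.arange(win) / 3.0) * np.exp(-np.arange(win) / 6.0),   # multiphasic
        }

        signal = rng.normal(0, 0.25, 5000)
        signal[1000:1000 + win] += templates["unit_A"]
        signal[3000:3000 + win] += templates["unit_B"]

        threshold = 10.0      # maximum tolerated squared error per window (illustrative)
        t = 0
        while t < len(signal) - win:
            window = signal[t:t + win]
            matched = None
            for name, tpl in templates.items():
                if np.sum((window - tpl) ** 2) < threshold:
                    matched = name
                    break
            if matched:
                print("t=%d matched %s -> would raise the stimulation trigger" % (t, matched))
                t += win      # simple refractory skip after a detection
            else:
                t += 1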

  6. FPGA Implementation of Self-Organized Spiking Neural Network Controller for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Fangzheng Xue

    2014-06-01

    Full Text Available A spiking neural network, a computational model which uses spikes to process information, is a good candidate for a mobile robot controller. In this paper, we present a novel mechanism for controlling mobile robots based on a self-organized spiking neural network (SOSNN) and introduce a method for FPGA implementation of this SOSNN. The spiking neuron model we use is the Izhikevich model. A key feature of this controller is that it can simulate the process of unconditioned reflex (avoid obstacles using infrared sensor signals) and conditioned reflex (make right choices in multiple T-maze) by spike timing-dependent plasticity (STDP) learning and dopamine-receptor modulation. Experimental results show that the proposed controller is effective and is easy to implement. The FPGA implementation method aims to build up a specific network using generic blocks designed in the MATLAB Simulink environment. The main characteristics of this original solution are: on-chip learning algorithm implementation, high reconfiguration capability, and operation under real time constraints. An extended analysis has been carried out on the hardware resources used to implement the whole SOSNN network, as well as each individual component block.
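
    As an illustrative aside, the Izhikevich neuron model named above can be simulated as sketched below with standard regular-spiking parameters and Euler integration. The robot controller, STDP, and dopamine modulation are not reproduced here; the input current and time step are illustrative choices.

        # Minimal simulation of the Izhikevich neuron model (standard
        # regular-spiking parameters, Euler integration).
        a, b, c, d = 0.02, 0.2, -65.0, 8.0      # regular-spiking parameters
        dt, T_ms, I = 0.25, 500.0, 10.0         # time step (ms), duration, input current

        v, u = -65.0, b * -65.0
        spike_times = []
        for step in range(int(T_ms / dt)):
            v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:                       # spike: record and reset
                spike_times.append(step * dt)
                v, u = c, u + d
        print("firing rate: %.1f Hz" % (1000.0 * len(spike_times) / T_ms))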

  7. An FPGA hardware/software co-design towards evolvable spiking neural networks for robotics application.

    Science.gov (United States)

    Johnston, S P; Prasad, G; Maguire, L; McGinnity, T M

    2010-12-01

    This paper presents an approach that permits the effective hardware realization of a novel Evolvable Spiking Neural Network (ESNN) paradigm on Field Programmable Gate Arrays (FPGAs). The ESNN possesses a hybrid learning algorithm that consists of a Spike Timing Dependent Plasticity (STDP) mechanism fused with a Genetic Algorithm (GA). The design and implementation direction utilizes the latest advancements in FPGA technology to provide a partitioned hardware/software co-design solution. The approach achieves the maximum FPGA flexibility obtainable for the ESNN paradigm. The algorithm was applied as an embedded intelligent system robotic controller to solve an autonomous navigation and obstacle avoidance problem.

  8. Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.

    Science.gov (United States)

    Woodward, Alexander; Froese, Tom; Ikegami, Takashi

    2015-02-01

    The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    Science.gov (United States)

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity and intrinsic heterogeneity has yet to be considered systematically despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneities affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and, furthermore, simpler analytic descriptions based on this dimension reduction method are developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  10. Dynamic Control of Synchronous Activity in Networks of Spiking Neurons.

    Directory of Open Access Journals (Sweden)

    Axel Hutt

    Full Text Available Oscillatory brain activity is believed to play a central role in neural coding. Accumulating evidence shows that features of these oscillations are highly dynamic: power, frequency and phase fluctuate alongside changes in behavior and task demands. The role of this variability and the mechanisms supporting it are, however, poorly understood. We here analyze a network of recurrently connected spiking neurons with time delay displaying stable synchronous dynamics. Using mean-field and stability analyses, we investigate the influence of dynamic inputs on the frequency of firing rate oscillations. We show that afferent noise, mimicking inputs to the neurons, causes smoothing of the system's response function, displacing equilibria and altering the stability of oscillatory states. Our analysis further shows that these noise-induced changes cause a shift of the peak frequency of synchronous oscillations that scales with input intensity, leading the network towards critical states. We lastly discuss the extension of these principles to periodic stimulation, in which externally applied driving signals can trigger analogous phenomena. Our results reveal one possible mechanism involved in shaping oscillatory activity in the brain and associated control principles.

  11. Silicon synaptic transistor for hardware-based spiking neural network and neuromorphic system

    Science.gov (United States)

    Kim, Hyungjin; Hwang, Sungmin; Park, Jungjin; Park, Byung-Gook

    2017-10-01

    Brain-inspired neuromorphic systems have attracted much attention as new computing paradigms for power-efficient computation. Here, we report a silicon synaptic transistor with two electrically independent gates to realize a hardware-based neural network system without any switching components. The spike-timing dependent plasticity characteristics of the synaptic devices are measured and analyzed. With the help of the device model based on the measured data, the pattern recognition capability of the hardware-based spiking neural network systems is demonstrated using the Modified National Institute of Standards and Technology handwritten dataset. By comparing systems with and without an inhibitory synapse part, it is confirmed that the inhibitory synapse part is an essential element in obtaining effective and high pattern classification capability.

  12. [Research on UKF control of epileptic-form spikes in neural mass models].

    Science.gov (United States)

    Liu, Xian; Ma, Baiwang; Ji, June; Li, Xiaoli

    2013-12-01

    Neural mass models are able to produce epileptic electroencephalogram (EEG) signals in different stages of seizures. The models play important roles in studying the mechanisms and control of epileptic seizures. In this study, closed-loop feedback control was used to suppress the epileptic-form spikes in neural mass models. This is expected to provide a theoretical basis for the choice of stimulation site and parameters in clinical treatment. With the influence of measurement noise taken into account, an unscented Kalman filter (UKF) was added to the feedback loop to estimate the system state, and a UKF controller was constructed from the estimated state. In simulations, the control action was imposed on the hyper-excitable population and on all populations, respectively. It was shown that both UKF control schemes suppressed the epileptic-form spikes in the model. However, the control energy needed in the latter scheme was less than that needed in the former one.

  13. Learning to Generate Sequences with Combination of Hebbian and Non-hebbian Plasticity in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Panda, Priyadarshini; Roy, Kaushik

    2017-01-01

    Synaptic plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine standard spike-timing-correlation-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that inclusion of the adaptive decay of synaptic weights with standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme substantially suppresses the chaotic activity in the recurrent model, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations.
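
    As an illustrative aside, one way such a combined rule could look is sketched below: each update applies a standard exponential STDP term and then relaxes the weight toward a baseline through an activity-independent decay. The constants and baseline are illustrative assumptions, not the paper's learning scheme.

        # Sketch of combining Hebbian STDP with a non-Hebbian synaptic decay:
        # a spike-timing-dependent term plus relaxation toward a baseline weight.
        import numpy as np

        A_plus, A_minus, tau_stdp = 0.02, 0.021, 20.0   # ms
        decay_rate, w_baseline = 0.001, 0.1

        def stdp_term(dt_spk):
            # dt_spk = t_post - t_pre
            if dt_spk > 0:
                return A_plus * np.exp(-dt_spk / tau_stdp)
            return -A_minus * np.exp(dt_spk / tau_stdp)

        def update(w, dt_spk):
            w = w + stdp_term(dt_spk)                     # Hebbian, timing-dependent
            w = w + decay_rate * (w_baseline - w)         # non-Hebbian adaptive decay
            return float(np.clip(w, 0.0, 1.0))

        w = 0.5
        for dt_spk in (8.0, -12.0, 3.0):
            w = update(w, dt_spk)
            print("dt=%+.0f ms -> w=%.4f" % (dt_spk, w))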

  14. CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties

    Science.gov (United States)

    2017-03-01

    Approaches for optimization under uncertainties are explored, including a 'scenario generation' circuit to non-parametrically estimate and emulate the statistics of uncertain costs/constraints. The discussed mixed-signal, CMOS-based architecture of a stochastically spiking neural network minimizes the area/power of each cell.

  15. [Robustness analysis of adaptive neural network model based on spike timing-dependent plasticity].

    Science.gov (United States)

    Chen, Yunzhi; Xu, Guizhi; Zhou, Qian; Guo, Miaomiao; Guo, Lei; Wan, Xiaowei

    2015-02-01

    To explore the self-organization robustness of biological neural networks, and thus to provide new ideas and methods for electromagnetic bionic protection, we studied both the information transmission mechanism of the neural network and the spike timing-dependent plasticity (STDP) mechanism, and then investigated the relationship between synaptic plasticity and the adaptive characteristics of biology. A feedforward neural network with the Izhikevich model and the STDP mechanism was then constructed, and the adaptive robustness of the network was analyzed. Simulation results showed that the neural network based on the STDP mechanism had good robustness, and that this characteristic is closely related to the STDP mechanism. Based on this simulation work, cell circuits with neurons and synapses that can simulate the information-processing mechanisms of the biological nervous system will be further built, and electronic circuits with adaptive robustness will then be designed based on these cell circuits.

  16. Robust working memory in an asynchronously spiking neural network realized in neuromorphic VLSI

    Directory of Open Access Journals (Sweden)

    Massimiliano eGiulioni

    2012-02-01

    Full Text Available We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of ‘high’ and ‘low’-firing activity. Depending on the overall excitability, transitions to the ‘high’ state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the ‘high’ state retains a working memory of a stimulus until well after its release. In the latter case, ‘high’ states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated corrupted ‘high’ states comprising neurons of both excitatory populations. Within a basin of attraction, the network dynamics corrects such states and re-establishes the prototypical ‘high’ state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.

  17. A new EC-PC threshold estimation method for in vivo neural spike detection

    Science.gov (United States)

    Yang, Zhi; Liu, Wentai; Keshtkaran, Mohammad Reza; Zhou, Yin; Xu, Jian; Pikov, Victor; Guan, Cuntai; Lian, Yong

    2012-08-01

    This paper models in vivo neural signals and noise for extracellular spike detection. Although the recorded data approximately follow Gaussian distribution, they clearly deviate from white Gaussian noise due to neuronal synchronization and sparse distribution of spike energy. Our study predicts the coexistence of two components embedded in neural data dynamics, one in the exponential form (noise) and the other in the power form (neural spikes). The prediction of the two components has been confirmed in experiments of in vivo sequences recorded from the hippocampus, cortex surface, and spinal cord; both acute and long-term recordings; and sleep and awake states. These two components are further used as references for threshold estimation. Different from the conventional wisdom of setting a threshold at 3×RMS, the estimated threshold exhibits a significant variation. When our algorithm was tested on synthesized sequences with a different signal to noise ratio and on/off firing dynamics, inferred threshold statistics track the benchmarks well. We envision that this work may be applied to a wide range of experiments as a front-end data analysis tool.
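
    As an illustrative aside, the point made above, that the conventional 3×RMS rule is sensitive to how much spike energy contaminates the trace, can be seen in the sketch below, which contrasts 3×RMS with the common robust median-based noise estimate on a synthetic trace. This is not the paper's EC-PC estimator; the spike shapes, rates, and the robust rule used for comparison are illustrative assumptions.

        # Illustrative comparison of two conventional detection thresholds:
        # 3 x RMS of the full trace vs. 3 x a robust noise estimate
        # (median(|x|)/0.6745). Not the paper's EC-PC method.
        import numpy as np

        rng = np.random.default_rng(9)
        trace = rng.normal(0, 1.0, 300000)                         # unit-variance noise
        for t0 in rng.integers(0, len(trace) - 40, 600):           # add large spikes
            trace[t0:t0 + 40] += 8 * np.exp(-np.arange(40) / 8.0)

        thr_rms = 3 * np.sqrt(np.mean(trace ** 2))
        thr_robust = 3 * np.median(np.abs(trace)) / 0.6745

        print("3 x RMS threshold    : %.2f" % thr_rms)
        print("3 x robust threshold : %.2f" % thr_robust)          # closer to 3 x true noise SD (3.0)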

  18. PAX: A mixed hardware/software simulation platform for spiking neural networks.

    Science.gov (United States)

    Renaud, S; Tomas, J; Lewis, N; Bornat, Y; Daouzli, A; Rudolph, M; Destexhe, A; Saïghi, S

    2010-09-01

    Many hardware-based solutions now exist for the simulation of bio-like neural networks. Less conventional than software-based systems, these types of simulators generally combine digital and analog forms of computation. In this paper we present a mixed hardware-software platform, specifically designed for the simulation of spiking neural networks, using conductance-based models of neurons and synaptic connections with dynamic adaptation rules (Spike-Timing-Dependent Plasticity). The neurons and networks are configurable, and are computed in 'biological real time' by which we mean that the difference between simulated time and simulation time is guaranteed to be lower than 50 μs. After presenting the issues and context involved in the design and use of hardware-based spiking neural networks, we describe the analog neuromimetic integrated circuits which form the core of the platform. We then explain the organization and computation principles of the modules within the platform, and present experimental results which validate the system. Designed as a tool for computational neuroscience, the platform is exploited in collaborative research projects together with neurobiology and computer science partners. Copyright 2010 Elsevier Ltd. All rights reserved.

  19. Topological dynamics in spike-timing dependent plastic model neural networks

    Directory of Open Access Journals (Sweden)

    David B. Stone

    2013-04-01

    Full Text Available Spike-timing dependent plasticity (STDP) is a biologically constrained unsupervised form of learning that potentiates or depresses synaptic connections based on the precise timing of pre-synaptic and post-synaptic firings. The effects of on-going STDP on the topology of evolving model neural networks were assessed in 50 unique simulations which modeled two hours of activity. After a period of stabilization, a number of global and local topological features were monitored periodically to quantify on-going changes in network structure. Global topological features included the total number of remaining synapses, average synaptic strengths, and average number of synapses per neuron (degree). Under a range of different input regimes and initial network configurations, each network maintained a robust and highly stable global structure across time. Local topology was monitored by assessing state changes of all three-neuron subgraphs (triads) present in the networks. Overall counts and the range of triad configurations varied little across the simulations; however, a substantial set of individual triads continued to undergo rapid state changes and revealed a dynamic local topology. In addition, specific small-world properties also fluctuated across time. These findings suggest that on-going STDP provides an efficient means of selecting and maintaining a stable yet flexible network organization.
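    The triad-state monitoring described above corresponds to a standard directed triad census. Assuming networkx is available, a minimal sketch on a random directed graph (the graph size and connection probability are illustrative stand-ins, not the paper's simulated networks):

```python
import networkx as nx
import numpy as np

# Build a random directed "synaptic" graph and count the 16 standard directed triad types.
# networkx's triadic_census implements the classical triad classification.
rng = np.random.default_rng(1)
n_neurons, p_connect = 50, 0.05
G = nx.gnp_random_graph(n_neurons, p_connect, seed=1, directed=True)

census = nx.triadic_census(G)            # dict: triad type (e.g. '003', '021D', '300') -> count
nonempty = {k: v for k, v in census.items() if v > 0 and k != '003'}
print("mean total (in+out) degree:", np.mean([d for _, d in G.degree()]))
print("non-empty triad types:", nonempty)
```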

  20. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    Science.gov (United States)

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
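    The event-driven versus time-driven distinction can be illustrated on a single LIF neuron: the time-driven scheme steps the membrane at a fixed dt, while the event-driven scheme jumps analytically from one input spike to the next. The sketch below uses assumed parameter values and is only a toy illustration of the two update schemes, not the simulator itself.

```python
import numpy as np

# One LIF neuron driven by a list of input spike times, simulated two ways.
TAU, V_TH, V_RESET, W_IN = 20.0, 1.0, 0.0, 0.4   # ms and arbitrary units; illustrative values

def time_driven(input_spikes, dt=0.1, T=100.0):
    v, out = 0.0, []
    spikes = set(np.round(np.array(input_spikes) / dt).astype(int))
    for step in range(int(T / dt)):
        v *= np.exp(-dt / TAU)              # leak over one fixed step
        if step in spikes:
            v += W_IN
        if v >= V_TH:
            out.append(step * dt)
            v = V_RESET
    return out

def event_driven(input_spikes, T=100.0):
    v, t_last, out = 0.0, 0.0, []
    for t in sorted(s for s in input_spikes if s <= T):
        v *= np.exp(-(t - t_last) / TAU)    # jump straight to the next event
        t_last = t
        v += W_IN
        if v >= V_TH:
            out.append(t)
            v = V_RESET
    return out

inp = [5.0, 6.0, 7.0, 40.0, 41.0, 42.0, 43.0]
print("time-driven  output spikes:", time_driven(inp))
print("event-driven output spikes:", event_driven(inp))
```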

  1. Technology-aware algorithm design for neural spike detection, feature extraction, and dimensionality reduction.

    Science.gov (United States)

    Gibson, Sarah; Judy, Jack W; Marković, Dejan

    2010-10-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to 1) obtain single-unit activity and 2) perform data reduction for wireless data transmission. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection, feature-extraction, and dimensionality-reduction algorithms for spike sorting are described and evaluated in terms of accuracy versus complexity. The nonlinear energy operator is chosen as the optimal spike-detection algorithm, as it is the most robust to noise while remaining relatively simple. The discrete-derivatives method is chosen as the optimal feature-extraction method, maintaining high accuracy across signal-to-noise ratios with a complexity orders of magnitude less than that of traditional methods such as principal-component analysis. We introduce the maximum-difference algorithm, which is shown to be the best dimensionality-reduction method for hardware spike sorting.
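    The nonlinear energy operator selected for detection is commonly defined as psi[n] = x[n]^2 - x[n-1]*x[n+1]. A minimal sketch with an assumed smoothing window and threshold multiplier (both illustrative choices, not the paper's settings):

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1] (zero at the edges)."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def neo_detect(x, k=8.0):
    """Flag samples whose smoothed NEO output exceeds k times its mean (k is illustrative)."""
    psi = np.convolve(neo(x), np.bartlett(9), mode="same")   # light smoothing window
    return np.flatnonzero(psi > k * psi.mean())

# Synthetic check: white noise with two sharp transients.
rng = np.random.default_rng(2)
x = rng.normal(0, 1.0, 2000)
x[500:505] += np.array([2.0, 6.0, 9.0, 6.0, 2.0])
x[1500:1505] += np.array([2.0, 6.0, 9.0, 6.0, 2.0])
print(neo_detect(x))
```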

  2. Stochastic Spiking Neural Networks Enabled by Magnetic Tunnel Junctions: From Nontelegraphic to Telegraphic Switching Regimes

    Science.gov (United States)

    Liyanagedera, Chamika M.; Sengupta, Abhronil; Jaiswal, Akhilesh; Roy, Kaushik

    2017-12-01

    Stochastic spiking neural networks based on nanoelectronic spin devices can be a possible pathway to achieving "brainlike" compact and energy-efficient cognitive intelligence. The computational model attempts to exploit the intrinsic device stochasticity of nanoelectronic synaptic or neural components to perform learning or inference. However, there has been limited analysis on the scaling effect of stochastic spin devices and its impact on the operation of such stochastic networks at the system level. This work attempts to explore the design space and analyze the performance of nanomagnet-based stochastic neuromorphic computing architectures for magnets with different barrier heights. We illustrate how the underlying network architecture must be modified to account for the random telegraphic switching behavior displayed by magnets with low barrier heights as they are scaled into the superparamagnetic regime. We perform a device-to-system-level analysis on a deep neural-network architecture for a digit-recognition problem on the MNIST data set.

  3. The role of cortical oscillations in a spiking neural network model of the basal ganglia.

    Directory of Open Access Journals (Sweden)

    Zafeirios Fountas

    Full Text Available Although brain oscillations involving the basal ganglia (BG) have been the target of extensive research, the main focus lies disproportionally on oscillations generated within the BG circuit rather than other sources, such as cortical areas. We remedy this here by investigating the influence of various cortical frequency bands on the intrinsic effective connectivity of the BG, as well as the role of the latter in regulating cortical behaviour. To do this, we construct a detailed neural model of the complete BG circuit based on fine-tuned spiking neurons, with both electrical and chemical synapses as well as short-term plasticity between structures. As a measure of effective connectivity, we estimate information transfer between nuclei by means of transfer entropy. Our model successfully reproduces firing and oscillatory behaviour found in both the healthy and Parkinsonian BG. We found that, indeed, effective connectivity changes dramatically for different cortical frequency bands and phase offsets, which are able to modulate (or even block) information flow in the three major BG pathways. In particular, alpha (8-12Hz) and beta (13-30Hz) oscillations activate the direct BG pathway, and favour the modulation of the indirect and hyper-direct pathways via the subthalamic nucleus-globus pallidus loop. In contrast, gamma (30-90Hz) frequencies block the information flow from the cortex completely through activation of the indirect pathway. Finally, below alpha, all pathways decay gradually and the system gives rise to spontaneous activity generated in the globus pallidus. Our results indicate the existence of a multimodal gating mechanism at the level of the BG that can be entirely controlled by cortical oscillations, and provide evidence for the hypothesis of cortically-entrained but locally-generated subthalamic beta activity. These two findings suggest new insights into the pathophysiology of specific BG disorders.

  4. The role of cortical oscillations in a spiking neural network model of the basal ganglia.

    Science.gov (United States)

    Fountas, Zafeirios; Shanahan, Murray

    2017-01-01

    Although brain oscillations involving the basal ganglia (BG) have been the target of extensive research, the main focus lies disproportionally on oscillations generated within the BG circuit rather than other sources, such as cortical areas. We remedy this here by investigating the influence of various cortical frequency bands on the intrinsic effective connectivity of the BG, as well as the role of the latter in regulating cortical behaviour. To do this, we construct a detailed neural model of the complete BG circuit based on fine-tuned spiking neurons, with both electrical and chemical synapses as well as short-term plasticity between structures. As a measure of effective connectivity, we estimate information transfer between nuclei by means of transfer entropy. Our model successfully reproduces firing and oscillatory behaviour found in both the healthy and Parkinsonian BG. We found that, indeed, effective connectivity changes dramatically for different cortical frequency bands and phase offsets, which are able to modulate (or even block) information flow in the three major BG pathways. In particular, alpha (8-12Hz) and beta (13-30Hz) oscillations activate the direct BG pathway, and favour the modulation of the indirect and hyper-direct pathways via the subthalamic nucleus-globus pallidus loop. In contrast, gamma (30-90Hz) frequencies block the information flow from the cortex completely through activation of the indirect pathway. Finally, below alpha, all pathways decay gradually and the system gives rise to spontaneous activity generated in the globus pallidus. Our results indicate the existence of a multimodal gating mechanism at the level of the BG that can be entirely controlled by cortical oscillations, and provide evidence for the hypothesis of cortically-entrained but locally-generated subthalamic beta activity. These two findings suggest new insights into the pathophysiology of specific BG disorders.
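    Transfer entropy between binned spike trains, used above as the measure of effective connectivity, can be sketched for the simplest discrete case (binary bins, one-step histories). The estimator and the toy data below are illustrative; the study's implementation and history lengths may differ.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """TE(X -> Y) in bits for binary sequences, with one-sample histories."""
    x, y = np.asarray(x, int), np.asarray(y, int)
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))        # (y_t+1, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))              # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))                # (y_t+1, y_t)
    singles_y = Counter(y[:-1])                           # y_t
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]               # p(y_t+1 | y_t, x_t)
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]   # p(y_t+1 | y_t)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# Toy check: y copies x with a one-step delay (plus noise), so TE(x->y) >> TE(y->x).
rng = np.random.default_rng(3)
x = (rng.random(20000) < 0.2).astype(int)
noise = (rng.random(20000) < 0.05).astype(int)
y = np.roll(x, 1) ^ noise
print("TE(x->y):", round(transfer_entropy(x, y), 3), "TE(y->x):", round(transfer_entropy(y, x), 3))
```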

  5. Spiking Neural Network With Distributed Plasticity Reproduces Cerebellar Learning in Eye Blink Conditioning Paradigms.

    Science.gov (United States)

    Antonietti, Alberto; Casellato, Claudia; Garrido, Jesús A; Luque, Niceto R; Naveros, Francisco; Ros, Eduardo; D'Angelo, Egidio; Pedrocchi, Alessandra

    2016-01-01

    In this study, we defined a realistic cerebellar model through the use of artificial spiking neural networks, testing it in computational simulations that reproduce associative motor tasks in multiple sessions of acquisition and extinction. Using evolutionary algorithms, we tuned the cerebellar microcircuit to find the near-optimal plasticity mechanism parameters that best reproduced human-like behavior in eye blink classical conditioning, one of the most extensively studied paradigms related to the cerebellum. We used two models: one with only the cortical plasticity and another including two additional plasticity sites at the nuclear level. First, both spiking cerebellar models were able to reproduce the real human behaviors well, in terms of both "timing" and "amplitude", expressing rapid acquisition, stable late acquisition, rapid extinction, and faster reacquisition of an associative motor task. Even though the model with only the cortical plasticity site showed good learning capabilities, the model with distributed plasticity produced faster and more stable acquisition of conditioned responses in the reacquisition phase. This behavior is explained by the effect of the nuclear plasticities, which have slow dynamics and can express memory consolidation and saving. We showed how the spiking dynamics of multiple interactive neural mechanisms implicitly drive multiple essential components of complex learning processes. This study presents a very advanced computational model, developed together by biomedical engineers, computer scientists, and neuroscientists. Given its realistic features, the proposed model can provide confirmations and suggestions about neurophysiological and pathological hypotheses and can be used in challenging clinical applications.

  6. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Science.gov (United States)

    Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil

    2016-10-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  7. A Spiking Neural Network Based Cortex-Like Mechanism and Application to Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Si-Yao Fu

    2012-01-01

    Full Text Available In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as a feedforward, hierarchical simulation of the ventral stream of the visual cortex using a biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of the visual cortex and developments on artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and the computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of being capable of dealing with complicated pattern recognition problems, suggesting that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanisms has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanisms.

  8. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Directory of Open Access Journals (Sweden)

    Yoonsik Shim

    2016-10-01

    Full Text Available We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  9. Evolving spiking neural networks: a novel growth algorithm exhibits unintelligent design

    Science.gov (United States)

    Schaffer, J. David

    2015-06-01

    Spiking neural networks (SNNs) have drawn considerable excitement because of their computational properties, believed to be superior to conventional von Neumann machines, and sharing properties with living brains. Yet progress building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome for this algorithm grows as O(n) where n is the number of neurons; n is also evolved. The genome not only specifies the network topology, but all its parameters as well. Experiments show the algorithm producing SNNs that exhibit robust spike-bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. In addition, the output spike patterns retain evidence of the specific perturbation of the inputs, a feature that could be exploited by network additions that could use this information for refined decision making if required. On a second task, a sequence detector, a discriminating design was found that might be considered an example of "unintelligent design"; extra non-functional neurons were included that, while inefficient, did not hamper its proper functioning.

  10. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
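    The qualitative effect of spike-frequency adaptation on a ring network can be illustrated with a discretized rate model in which adaptation builds up with activity and feeds back negatively. The sketch below uses an assumed cosine-tuned weight profile, rectified-linear rates, and hand-picked time constants; it is not the analytical model of the paper.

```python
import numpy as np

# Ring network rate model with spike-frequency adaptation (SFA); all values illustrative.
N = 64
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
W = (0.9 * np.cos(theta[:, None] - theta[None, :]) - 0.2) / N   # cosine-tuned recurrent weights

def simulate(stim_center, T=500.0, dt=1.0, tau_r=10.0, tau_a=100.0, g_a=0.5):
    r = np.zeros(N)          # firing rates
    a = np.zeros(N)          # adaptation variable (grows with activity, inhibits it)
    I = 2.0 * np.exp(np.cos(theta - stim_center) - 1.0)          # bump-shaped external input
    for _ in range(int(T / dt)):
        drive = W @ r + I - g_a * a
        r += dt / tau_r * (-r + np.maximum(drive, 0.0))          # rectified-linear rate dynamics
        a += dt / tau_a * (-a + r)                               # SFA tracks the rate
    return r

r = simulate(stim_center=0.0)
print("peak of activity bump at theta =", round(float(theta[np.argmax(r)]), 2))
```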

  11. Spiking in auditory cortex following thalamic stimulation is dominated by cortical network activity

    Science.gov (United States)

    Krause, Bryan M.; Raz, Aeyal; Uhlrich, Daniel J.; Smith, Philip H.; Banks, Matthew I.

    2014-01-01

    The state of the sensory cortical network can have a profound impact on neural responses and perception. In rodent auditory cortex, sensory responses are reported to occur in the context of network events, similar to brief UP states, that produce “packets” of spikes and are associated with synchronized synaptic input (Bathellier et al., 2012; Hromadka et al., 2013; Luczak et al., 2013). However, traditional models based on data from visual and somatosensory cortex predict that ascending sensory thalamocortical (TC) pathways sequentially activate cells in layers 4 (L4), L2/3, and L5. The relationship between these two spatio-temporal activity patterns is unclear. Here, we used calcium imaging and electrophysiological recordings in murine auditory TC brain slices to investigate the laminar response pattern to stimulation of TC afferents. We show that although monosynaptically driven spiking in response to TC afferents occurs, the vast majority of spikes fired following TC stimulation occurs during brief UP states and outside the context of the L4>L2/3>L5 activation sequence. Specifically, monosynaptic subthreshold TC responses with similar latencies were observed throughout layers 2–6, presumably via synapses onto dendritic processes located in L3 and L4. However, monosynaptic spiking was rare, and occurred primarily in L4 and L5 non-pyramidal cells. By contrast, during brief, TC-induced UP states, spiking was dense and occurred primarily in pyramidal cells. These network events always involved infragranular layers, whereas involvement of supragranular layers was variable. During UP states, spike latencies were comparable between infragranular and supragranular cells. These data are consistent with a model in which activation of auditory cortex, especially supragranular layers, depends on internally generated network events that represent a non-linear amplification process, are initiated by infragranular cells and are tightly regulated by feed-forward inhibition.

  12. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    Directory of Open Access Journals (Sweden)

    Kit eCheung

    2016-01-01

    Full Text Available NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimised performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to approximately 600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.

  13. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    Science.gov (United States)

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.

  14. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    Science.gov (United States)

    Cheung, Kit; Schultz, Simon R.; Luk, Wayne

    2016-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542
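    Because NeuroFlow is configured through PyNN, a network description of the kind it compiles looks like an ordinary PyNN script. The sketch below targets the NEST reference backend instead of the FPGA system and assumes PyNN >= 0.9 is installed; class names may differ across versions, and the population sizes, connection probability, and weights are illustrative.

```python
# Minimal PyNN-style network description (assumes PyNN >= 0.9 with the NEST backend).
# The same script targets other backends by changing the import; parameters are illustrative.
import pyNN.nest as sim

sim.setup(timestep=0.1)                                   # ms

noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0), label="input")
cells = sim.Population(500, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0), label="cells")

proj = sim.Projection(noise, cells,
                      sim.FixedProbabilityConnector(p_connect=0.1),
                      synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

cells.record("spikes")
sim.run(1000.0)                                           # 1 s of biological time

data = cells.get_data().segments[0]
print("recorded spike trains:", len(data.spiketrains))
sim.end()
```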

  15. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    Science.gov (United States)

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other works, reinforcement learning with spiking networks can be obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system.

  16. Polychronization: computation with spikes.

    Science.gov (United States)

    Izhikevich, Eugene M

    2006-02-01

    We present a minimal spiking network that can polychronize, that is, exhibit reproducible time-locked but not synchronous firing patterns with millisecond precision, as in synfire braids. The network consists of cortical spiking neurons with axonal conduction delays and spike-timing-dependent plasticity (STDP); a ready-to-use MATLAB code is included. It exhibits sleeplike oscillations, gamma (40 Hz) rhythms, conversion of firing rates to spike timings, and other interesting regimes. Due to the interplay between the delays and STDP, the spiking neurons spontaneously self-organize into groups and generate patterns of stereotypical polychronous activity. To our surprise, the number of coexisting polychronous groups far exceeds the number of neurons in the network, resulting in an unprecedented memory capacity of the system. We speculate on the significance of polychrony to the theory of neuronal group selection (TNGS, neural Darwinism), cognitive neural computations, binding and gamma rhythm, mechanisms of attention, and consciousness as "attention to memories."
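    The underlying neuron model is Izhikevich's two-variable system, v' = 0.04v^2 + 5v + 140 - u + I and u' = a(bv - u), with the reset v <- c, u <- u + d at a spike. The original paper ships MATLAB code; the fragment below is a Python transliteration of the single-neuron update with regular-spiking parameters and an assumed step current, for illustration only.

```python
import numpy as np

# Izhikevich neuron: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u);
# on a spike (v >= 30 mV): v <- c, u <- u + d.  Regular-spiking parameters.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, T = 0.5, 500.0                      # ms
v, u = -65.0, b * -65.0
spike_times = []

for step in range(int(T / dt)):
    I = 10.0 if step * dt > 100.0 else 0.0          # assumed step current after 100 ms
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:
        spike_times.append(step * dt)
        v, u = c, u + d

print("spikes (ms):", [round(t, 1) for t in spike_times])
```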

  17. Spiking Activity of a LIF Neuron in Distributed Delay Framework

    Directory of Open Access Journals (Sweden)

    Saket Kumar Choudhary

    2016-06-01

    Full Text Available The evolution of membrane potential and spiking activity for a single leaky integrate-and-fire (LIF) neuron in a distributed delay framework (DDF) is investigated. The DDF provides a mechanism to incorporate a memory element, in terms of a delay (kernel) function, into single-neuron models. This investigation includes the LIF neuron model with two different kinds of delay kernel functions, namely, a gamma-distributed delay kernel and a hypo-exponentially distributed delay kernel. The evolution of the membrane potential for the considered models is studied in terms of the stationary state probability distribution (SPD). The stationary state probability distributions of membrane potential (SPDV) for the considered neuron models are found to be asymptotically similar, namely Gaussian distributed. In order to investigate the effect of membrane potential delay, the rate code scheme for neuronal information processing is applied. The firing rate and Fano factor for the considered neuron models are calculated, and the standard LIF model is used for the comparative study. It is noticed that distributed delay increases the spiking activity of a neuron. The increase in spiking activity in the DDF is larger for the hypo-exponential delay kernel than for the gamma delay kernel. Moreover, in the case of the hypo-exponential delay function, a LIF neuron generates spikes with a Fano factor less than 1.
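    The comparison quantities, firing rate and Fano factor, can be computed from repeated trials of the baseline model. The sketch below simulates the standard (non-delayed) LIF neuron with noisy input and reports both statistics; the distributed-delay kernels themselves are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

# Standard LIF neuron driven by noisy current, used as the comparison baseline.
# The distributed-delay variants are not reproduced here; parameters are illustrative.
def lif_spike_counts(n_trials=100, T=1000.0, dt=0.1, tau=20.0, v_th=1.0, v_reset=0.0,
                     mu=0.045, sigma=0.12, seed=4):
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    counts = np.zeros(n_trials)
    for k in range(n_trials):
        v, n_spk = 0.0, 0
        xi = rng.normal(size=n_steps)
        for i in range(n_steps):
            v += dt / tau * (-v) + mu * dt + sigma * np.sqrt(dt) * xi[i]
            if v >= v_th:
                v, n_spk = v_reset, n_spk + 1
        counts[k] = n_spk
    return counts

counts = lif_spike_counts()
rate = counts.mean()                            # spikes per second (T = 1 s)
fano = counts.var() / counts.mean()             # Fano factor of the spike counts
print(f"mean rate: {rate:.1f} Hz, Fano factor: {fano:.2f}")
```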

  18. Self-organized noise resistance of oscillatory neural networks with spike timing-dependent plasticity.

    Science.gov (United States)

    Popovych, Oleksandr V; Yanchuk, Serhiy; Tass, Peter A

    2013-10-11

    Intuitively one might expect independent noise to be a powerful tool for desynchronizing a population of synchronized neurons. We here show that, intriguingly, for oscillatory neural populations with adaptive synaptic weights governed by spike timing-dependent plasticity (STDP) the opposite is true. We found that the mean synaptic coupling in such systems increases dynamically in response to the increase of the noise intensity, and there is an optimal noise level, where the amount of synaptic coupling gets maximal in a resonance-like manner as found for the stochastic or coherence resonances, although the mechanism in our case is different. This constitutes a noise-induced self-organization of the synaptic connectivity, which effectively counteracts the desynchronizing impact of independent noise over a wide range of the noise intensity. Given the attempts to counteract neural synchrony underlying tinnitus with noisers and maskers, our results may be of clinical relevance.

  19. Spike-frequency adapting neural ensembles: beyond mean adaptation and renewal theories.

    Science.gov (United States)

    Muller, Eilif; Buesing, Lars; Schemmel, Johannes; Meier, Karlheinz

    2007-11-01

    We propose a Markov process model for spike-frequency adapting neural ensembles that synthesizes existing mean-adaptation approaches, population density methods, and inhomogeneous renewal theory, resulting in a unified and tractable framework that goes beyond renewal and mean-adaptation theories by accounting for correlations between subsequent interspike intervals. A method for efficiently generating inhomogeneous realizations of the proposed Markov process is given, numerical methods for solving the population equation are presented, and an expression for the first-order interspike interval correlation is derived. Further, we show that the full five-dimensional master equation for a conductance-based integrate-and-fire neuron with spike-frequency adaptation and a relative refractory mechanism driven by Poisson spike trains can be reduced to a two-dimensional generalization of the proposed Markov process by an adiabatic elimination of fast variables. For static and dynamic stimulation, negative serial interspike interval correlations and transient population responses, respectively, of Monte Carlo simulations of the full five-dimensional system can be accurately described by the proposed two-dimensional Markov process.

  20. Mesoscale-duration activated states gate spiking in response to fast rises in membrane voltage in the awake brain.

    Science.gov (United States)

    Singer, Annabelle C; Talei Franzesi, Giovanni; Kodandaramaiah, Suhasa B; Flores, Francisco J; Cohen, Jeremy D; Lee, Albert K; Borgers, Christoph; Forest, Craig R; Kopell, Nancy J; Boyden, Edward S

    2017-08-01

    Seconds-scale network states, affecting many neurons within a network, modulate neural activity by complementing fast integration of neuron-specific inputs that arrive in the milliseconds before spiking. Nonrhythmic subthreshold dynamics at intermediate timescales, however, are less well characterized. We found, using automated whole cell patch clamping in vivo, that spikes recorded in CA1 and barrel cortex in awake mice are often preceded not only by monotonic voltage rises lasting milliseconds but also by more gradual (lasting tens to hundreds of milliseconds) depolarizations. The latter exert a gating function on spiking, in a fashion that depends on the gradual rise duration: the probability of spiking was higher for longer gradual rises, even when controlled for the amplitude of the gradual rises. Barrel cortex double-autopatch recordings show that gradual rises are shared across some, but not all, neurons. The gradual rises may represent a new kind of state, intermediate both in timescale and in proportion of neurons participating, which gates a neuron's ability to respond to subsequent inputs.NEW & NOTEWORTHY We analyzed subthreshold activity preceding spikes in hippocampus and barrel cortex of awake mice. Aperiodic voltage ramps extending over tens to hundreds of milliseconds consistently precede and facilitate spikes, in a manner dependent on both their amplitude and their duration. These voltage ramps represent a "mesoscale" activated state that gates spike production in vivo. Copyright © 2017 the American Physiological Society.

  1. Emergence of slow collective oscillations in neural networks with spike-timing dependent plasticity

    DEFF Research Database (Denmark)

    Mikkelsen, Kaare; Imparato, Alberto; Torcini, Alessandro

    2013-01-01

    The collective dynamics of excitatory pulse coupled neurons with spike timing dependent plasticity (STDP) is studied. The introduction of STDP induces persistent irregular oscillations between strongly and weakly synchronized states, reminiscent of brain activity during slow-wave sleep. We explain...

  2. A non-parametric Bayesian approach for clustering and tracking non-stationarities of neural spikes.

    Science.gov (United States)

    Shalchyan, Vahid; Farina, Dario

    2014-02-15

    Neural spikes from multiple neurons recorded in a multi-unit signal are usually separated by clustering. Drifts in the position of the recording electrode relative to the neurons over time cause gradual changes in the position and shapes of the clusters, challenging the clustering task. By dividing the data into short time intervals, Bayesian tracking of the clusters based on a Gaussian cluster model has been previously proposed. However, the Gaussian cluster model is often not verified for neural spikes. We present a Bayesian clustering approach that makes no assumptions on the distribution of the clusters and uses kernel-based density estimation of the clusters in every time interval as a prior for Bayesian classification of the data in the subsequent time interval. The proposed method was tested and compared to the Gaussian model-based approach for cluster tracking by using both simulated and experimental datasets. The results showed that the proposed non-parametric kernel-based density estimation of the clusters outperformed the sequential Gaussian model fitting in both simulated and experimental data tests. Using non-parametric kernel density-based clustering that makes no assumptions on the distribution of the clusters enhances the ability to track cluster non-stationarity over time with respect to the Gaussian cluster modeling approach. Copyright © 2013 Elsevier B.V. All rights reserved.
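    The core idea, estimating each cluster's density non-parametrically in one interval and using it to classify spikes in the next, can be sketched with an off-the-shelf kernel density estimator. The bandwidth, the 2-D feature space, and the synthetic drifting clusters below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Kernel density estimate per cluster in interval t, used to classify spikes in interval t+1.
rng = np.random.default_rng(5)

# Interval t: two labelled clusters of spike features (e.g. the first two principal components).
c0 = rng.normal([0.0, 0.0], 0.3, size=(300, 2))
c1 = rng.normal([2.0, 1.0], 0.3, size=(300, 2))
kde0 = KernelDensity(kernel="gaussian", bandwidth=0.2).fit(c0)
kde1 = KernelDensity(kernel="gaussian", bandwidth=0.2).fit(c1)
log_priors = np.log([0.5, 0.5])

# Interval t+1: the clusters have drifted slightly; classify new spikes by log posterior.
new0 = rng.normal([0.2, 0.1], 0.3, size=(100, 2))
new1 = rng.normal([2.1, 1.2], 0.3, size=(100, 2))
new = np.vstack([new0, new1])
log_post = np.column_stack([kde0.score_samples(new), kde1.score_samples(new)]) + log_priors
labels = log_post.argmax(axis=1)
print("assigned to cluster 0:", int((labels == 0).sum()), "cluster 1:", int((labels == 1).sum()))
```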

  3. Spike-timing computation properties of a feed-forward neural network model

    Directory of Open Access Journals (Sweden)

    Drew Benjamin Sinha

    2014-01-01

    Full Text Available Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g. serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.

  4. Synchronous digital implementation of the AER communication scheme for emulating large-scale spiking neural networks models

    OpenAIRE

    Moreno Aróstegui, Juan Manuel; Madrenas Boadas, Jordi; Kotynia, L.

    2009-01-01

    In this paper we shall present a fully synchronous digital implementation of the Address Event Representation (AER) communication scheme that has been used in the PERPLEXUS chip in order to permit the emulation of large-scale biologically inspired spiking neural networks models. By introducing specific commands in the AER protocol it is possible to distribute the AER bus among a large number of chips where the functionality of the spiking neurons is being emulated. A c...

  5. EPILEPTIC ENCEPHALOPATHY WITH CONTINUOUS SPIKES-WAVES ACTIVITY DURING SLEEP

    Directory of Open Access Journals (Sweden)

    E. D. Belousova

    2012-01-01

    Full Text Available The author presents a review and discussion of current scientific literature devoted to epileptic encephalopathy with continuous spikes-waves activity during sleep — a special form of partly reversible, age-dependent epileptic encephalopathy characterized by a triad of symptoms: continuous prolonged epileptiform (spike-wave) activity on the EEG during sleep, epileptic seizures, and cognitive disorders. The author describes the classification, pathogenesis and etiology, prevalence, clinical picture, and diagnostics of this disorder, including its characteristic EEG anomalies. Special attention is given to approaches to the treatment of epileptic encephalopathy with continuous spikes-waves activity during sleep. The efficacy of valproates, corticosteroid hormones, and antiepileptic drugs of other groups is considered. The author also presents personal experience of treating this disorder with corticosteroids, the therapy scheme, and an assessment of its efficacy.

  6. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Directory of Open Access Journals (Sweden)

    Varsha H. Rallapalli

    2016-10-01

    Full Text Available Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.

  7. Functional connectivity among spike trains in neural assemblies during rat working memory task.

    Science.gov (United States)

    Xie, Jiacun; Bai, Wenwen; Liu, Tiaotiao; Tian, Xin

    2014-11-01

    Working memory refers to a brain system that provides temporary storage to manipulate information for complex cognitive tasks. As the brain is a complex, dynamic, and interwoven network of connections and interactions, the questions raised here are: how can the mechanism of working memory be investigated from the viewpoint of functional connectivity in the brain network? And how can the most characteristic features of functional connectivity be presented in a low-dimensional network? To address these questions, we recorded spike trains in the prefrontal cortex with multiple electrodes while rats performed a working memory task in a Y-maze. The functional connectivity matrix among spike trains was calculated via maximum likelihood estimation (MLE). The average connectivity value Cc, the mean of the matrix, was calculated and used to describe connectivity strength quantitatively. The spike network was constructed from the functional connectivity matrix. The information transfer efficiency Eglob was calculated and used to characterize the network. In order to establish a low-dimensional spike network, active neurons with firing rates higher than the average rate were selected based on sparse coding. The results show that the connectivity Cc and the network transfer efficiency Eglob varied with time during the task. The maximum values of Cc and Eglob occurred prior to the working memory behavioral reference point. Compared with the results in the original network, the feature network presented more characteristic features of functional connectivity. Copyright © 2014 Elsevier B.V. All rights reserved.
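    Given a functional connectivity matrix (however estimated), the two summary quantities Cc and Eglob are straightforward to compute. The sketch below uses a random matrix as a stand-in, an assumed threshold for building the graph, and networkx's global_efficiency; the MLE connectivity estimation step of the paper is not reproduced.

```python
import numpy as np
import networkx as nx

# Cc = mean of the pairwise connectivity values; Eglob = global efficiency of the graph
# obtained by thresholding the matrix. The connectivity values here are random stand-ins.
rng = np.random.default_rng(6)
n_units = 20
C = rng.random((n_units, n_units))
np.fill_diagonal(C, 0.0)

Cc = C[~np.eye(n_units, dtype=bool)].mean()         # mean of the off-diagonal connectivity values

A = (C > 0.7).astype(int)                            # keep only strong connections (threshold assumed)
G = nx.from_numpy_array(np.maximum(A, A.T))          # undirected graph (networkx >= 2.6)
Eglob = nx.global_efficiency(G)                      # mean inverse shortest-path length

print(f"Cc = {Cc:.3f}, Eglob = {Eglob:.3f}")
```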

  8. A spiking neural network model of model-free reinforcement learning with high-dimensional sensory input and perceptual ambiguity.

    Science.gov (United States)

    Nakano, Takashi; Otsuka, Makoto; Yoshimoto, Junichiro; Doya, Kenji

    2015-01-01

    A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations which are noisy, or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problem is formally known as partially observable reinforcement learning (PORL) problems. It provides a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach.

  9. Extraction and characterization of essential discharge patterns from multisite recordings of spiking ongoing activity.

    Directory of Open Access Journals (Sweden)

    Riccardo Storchi

    Full Text Available BACKGROUND: Neural activation patterns often proceed by schemes or motifs distributed across the involved cortical networks. As neurons are correlated, the estimate of all possible dependencies quickly goes out of control. The complex nesting of different oscillation frequencies and their high non-stationarity further hamper any quantitative evaluation of spiking network activities. The problem is exacerbated by the intrinsic variability of neural patterns. METHODOLOGY/PRINCIPAL FINDINGS: Our technique introduces two important novelties and makes it possible to isolate essential patterns on larger sets of spiking neurons and brain activity regimes. First, the sampling procedure over N units is based on a fixed spike number k in order to detect N-dimensional arrays (k-sequences), whose sum over all dimensions is k. Then k-sequence variability is greatly reduced by a hierarchical separative clustering that assigns large amounts of distinct k-sequences to few classes. Iterative separations are stopped when the dimension of each cluster comes to be smaller than a certain threshold. As threshold tuning critically impacts the number of classes extracted, we developed an effective cost criterion to select the shortest possible description of our dataset. Finally we described three indices (C, S, R) to evaluate the average pattern complexity, the structure of essential classes and their stability in time. CONCLUSIONS/SIGNIFICANCE: We validated this algorithm with four kinds of surrogate activity, ranging from random to very regularly patterned. Then we characterized a selection of ongoing activity recordings. By the S index we identified unstable, moderately and strongly stable patterns while by the C and the R indices we evidenced their non-random structure. Our algorithm seems able to extract interesting and non-trivial spatial dynamics from multisource neuronal recordings of ongoing and potentially stimulated activity. Combined with time-frequency analysis of

  10. Anti-correlations in the degree distribution increase stimulus detection performance in noisy spiking neural networks.

    Science.gov (United States)

    Martens, Marijn B; Houweling, Arthur R; E Tiesinga, Paul H

    2017-02-01

    Neuronal circuits in the rodent barrel cortex are characterized by stable low firing rates. However, recent experiments show that short spike trains elicited by electrical stimulation in single neurons can induce behavioral responses. Hence, the underlying neural networks provide stability against internal fluctuations in the firing rate, while simultaneously making the circuits sensitive to small external perturbations. Here we studied whether stability and sensitivity are affected by the connectivity structure in recurrently connected spiking networks. We found that anti-correlation between the number of afferent (in-degree) and efferent (out-degree) synaptic connections of neurons increases stability against pathological bursting, relative to networks where the degrees were either positively correlated or uncorrelated. In the stable network state, stimulation of a few cells could lead to a detectable change in the firing rate. To quantify the ability of networks to detect the stimulation, we used a receiver operating characteristic (ROC) analysis. For a given level of background noise, networks with anti-correlated degrees displayed the lowest false positive rates, and consequently had the highest stimulus detection performance. We propose that anti-correlation in the degree distribution may be a computational strategy employed by sensory cortices to increase the detectability of external stimuli. We show that networks with anti-correlated degrees can in principle be formed by applying learning rules comprised of a combination of spike-timing dependent plasticity, homeostatic plasticity and pruning to networks with uncorrelated degrees. To test our prediction we suggest a novel experimental method to estimate correlations in the degree distribution.
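    The ROC analysis of stimulus detectability can be sketched directly from two distributions of population firing rates, one with and one without stimulation. The synthetic rates and effect size below are illustrative stand-ins for the output of the model networks.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# ROC analysis of stimulus detectability from population firing rates.
# The rate distributions below are synthetic stand-ins for the "stimulus absent"
# and "stimulus present" conditions of the network model.
rng = np.random.default_rng(7)
rate_no_stim = rng.normal(5.0, 1.0, 500)        # background firing rate (Hz)
rate_stim = rng.normal(5.6, 1.0, 500)           # small rate increase when a few cells are stimulated

rates = np.concatenate([rate_no_stim, rate_stim])
labels = np.concatenate([np.zeros(500), np.ones(500)])

fpr, tpr, thresholds = roc_curve(labels, rates)
auc = roc_auc_score(labels, rates)
print(f"detection AUC: {auc:.3f}")
idx = np.argmin(np.abs(thresholds - 6.0))       # threshold ~1 Hz above background
print("false-positive rate at that threshold:", round(float(fpr[idx]), 3))
```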

  11. FPGA IMPLEMENTATION OF ADAPTIVE INTEGRATED SPIKING NEURAL NETWORK FOR EFFICIENT IMAGE RECOGNITION SYSTEM

    Directory of Open Access Journals (Sweden)

    T. Pasupathi

    2014-05-01

    Full Text Available Image recognition is a technology which can be used in various applications such as medical image recognition systems, security, defense video tracking, and factory automation. In this paper we present a novel pipelined architecture of an adaptive integrated Artificial Neural Network for image recognition. In our proposed work we have combined the spiking neuron concept with an ANN to achieve an efficient architecture for image recognition. The set of training images is trained by the ANN and the target output is identified. Real-time videos are captured and then converted into frames for testing purposes, and the images are recognized. The machine can operate at up to 40 frames/s using images acquired from the camera. The system has been implemented on an XC3S400 SPARTAN-3 Field Programmable Gate Array.

  12. Spiking neural network model for memorizing sequences with forward and backward recall.

    Science.gov (United States)

    Borisyuk, Roman; Chik, David; Kazanovich, Yakov; da Silva Gomes, João

    2013-06-01

    We present an oscillatory network of conductance based spiking neurons of Hodgkin-Huxley type as a model of memory storage and retrieval of sequences of events (or objects). The model is inspired by psychological and neurobiological evidence on sequential memories. The building block of the model is an oscillatory module which contains excitatory and inhibitory neurons with all-to-all connections. The connection architecture comprises two layers. A lower layer represents consecutive events during their storage and recall. This layer is composed of oscillatory modules. Plastic excitatory connections between the modules are implemented using an STDP type learning rule for sequential storage. Excitatory neurons in the upper layer project star-like modifiable connections toward the excitatory lower layer neurons. These neurons in the upper layer are used to tag sequences of events represented in the lower layer. Computer simulations demonstrate good performance of the model including difficult cases when different sequences contain overlapping events. We show that the model with STDP type or anti-STDP type learning rules can be applied for the simulation of forward and backward replay of neural spikes respectively. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Lateral Information Processing by Spiking Neurons: A Theoretical Model of the Neural Correlate of Consciousness

    Directory of Open Access Journals (Sweden)

    Marc Ebner

    2011-01-01

    Full Text Available Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on “autopilot”). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the “conscious pilot”) suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone, moves through the brain as an executive agent, converting nonconscious “auto-pilot” cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways “gap junctions” in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions form a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap-junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of

  14. Successful Reconstruction of a Physiological Circuit with Known Connectivity from Spiking Activity Alone

    Science.gov (United States)

    Gerhard, Felipe; Kispersky, Tilman; Gutierrez, Gabrielle J.; Marder, Eve; Kramer, Mark; Eden, Uri

    2013-01-01

    Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities. PMID:23874181

  15. Successful reconstruction of a physiological circuit with known connectivity from spiking activity alone.

    Directory of Open Access Journals (Sweden)

    Felipe Gerhard

    Full Text Available Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities.
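
    A minimal sketch of the point-process modelling idea described in this record: each neuron's binned spike train is regressed, via a Poisson GLM, on lagged spike counts of all recorded neurons (including its own history), and the fitted coupling coefficients serve as relative connectivity estimates to be thresholded. This is a generic illustration using statsmodels, not the authors' code; the lag structure and data layout are assumptions.

```python
# Illustrative point-process (GLM) coupling estimate; not the published pipeline.
import numpy as np
import statsmodels.api as sm

def coupling_weights(spikes, target, history_lags=5):
    """spikes: (n_neurons, n_bins) binary array of binned spike trains.

    Returns one summed coupling weight from every neuron (including the
    target's own history term) onto the `target` neuron.
    """
    n_neurons, n_bins = spikes.shape
    lags = range(1, history_lags + 1)
    # Design matrix: lagged spike counts of all neurons.
    cols = [np.roll(spikes[j], lag) for j in range(n_neurons) for lag in lags]
    X = sm.add_constant(np.column_stack(cols)[history_lags:])
    y = spikes[target, history_lags:]
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    coefs = fit.params[1:].reshape(n_neurons, history_lags)
    return coefs.sum(axis=1)   # one effective weight per putative presynaptic neuron
```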

  16. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks

    Directory of Open Access Journals (Sweden)

    Runchun Mark Wang

    2015-05-01

    Full Text Available We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted and/or delayed pre-synaptic spike to the target synapse in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.

  17. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks.

    Science.gov (United States)

    Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan C; van Schaik, André

    2015-01-01

    We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.

  18. A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine

    Directory of Open Access Journals (Sweden)

    Basabdatta Sen-Bhattacharya

    2017-08-01

    Full Text Available We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, which is state-of-the-art digital neuromorphic hardware built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a “basic building block” for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. Synaptic layout of the model is consistent with biology. The model response is validated with existing literature reporting entrainment in steady state visually evoked potentials (SSVEP)—brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike-train with inter-spike-intervals in the range 10–50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The resolution of simulation is 0.1 ms to ensure solution accuracy for the underlying differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time is executed in 10 s real time on SpiNNaker; this is because simulations on SpiNNaker work in real time for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz), implying a reduced information transmission fidelity. These model predictions agree with recent lumped-parameter computational model-based predictions, using conventional computers. Scalability of the framework is demonstrated by a multi

  19. Predictive features of persistent activity emergence in regular spiking and intrinsic bursting model neurons.

    Directory of Open Access Journals (Sweden)

    Kyriaki Sidiropoulou

    Full Text Available Proper functioning of working memory involves the expression of stimulus-selective persistent activity in pyramidal neurons of the prefrontal cortex (PFC), which refers to neural activity that persists for seconds beyond the end of the stimulus. The mechanisms that PFC pyramidal neurons use to discriminate between preferred vs. neutral inputs at the cellular level are largely unknown. Moreover, the presence of pyramidal cell subtypes with different firing patterns, such as regular spiking and intrinsic bursting, raises the question as to what their distinct role might be in persistent firing in the PFC. Here, we use a compartmental modeling approach to search for discriminatory features in the properties of incoming stimuli to a PFC pyramidal neuron and/or its response that signal which of these stimuli will result in persistent activity emergence. Furthermore, we use our modeling approach to study cell-type specific differences in persistent activity properties, via implementing a regular spiking (RS) and an intrinsic bursting (IB) model neuron. We identify synaptic location within the basal dendrites as a feature of stimulus selectivity. Specifically, persistent activity-inducing stimuli consist of activated synapses that are located more distally from the soma compared to non-inducing stimuli, in both model cells. In addition, the action potential (AP) latency and the first few inter-spike-intervals of the neuronal response can be used to reliably detect inducing vs. non-inducing inputs, suggesting a potential mechanism by which downstream neurons can rapidly decode the upcoming emergence of persistent activity. While the two model neurons did not differ in the coding features of persistent activity emergence, the properties of persistent activity, such as the firing pattern and the duration of temporally-restricted persistent activity, were distinct. Collectively, our results point to specific features of the neuronal response to a given

  20. Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems

    Science.gov (United States)

    Fonseca Guerra, Gabriel A.; Furber, Steve B.

    2017-01-01

    Constraint satisfaction problems (CSP) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration effectively causing a restart.

  1. Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems

    Directory of Open Access Journals (Sweden)

    Gabriel A. Fonseca Guerra

    2017-12-01

    Full Text Available Constraint satisfaction problems (CSPs) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty, heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration, effectively causing a restart.
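
    To make the stochastic-search idea concrete, here is a minimal, non-spiking analogue in Python: a noisy local search over a map-colouring CSP that is attracted to zero-violation configurations and uses random kicks to escape local minima. This is an illustrative sketch of the principle only, not the SpiNNaker implementation; the graph, colour count and noise level are assumptions.

```python
# Noise-driven search over a graph-colouring CSP (illustrative analogue only).
import random

def violations(coloring, edges):
    return sum(coloring[u] == coloring[v] for u, v in edges)

def stochastic_solve(n_nodes, edges, n_colors=3, steps=20000, noise=0.05, seed=1):
    rng = random.Random(seed)
    coloring = [rng.randrange(n_colors) for _ in range(n_nodes)]
    for _ in range(steps):
        if violations(coloring, edges) == 0:
            return coloring                      # attractor: all constraints satisfied
        node = rng.randrange(n_nodes)
        if rng.random() < noise:                 # noise: random kick of one unit
            coloring[node] = rng.randrange(n_colors)
            continue
        # Otherwise move greedily to the colour with fewest conflicts at this node.
        best = min(range(n_colors),
                   key=lambda c: sum(coloring[v] == c for u, v in edges if u == node)
                                 + sum(coloring[u] == c for u, v in edges if v == node))
        coloring[node] = best
    return coloring

# Example: colour a small graph with 3 colours.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
print(stochastic_solve(5, edges))
```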

  2. Bayesian filtering in spiking neural networks: noise, adaptation, and multisensory integration.

    Science.gov (United States)

    Bobrowski, Omer; Meir, Ron; Eldar, Yonina C

    2009-05-01

    A key requirement facing organisms acting in uncertain dynamic environments is the real-time estimation and prediction of environmental states, based on which effective actions can be selected. While it is becoming evident that organisms employ exact or approximate Bayesian statistical calculations for these purposes, it is far less clear how these putative computations are implemented by neural networks in a strictly dynamic setting. In this work, we make use of rigorous mathematical results from the theory of continuous time point process filtering and show how optimal real-time state estimation and prediction may be implemented in a general setting using simple recurrent neural networks. The framework is applicable to many situations of common interest, including noisy observations, non-Poisson spike trains (incorporating adaptation), multisensory integration, and state prediction. The optimal network properties are shown to relate to the statistical structure of the environment, and the benefits of adaptation are studied and explicitly demonstrated. Finally, we recover several existing results as appropriate limits of our general setting.
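
    The filtering computation that the recurrent network is shown to implement can be written compactly in discrete time. The sketch below runs a point-process Bayesian filter over a discretised state space with a random-walk prior and Poisson spike-count observations; the tuning-curve and diffusion parameters are illustrative assumptions rather than quantities from the paper.

```python
# Discrete-time point-process filter over a gridded state space (illustrative).
import numpy as np

def point_process_filter(spike_counts, state_grid, tuning, dt=0.01, diffusion=0.1):
    """spike_counts: (T, n_cells) counts per bin; state_grid: (n_states,) array;
    tuning: (n_states, n_cells) strictly positive firing rates (Hz).
    Returns the posterior over state_grid at each time step."""
    n_states = len(state_grid)
    posterior = np.full(n_states, 1.0 / n_states)
    # Random-walk transition kernel over the discretised state space.
    dist2 = (state_grid[:, None] - state_grid[None, :]) ** 2
    transition = np.exp(-dist2 / (2 * diffusion * dt))
    transition /= transition.sum(axis=1, keepdims=True)
    history = []
    for counts in spike_counts:
        prior = transition.T @ posterior                                    # predict
        loglik = counts @ np.log(tuning.T * dt) - tuning.sum(axis=1) * dt   # Poisson likelihood
        posterior = prior * np.exp(loglik - loglik.max())                   # update (stabilised)
        posterior /= posterior.sum()
        history.append(posterior.copy())
    return np.array(history)
```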

  3. Idle state classification using spiking activity and local field potentials in a brain computer interface.

    Science.gov (United States)

    Williams, Jordan J; Tien, Rex N; Inoue, Yoh; Schwartz, Andrew B

    2016-08-01

    Previous studies of intracortical brain-computer interfaces (BCIs) have often focused on or compared the use of spiking activity and local field potentials (LFPs) for decoding kinematic movement parameters. Conversely, using these signals to detect the initial intention to use a neuroprosthetic device or not has remained a relatively understudied problem. In this study, we examined the relative performance of spiking activity and LFP signals in detecting discrete state changes in attention regarding a user's desire to actively control a BCI device. Preliminary offline results suggest that the beta and high gamma frequency bands of LFP activity demonstrated a capacity for discriminating idle/active BCI control states equal to or greater than firing rate activity on the same channel. Population classifier models using either signal modality demonstrated an indistinguishably high degree of accuracy in decoding rest periods from active BCI reach periods as well as other portions of active BCI task trials. These results suggest that either signal modality may be used to reliably detect discrete state changes on a fine time scale for the purpose of gating neural prosthetic movements.

  4. High-density microelectrode array recordings and real-time spike sorting for closed-loop experiments: an emerging technology to study neural plasticity.

    Science.gov (United States)

    Franke, Felix; Jäckel, David; Dragas, Jelena; Müller, Jan; Radivojevic, Milos; Bakkum, Douglas; Hierlemann, Andreas

    2012-01-01

    Understanding plasticity of neural networks is key to comprehending their development and function. A powerful approach to studying neural plasticity is the recording and control of pre- and post-synaptic neural activity, e.g., by using simultaneous intracellular recording and stimulation of several neurons. Intracellular recording is, however, a demanding technique and has its limitations in that only a small number of neurons can be stimulated and recorded from at the same time. Extracellular techniques offer the possibility to simultaneously record from larger numbers of neurons with relative ease, at the expense of increased effort to sort out single neuronal activities from the recorded mixture, which is a time-consuming and error-prone step, referred to as spike sorting. In this mini-review, we describe recent technological developments in two separate fields, namely CMOS-based high-density microelectrode arrays, which also allow for extracellular stimulation of neurons, and real-time spike sorting. We argue that these techniques, when combined, will provide a powerful tool to study plasticity in neural networks consisting of several thousand neurons in vitro.

  5. A 0.7 V, 40 nW Compact, Current-Mode Neural Spike Detector in 65 nm CMOS.

    Science.gov (United States)

    Yao, Enyi; Chen, Yi; Basu, Arindam

    2016-04-01

    In this paper, we describe a novel low power, compact, current-mode spike detector circuit for real-time neural recording systems where neural spikes or action potentials (AP) are of interest. Such a circuit can enable massive compression of data facilitating wireless transmission. This design can generate a high signal-to-noise ratio (SNR) output by approximating the popularly used nonlinear energy operator (NEO) through standard analog blocks. We show that a low pass filter after the NEO can be used for two functions: (i) estimate and cancel low frequency interference and (ii) estimate the threshold for spike detection. The circuit is implemented in a 65 nm CMOS process and occupies 200 μm × 150 μm of chip area. Operating from a 0.7 V power supply, it consumes about 30 nW of static power and 7 nW of dynamic power for a 100 Hz input spike rate, making it the lowest power consuming spike detector reported so far.
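
    For reference, the nonlinear energy operator that the circuit approximates, together with the low-pass-filtered NEO output used as an adaptive detection threshold, can be written in a few lines of Python. The scaling factor and filter constant below are illustrative assumptions, not values from the chip.

```python
# Software sketch of NEO-based spike detection (illustrative parameters).
import numpy as np

def neo(x):
    """psi[n] = x[n]^2 - x[n-1]*x[n+1]; emphasises high-frequency, high-energy spikes."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, C=8.0, alpha=0.01):
    """Threshold the NEO output against C times its running (low-pass) average."""
    psi = neo(x)
    threshold = np.zeros_like(psi)
    avg = 0.0
    for n, value in enumerate(psi):
        avg += alpha * (value - avg)          # first-order low-pass filter
        threshold[n] = C * avg
    return np.flatnonzero(psi > threshold)    # sample indices flagged as spikes
```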

  6. Design of silicon brains in the nano-CMOS era: spiking neurons, learning synapses and neural architecture optimization.

    Science.gov (United States)

    Cassidy, Andrew S; Georgiou, Julius; Andreou, Andreas G

    2013-09-01

    We present a design framework for neuromorphic architectures in the nano-CMOS era. Our approach to the design of spiking neurons and STDP learning circuits relies on parallel computational structures where neurons are abstracted as digital arithmetic logic units and communication processors. Using this approach, we have developed arrays of silicon neurons that scale to millions of neurons in a single state-of-the-art Field Programmable Gate Array (FPGA). We demonstrate the validity of the design methodology through the implementation of cortical development in a circuit of spiking neurons, STDP synapses, and neural architecture optimization. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Learning expectation in insects: a recurrent spiking neural model for spatio-temporal representation.

    Science.gov (United States)

    Arena, Paolo; Patané, Luca; Termini, Pietro Savio

    2012-08-01

    Insects are becoming a reference point in Neuroscience for the study of biological aspects at the basis of cognitive processes. These animals have much simpler brains than higher animals while showing an impressive capability to react adaptively and make decisions in complex environmental situations. In this paper we propose a neural model inspired by the insect olfactory system, with particular attention to the fruit fly Drosophila melanogaster. This architecture is a multilayer spiking network, where each layer is inspired by the structures of the insect brain mainly involved in olfactory information processing, namely the Mushroom Bodies, the Lateral Horns and the Antennal Lobes. In the Antennal Lobes layer olfactory signals lead to a competition among sets of neurons, resulting in a pattern which is projected to the Mushroom Bodies layer. Here a competitive reaction-diffusion process leads to the spontaneous emergence of clusters. The Lateral Horns have been modeled as a delayed input-triggered resetting system. Using plastic recurrent connections, with the addition of simple learning mechanisms, the structure is able to realize a top-down modulation at the input level. This leads to the emergence of an attentional loop as well as of basic expectation behaviors for subsequently presented stimuli. Simulation results and analysis of the biological plausibility of the architecture are provided and the role of noise in the network is reported. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Self-organization of spiking neural network that generates autonomous behavior in a real mobile robot.

    Science.gov (United States)

    Alnajjar, Fady; Murase, Kazuyuki

    2006-08-01

    In this paper, we propose a self-organization algorithm for spiking neural networks (SNNs) applicable to autonomous robots for the generation of adaptive and goal-directed behavior. First, we formulated an SNN model whose inputs and outputs were analog and whose hidden units were interconnected with each other. Next, we implemented it in a miniature mobile robot, the Khepera. To see whether a solution for the given task exists with the SNN, the robot was evolved with a genetic algorithm in the environment. The robot successfully acquired the obstacle avoidance and navigation task, demonstrating that a solution exists. After that, a self-organization algorithm based on use-dependent synaptic potentiation and depotentiation at the input-to-hidden and hidden-to-output synapses was formulated and implemented in the robot. In the environment, the robot incrementally organized the network and the given tasks were successfully performed. The time needed to acquire the desired adaptive and goal-directed behavior using the proposed self-organization method was much less than that with genetic evolution, approximately one fifth.

  9. Acceleration of spiking neural network based pattern recognition on NVIDIA graphics processors.

    Science.gov (United States)

    Han, Bing; Taha, Tarek M

    2010-04-01

    There is currently a strong push in the research community to develop biological-scale implementations of neuron-based vision models. Systems at this scale are computationally demanding and generally utilize more accurate neuron models, such as the Izhikevich and the Hodgkin-Huxley models, rather than the more popular integrate-and-fire model. We examine the feasibility of using graphics processing units (GPUs) to accelerate a spiking neural network-based character recognition system to enable such large scale systems. Two versions of the network utilizing the Izhikevich and Hodgkin-Huxley models are implemented. Three NVIDIA general-purpose (GP) GPU platforms are examined, including the GeForce 9800 GX2, the Tesla C1060, and the Tesla S1070. Our results show that the GPGPUs can provide significant speedup over conventional processors. In particular, the fastest GPGPU utilized, the Tesla S1070, provided speedups of 5.6× and 84.4× over highly optimized implementations on the fastest central processing unit (CPU) tested, a quad-core 2.67 GHz Xeon processor, for the Izhikevich and the Hodgkin-Huxley models, respectively. The CPU implementation utilized all four cores and the vector data parallelism offered by the processor. The results indicate that GPUs are well suited for this application domain.
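
    The per-neuron state update that dominates this workload is small and embarrassingly parallel, which is why it maps well onto GPUs. Below is a minimal NumPy sketch of the Izhikevich update for a vector of neurons (regular-spiking parameters; the two 0.5 ms sub-steps per 1 ms follow Izhikevich's published reference code). It illustrates the model, not the authors' GPU kernels.

```python
# Vectorised Izhikevich update (regular-spiking parameters, 1 ms time step).
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Advance membrane potential v and recovery u (float arrays) by 1 ms.

    I is the input current per neuron; returns the updated state and a
    boolean mask of neurons that spiked during this step."""
    for _ in range(2):                                    # two 0.5 ms sub-steps for stability
        v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
    u += a * (b * v - u)
    fired = v >= 30.0
    v[fired] = c                                          # reset after a spike
    u[fired] += d
    return v, u, fired
```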

  10. Event management for large scale event-driven digital hardware spiking neural networks.

    Science.gov (United States)

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms. Copyright © 2013 Elsevier Ltd. All rights reserved.
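
    As a software analogue of the event-management problem, the sketch below keeps pending spike-delivery events in a binary heap so the earliest event is always retrieved next; memory grows linearly and insertion/removal cost grows logarithmically with the number of pending events, mirroring the scaling argument above. The structured heap queue itself is a pipelined hardware data structure; this Python version only illustrates the ordering behaviour.

```python
# Binary-heap event queue for event-driven spike delivery (illustrative).
import heapq

class EventQueue:
    def __init__(self):
        self._heap = []

    def schedule(self, delivery_time, target, weight):
        """Insert a (time, target neuron, synaptic weight) event; O(log n)."""
        heapq.heappush(self._heap, (delivery_time, target, weight))

    def pop_due(self, now):
        """Yield all events whose delivery time has been reached; O(log n) each."""
        while self._heap and self._heap[0][0] <= now:
            yield heapq.heappop(self._heap)

# Usage: a spike emitted at t = 1.0 ms reaching neuron 42 after a 1.5 ms axonal delay.
q = EventQueue()
q.schedule(1.0 + 1.5, target=42, weight=0.3)
for t, target, w in q.pop_due(now=3.0):
    print(f"deliver weight {w} to neuron {target} at t={t} ms")
```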

  11. Population activity statistics dissect subthreshold and spiking variability in V1.

    Science.gov (United States)

    Bányai, Mihály; Koman, Zsombor; Orbán, Gergő

    2017-07-01

    Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpreting high-dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a matter of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models make contradictory predictions about the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY: Neural variability and covariability are puzzling aspects of cortical computations. For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of
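
    The contrast between the two generative models can be made concrete for a pair of cells: the doubly stochastic Poisson (DSP) model draws spike counts from a Poisson distribution around a shared fluctuating rate, whereas the rectified Gaussian (RG) model inherits count variability directly from correlated "membrane potential" samples. The sketch below is an illustrative toy comparison; the correlation, gain and trial numbers are assumptions, not values from the study.

```python
# Toy comparison of DSP-like vs RG-like count statistics for a neuron pair.
import numpy as np

rng = np.random.default_rng(0)
n_trials, rho = 5000, 0.4
# Shared trial-to-trial fluctuation (stand-in for membrane potential / rate modulation).
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[1.0, 1.0], cov=cov, size=n_trials)

rates = np.clip(z, 0, None) * 5.0          # rectified modulation, scaled to counts
dsp_counts = rng.poisson(rates)            # DSP: extra Poisson noise at spike generation
rg_counts = np.round(rates)                # RG: counts follow the rectified potential itself

for name, counts in [("DSP", dsp_counts), ("RG", rg_counts)]:
    fano = counts.var(axis=0) / counts.mean(axis=0)
    corr = np.corrcoef(counts.T)[0, 1]
    print(f"{name}: Fano factors ~ {fano.round(2)}, count correlation ~ {corr:.2f}")
```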

  12. A Reconfigurable and Biologically Inspired Paradigm for Computation Using Network-On-Chip and Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Jim Harkin

    2009-01-01

    Full Text Available FPGA devices have emerged as a popular platform for the rapid prototyping of biological Spiking Neural Network (SNN) applications, offering the key requirement of reconfigurability. However, FPGAs do not efficiently realise the biologically plausible neuron and synaptic models of SNNs, and current FPGA routing structures cannot accommodate the high levels of interneuron connectivity inherent in complex SNNs. This paper highlights and discusses the current challenges of implementing scalable SNNs on reconfigurable FPGAs. The paper proposes a novel field programmable neural network architecture (EMBRACE), incorporating low-power analogue spiking neurons, interconnected using a Network-on-Chip architecture. Results on the evaluation of the EMBRACE architecture using the XOR benchmark problem are presented, and the performance of the architecture is discussed. The paper also discusses the adaptability of the EMBRACE architecture in supporting fault-tolerant computing.

  13. Comparison of a spiking neural network and an MLP for robust identification of generator dynamics in a multimachine power system.

    Science.gov (United States)

    Johnson, Cameron; Venayagamoorthy, Ganesh Kumar; Mitra, Pinaki

    2009-01-01

    The applications of a spiking neural network (SNN) and a multi-layer perceptron (MLP) for online identification of generator dynamics in a multimachine power system are compared in this paper. An integrate-and-fire SNN model, which communicates information via inter-spike intervals, is applied. The neural network identifiers are used to predict, one time-step ahead, the speed and terminal voltage deviations of generators in the multimachine power system. The SNN is developed in two steps: (i) neuron centers determined by offline k-means clustering and (ii) output weights obtained by online training. The sensitivity of the SNN to the neuron centers determined in the first step is evaluated on generators of different ratings and parameters. The performance of the SNN and the MLP is compared to evaluate robustness in the identification of generator dynamics under small and large disturbances, and to illustrate that SNNs are capable of learning nonlinear dynamics of complex systems.
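
    The two-step construction described above, offline clustering of neuron centres followed by online adaptation of linear output weights, can be sketched as follows. The radial hidden activation is a stand-in for the inter-spike-interval encoding, and all parameters (number of centres, width, learning rate) are illustrative assumptions.

```python
# Two-step identifier: offline k-means centres, then online LMS output weights.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Offline step: choose k centres from data X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

def hidden_activity(x, centers, width=1.0):
    """Radial activation of each hidden unit for one input sample x."""
    return np.exp(-((centers - x) ** 2).sum(-1) / (2 * width ** 2))

def online_train(X, y, centers, lr=0.05, epochs=5):
    """Online step: least-mean-squares adaptation of the linear output weights."""
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            h = hidden_activity(x_t, centers)
            w += lr * (y_t - w @ h) * h
    return w
```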

  14. A 128-channel 6 mW wireless neural recording IC with spike feature extraction and UWB transmitter.

    Science.gov (United States)

    Chae, Moo Sung; Yang, Zhi; Yuce, Mehmet R; Hoang, Linh; Liu, Wentai

    2009-08-01

    This paper reports a 128-channel neural recording integrated circuit (IC) with on-the-fly spike feature extraction and wireless telemetry. The chip consists of eight 16-channel front-end recording blocks, a spike detection and feature extraction digital signal processor (DSP), an ultra-wideband (UWB) transmitter, and on-chip bias generators. Each recording channel has amplifiers with programmable gain and bandwidth to accommodate different types of biological signals. An analog-to-digital converter (ADC) shared by 16 amplifiers through time-multiplexing results in a balanced trade-off between power consumption and chip area. A nonlinear energy operator (NEO) based spike detector is implemented for identifying spikes, which are further processed by a digital frequency-shaping filter. The computationally efficient spike detection and feature extraction algorithms enable a favorable on-chip DSP implementation. UWB telemetry is designed to wirelessly transfer raw data from 128 recording channels at a data rate of 90 Mbit/s. The chip is realized in a 0.35 μm complementary metal-oxide-semiconductor (CMOS) process with an area of 8.8 × 7.2 mm² and consumes 6 mW by employing a sequential turn-on architecture that selectively powers off idle analog circuit blocks. The chip has been tested for electrical specifications and verified in an ex vivo biological environment.

  15. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks.

    Science.gov (United States)

    Zenke, Friedemann; Agnes, Everton J; Gerstner, Wulfram

    2015-04-21

    Synaptic plasticity, the putative basis of learning and memory formation, manifests in various forms and across different timescales. Here we show that the interaction of Hebbian homosynaptic plasticity with rapid non-Hebbian heterosynaptic plasticity is, when complemented with slower homeostatic changes and consolidation, sufficient for assembly formation and memory recall in a spiking recurrent network model of excitatory and inhibitory neurons. In the model, assemblies were formed during repeated sensory stimulation and characterized by strong recurrent excitatory connections. Even days after formation, and despite ongoing network activity and synaptic plasticity, memories could be recalled through selective delay activity following the brief stimulation of a subset of assembly neurons. Blocking any component of plasticity prevented stable functioning as a memory network. Our modelling results suggest that the diversity of plasticity phenomena in the brain is orchestrated towards achieving common functional goals.

  16. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks

    Science.gov (United States)

    Zenke, Friedemann; Agnes, Everton J.; Gerstner, Wulfram

    2015-01-01

    Synaptic plasticity, the putative basis of learning and memory formation, manifests in various forms and across different timescales. Here we show that the interaction of Hebbian homosynaptic plasticity with rapid non-Hebbian heterosynaptic plasticity is, when complemented with slower homeostatic changes and consolidation, sufficient for assembly formation and memory recall in a spiking recurrent network model of excitatory and inhibitory neurons. In the model, assemblies were formed during repeated sensory stimulation and characterized by strong recurrent excitatory connections. Even days after formation, and despite ongoing network activity and synaptic plasticity, memories could be recalled through selective delay activity following the brief stimulation of a subset of assembly neurons. Blocking any component of plasticity prevented stable functioning as a memory network. Our modelling results suggest that the diversity of plasticity phenomena in the brain is orchestrated towards achieving common functional goals. PMID:25897632

  17. Basal forebrain activation enhances between-trial reliability of low-frequency local field potentials (LFP) and spiking activity in tree shrew primary visual cortex (V1).

    Science.gov (United States)

    De Luna, Paolo; Veit, Julia; Rainer, Gregor

    2017-12-01

    Brain state has profound effects on neural processing and stimulus encoding in sensory cortices. While the synchronized state is dominated by low-frequency local field potential (LFP) activity, low-frequency LFP power is suppressed in the desynchronized state, where a concurrent enhancement in gamma power is observed. Recently, it has been shown that cortical desynchronization co-occurs with enhanced between-trial reliability of spiking activity in sensory neurons, but it is currently unclear whether this effect is also evident in LFP signals. Here, we address this question by recording both spike trains and LFP in primary visual cortex during natural movie stimulation, and using isoflurane anesthesia and basal forebrain (BF) electrical activation as proxies for synchronized and desynchronized brain states. We show that indeed, low-frequency LFP modulations ("LFP events") also occur more reliably following BF activation. Interestingly, while being more reliable, these LFP events are smaller in amplitude compared to those generated in the synchronized brain state. We further demonstrate that differences in reliability of spiking activity between cortical states can be linked to amplitude and probability of LFP events. The correlated temporal dynamics between low-frequency LFP and spiking response reliability in visual cortex suggests that these effects may both be the result of the same neural circuit activation triggered by BF stimulation, which facilitates switching between processing of incoming sensory information in the desynchronized and reverberation of internal signals in the synchronized state.

  18. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    Science.gov (United States)

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    The Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model, and to monitor the SNN's activity. Our contribution intends to provide a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures but significantly more cheaply than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. A configurable simulation environment for the efficient simulation of large-scale spiking neural networks on graphics processors.

    Science.gov (United States)

    Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L; Nicolau, Alex; Veidenbaum, Alexander V

    2009-01-01

    Neural network simulators that take into account the spiking behavior of neurons are useful for studying brain mechanisms and for various neural engineering applications. Spiking Neural Network (SNN) simulators have been traditionally simulated on large-scale clusters, super-computers, or on dedicated hardware architectures. Alternatively, Compute Unified Device Architecture (CUDA) Graphics Processing Units (GPUs) can provide a low-cost, programmable, and high-performance computing platform for simulation of SNNs. In this paper we demonstrate an efficient, biologically realistic, large-scale SNN simulator that runs on a single GPU. The SNN model includes Izhikevich spiking neurons, detailed models of synaptic plasticity and variable axonal delay. We allow user-defined configuration of the GPU-SNN model by means of a high-level programming interface written in C++ but similar to the PyNN programming interface specification. PyNN is a common programming interface developed by the neuronal simulation community to allow a single script to run on various simulators. The GPU implementation (on NVIDIA GTX-280 with 1 GB of memory) is up to 26 times faster than a CPU version for the simulation of 100K neurons with 50 Million synaptic connections, firing at an average rate of 7 Hz. For simulation of 10 Million synaptic connections and 100K neurons, the GPU SNN model is only 1.5 times slower than real-time. Further, we present a collection of new techniques related to parallelism extraction, mapping of irregular communication, and network representation for effective simulation of SNNs on GPUs. The fidelity of the simulation results was validated on CPU simulations using firing rate, synaptic weight distribution, and inter-spike interval analysis. Our simulator is publicly available to the modeling community so that researchers will have easy access to large-scale SNN simulations.

  20. A 128-Channel FPGA-Based Real-Time Spike-Sorting Bidirectional Closed-Loop Neural Interface System.

    Science.gov (United States)

    Park, Jongkil; Kim, Gookhwa; Jung, Sang-Don

    2017-12-01

    A multichannel neural interface system is an important tool for various types of neuroscientific studies. For the electrical interface with a biological system, high-precision high-speed data recording and various types of stimulation capability are required. In addition, real-time signal processing is an important feature in the implementation of a real-time closed-loop system without unwanted substantial delay for feedback stimulation. Online spike sorting, the process of assigning neural spikes to an identified group of neurons or clusters, is a necessary step to make a closed-loop path in real time, but massive memory-space requirements commonly limit hardware implementations. Here, we present a 128-channel field-programmable gate array (FPGA)-based real-time closed-loop bidirectional neural interface system. The system supports 128 channels for simultaneous signal recording and eight selectable channels for stimulation. A modular 64-channel analog front-end (AFE) provides scalability and a parameterized specification of the AFE supports the recording of various electrophysiological signal types with 1.59 ± 0.76 root-mean-square noise. The stimulator supports both voltage-controlled and current-controlled arbitrarily shaped waveforms with the programmable amplitude and duration of pulse. An empirical algorithm for online real-time spike sorting is implemented in an FPGA. The spike-sorting is performed by template matching, and templates are created by an online real-time unsupervised learning process. A memory saving technique, called dynamic cache organizing, is proposed to reduce the memory requirement down to 6 kbit per channel and modular implementation improves the scalability for further extensions.
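
    The online sorting step described above, template matching against incrementally learned templates, can be sketched in a few lines. The running-average template update and fixed distance threshold below are assumptions made for illustration; the FPGA design's actual learning rule, distance metric and dynamic cache organization are not reproduced here.

```python
# Illustrative template-matching spike sorter with unsupervised template updates.
import numpy as np

def sort_spike(waveform, templates, threshold=5.0, lr=0.1):
    """Assign a detected spike snippet to the closest stored template.

    waveform: 1-D array of samples around the spike peak.
    templates: list of 1-D float arrays, updated in place.
    Returns the index of the matched (or newly created) cluster."""
    if templates:
        dists = [np.sum((waveform - t) ** 2) for t in templates]
        best = int(np.argmin(dists))
        if dists[best] < threshold:
            templates[best] += lr * (waveform - templates[best])  # refine template online
            return best
    templates.append(np.asarray(waveform, dtype=float).copy())    # start a new cluster
    return len(templates) - 1
```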

  1. Remifentanil-induced spike activity as a diagnostic tool in epilepsy surgery

    DEFF Research Database (Denmark)

    Grønlykke, L; Knudsen, M L; Høgenhaven, H

    2008-01-01

    To assess the value of remifentanil in the intraoperative evaluation of spike activity in patients undergoing surgery for mesial temporal lobe epilepsy (MTLE).

  2. Accuracy evaluation of numerical methods used in state-of-the-art simulators for spiking neural networks.

    Science.gov (United States)

    Henker, Stephan; Partzsch, Johannes; Schüffny, René

    2012-04-01

    With the various simulators for spiking neural networks developed in recent years, a variety of numerical solution methods for the underlying differential equations are available. In this article, we introduce an approach to systematically assess the accuracy of these methods. In contrast to previous investigations, our approach focuses on a completely deterministic comparison and uses an analytically solved model as a reference. This enables the identification of typical sources of numerical inaccuracies in state-of-the-art simulation methods. In particular, with our approach we can separate the error of the numerical integration from the timing error of spike detection and propagation, the latter being prominent in simulations with fixed timestep. To verify the correctness of the testing procedure, we relate the numerical deviations to theoretical predictions for the employed numerical methods. Finally, we give an example of the influence of simulation artefacts on network behaviour and spike-timing-dependent plasticity (STDP), underlining the importance of spike-time accuracy for the simulation of STDP.

  3. GABAergic activities control spike timing- and frequency-dependent long-term depression at hippocampal excitatory synapses

    Directory of Open Access Journals (Sweden)

    Makoto Nishiyama

    2010-06-01

    Full Text Available GABAergic interneuronal network activities in the hippocampus control a variety of neural functions, including learning and memory, by regulating θ and γ oscillations. How these GABAergic activities at pre- and post-synaptic sites of hippocampal CA1 pyramidal cells differentially contribute to synaptic function and plasticity during their repetitive pre- and post-synaptic spiking at θ and γ oscillations is largely unknown. We show here that activities mediated by postsynaptic GABAARs and presynaptic GABABRs determine, respectively, the spike timing- and frequency-dependence of activity-induced synaptic modifications at Schaffer collateral-CA1 excitatory synapses. We demonstrate that both feedforward and feedback GABAAR-mediated inhibition in the postsynaptic cell controls the spike timing-dependent long-term depression of excitatory inputs (“e-LTD”) at the θ frequency. We also show that feedback postsynaptic inhibition specifically causes e-LTD of inputs that induce small postsynaptic currents (<70 pA) with LTP timing, thus enforcing the requirement of cooperativity for induction of long-term potentiation at excitatory inputs (“e-LTP”). Furthermore, under spike-timing protocols that induce e-LTP and e-LTD at excitatory synapses, we observed parallel induction of LTP and LTD at inhibitory inputs (“i-LTP” and “i-LTD”) to the same postsynaptic cells. Finally, we show that presynaptic GABABR-mediated inhibition plays a major role in the induction of frequency-dependent e-LTD at α and β frequencies. These observations demonstrate the critical influence of GABAergic interneuronal network activities in regulating the spike timing and frequency dependences of long-term synaptic modifications in the hippocampus.

  4. Dendritic spikes are enhanced by cooperative network activity in the intact hippocampus.

    Science.gov (United States)

    Kamondi, A; Acsády, L; Buzsáki, G

    1998-05-15

    In vitro experiments suggest that dendritic fast action potentials may influence the efficacy of concurrently active synapses by enhancing Ca2+ influx into the dendrites. However, the exact circumstances leading to these effects in the intact brain are not known. We have addressed these issues by performing intracellular sharp electrode recordings from morphologically identified sites in the apical dendrites of CA1 pyramidal neurons in vivo while simultaneously monitoring extracellular population activity. The amplitude of spontaneous fast action potentials in dendrites decreased as a function of distance from the soma, suggesting that dendritic propagation of fast action potentials is strongly attenuated in vivo. Whereas the amplitude variability of somatic action potentials was very small, the amplitude of fast spikes varied substantially in distal dendrites. Large-amplitude fast spikes in dendrites occurred during population discharges of CA3-CA1 neurons concurrent with field sharp waves. The large-amplitude fast spikes were associated with bursts of smaller-amplitude action potentials and putative Ca2+ spikes. Both current pulse-evoked and spontaneously occurring Ca2+ spikes were always preceded by large-amplitude fast spikes. More spikes were observed in the dendrites during sharp waves than in the soma, suggesting that local dendritic spikes may be generated during this behaviorally relevant population pattern. Because not all dendritic spikes produce somatic action potentials, they may be functionally distinct from action potentials that signal via the axon.

  5. Recording Spikes Activity in Cultured Hippocampal Neurons Using Flexible or Transparent Graphene Transistors

    Directory of Open Access Journals (Sweden)

    Farida Veliev

    2017-08-01

    Full Text Available The emergence of nanoelectronics applied to neural interfaces started a few decades ago and aims to provide new tools for replacing or restoring disabled functions of the nervous system as well as for further understanding the evolution of such a complex organization. At the same time, graphene and other 2D materials have offered new possibilities for integrating micro- and nano-devices on flexible, transparent, and biocompatible substrates, promising for bio- and neuro-electronics. In addition to the many bio-suitable features of the graphene interface, such as chemical inertness and anti-corrosive properties, its optical transparency enables a multimodal approach to neuron-based systems, the electrical layer being compatible with additional microfluidics and optical manipulation ports. The convergence of these fields will provide a next generation of neural interfaces for the reliable detection of single spikes and high-fidelity recording of neural network activity patterns. Here, we report on the fabrication of graphene field effect transistors (G-FETs) on various substrates (silicon, sapphire, glass coverslips, and polyimide deposited onto Si/SiO2 substrates), exhibiting high sensitivity (4 mS/V close to the Dirac point at VLG < VD) and a low noise level (10⁻²² A²/Hz at VLG = 0 V). We demonstrate the in vitro detection of the spontaneous activity of hippocampal neurons in-situ-grown on top of the graphene sensors during several weeks in a millimeter-size PDMS fluidic chamber (8 mm wide). These results provide an advance toward the realization of biocompatible devices for reliable and high spatio-temporal sensing of neuronal activity for both in vitro and in vivo applications.

  6. Exact subthreshold integration with continuous spike times in discrete-time neural network simulations.

    Science.gov (United States)

    Morrison, Abigail; Straube, Sirko; Plesser, Hans Ekkehard; Diesmann, Markus

    2007-01-01

    Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
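
    The exact-integration idea above applies to any neuron model with linear subthreshold dynamics: the state is advanced from one grid point to the next with a precomputed propagator (a matrix exponential), so the only remaining approximation is where spikes are placed in time. The sketch below illustrates this for a current-based leaky integrate-and-fire neuron with an exponential synapse; the time constants and units are illustrative assumptions.

```python
# Exact subthreshold propagation for a linear LIF-with-exponential-synapse model.
import numpy as np
from scipy.linalg import expm

tau_m, tau_syn, dt = 10.0, 2.0, 0.1           # ms (illustrative values)
A = np.array([[-1.0 / tau_syn, 0.0],          # d/dt [i_syn, v]:
              [1.0 / tau_m,   -1.0 / tau_m]]) #   i_syn decays, v integrates i_syn and leaks
P = expm(A * dt)                              # exact propagator for one grid step

def step(state, spike_input=0.0, w=1.0):
    """Advance state = [i_syn, v] by dt exactly; incoming spikes add w to i_syn."""
    state = P @ state
    state[0] += w * spike_input
    return state
```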

  7. Force sensor in simulated skin and neural model mimic tactile SAI afferent spiking response to ramp and hold stimuli

    Directory of Open Access Journals (Sweden)

    Kim Elmer K

    2012-07-01

    Full Text Available Background: The next generation of prosthetic limbs will restore sensory feedback to the nervous system by mimicking how skin mechanoreceptors, innervated by afferents, produce trains of action potentials in response to compressive stimuli. Prior work has addressed building sensors within skin substitutes for robotics, modeling skin mechanics and neural dynamics of mechanotransduction, and predicting response timing of action potentials for vibration. The effort here is unique because it accounts for skin elasticity by measuring force within simulated skin, utilizes few free model parameters for parsimony, and separates parameter fitting and model validation. Additionally, the ramp-and-hold, sustained stimuli used in this work capture the essential features of the everyday task of contacting and holding an object. Methods: This systems integration effort computationally replicates the neural firing behavior for a slowly adapting type I (SAI) afferent in its temporally varying response to both intensity and rate of indentation force by combining a physical force sensor, housed in a skin-like substrate, with a mathematical model of neuronal spiking, the leaky integrate-and-fire. Comparison experiments were then conducted using ramp-and-hold stimuli on both the spiking-sensor model and mouse SAI afferents. The model parameters were iteratively fit against recorded SAI interspike intervals (ISI) before validating the model to assess its performance. Results: Model-predicted spike firing compares favorably with that observed for single SAI afferents. As indentation magnitude increases (1.2, 1.3, to 1.4 mm), mean ISI decreases from 98.81 ± 24.73, 54.52 ± 6.94, to 41.11 ± 6.11 ms. Moreover, as rate of ramp-up increases, ISI during ramp-up decreases from 21.85 ± 5.33, 19.98 ± 3.10, to 15.42 ± 2.41 ms. Considering first spikes, the predicted latencies exhibited a decreasing trend as stimulus rate increased, as is
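
    The spiking stage of the pairing described above, a leaky integrate-and-fire neuron driven by a ramp-and-hold input, can be sketched as follows. The gain, time constant and threshold are illustrative free parameters of the kind fitted against recorded inter-spike intervals, not the published values.

```python
# Leaky integrate-and-fire driven by a ramp-and-hold stimulus (illustrative).
import numpy as np

def ramp_and_hold(t, ramp_end=0.2, amplitude=1.3):
    """Force (arbitrary units) that ramps up until ramp_end (s), then holds."""
    return np.where(t < ramp_end, amplitude * t / ramp_end, amplitude)

def lif_spike_times(drive, dt=1e-4, tau=0.03, gain=2.5, v_th=1.0):
    v, spikes = 0.0, []
    for n, force in enumerate(drive):
        v += dt / tau * (-v + gain * force)   # leaky integration of the sensed force
        if v >= v_th:
            spikes.append(n * dt)
            v = 0.0                           # reset after each spike
    return np.array(spikes)

t = np.arange(0.0, 1.0, 1e-4)
spike_times = lif_spike_times(ramp_and_hold(t))
isi_ms = np.diff(spike_times) * 1e3           # inter-spike intervals to compare with data
```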

  8. Ultradian rhythms in spike-wave activity in eegs of wag/rij rats

    NARCIS (Netherlands)

    Midzyanovskaya, I.S.; Strelkov, V.V.; Kuznetsova, G.D.; Luijtelaar, E.L.J.M. van; Coenen, A.M.L.; Chepurnov, S.A.

    2004-01-01

    Long-term EEG studies in WAG/Rij rats revealed the existence of ultradian rhythms in the daily pattern of appearance of spike-wave discharges (SWDs), the electrophysiological hallmark of absence epilepsy. A polymodal pattern of regulation was found for the index of spike-wave activity (iSWD, amount

  9. Sound Source Localization Through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network

    Directory of Open Access Journals (Sweden)

    Christoph Beck

    2016-10-01

    Full Text Available Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes on the order of the size of an atom and to locate acoustic stimuli with an accuracy of about 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system presented utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.

  10. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks.

    Science.gov (United States)

    Naveros, Francisco; Garrido, Jesus A; Carrillo, Richard R; Ros, Eduardo; Luque, Niceto R

    2017-01-01

    Modeling and simulating the neural structures which make up our central nervous system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under

  11. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks

    Science.gov (United States)

    Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.

    2017-01-01

    Modeling and simulating the neural structures which make up our central nervous system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under
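
    As a purely illustrative sketch of the bi-fixed-step idea mentioned above (assumed details, not the authors' implementation), the snippet below advances a simple neuron with a large fixed step while its membrane potential changes slowly and drops to a smaller fixed sub-step whenever the predicted change per step becomes large, which is where stiff models such as AdEx or HH lose accuracy. Step sizes and the switching criterion are arbitrary choices for demonstration.

      import numpy as np

      def dvdt_lif(v, i_ext, tau=10.0, r=10.0):
          return (-v + r * i_ext) / tau                 # stand-in for a stiffer neuron model

      def bi_fixed_step(i_ext, t_end, h_big=1.0, h_small=0.1, dv_fast=1.0):
          t, v, trace = 0.0, 0.0, []
          while t < t_end:
              dv = dvdt_lif(v, i_ext)
              h = h_small if abs(dv) * h_big > dv_fast else h_big   # choose the step size
              v += h * dv                                           # forward Euler update
              t += h
              trace.append((t, v, h))
          return trace

      trace = bi_fixed_step(i_ext=2.0, t_end=50.0)
      small_steps = sum(1 for _, _, h in trace if h < 1.0)
      print("steps taken: %d (of which %d small)" % (len(trace), small_steps))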

  12. A compound memristive synapse model for statistical learning through STDP in spiking neural networks

    Directory of Open Access Journals (Sweden)

    Johannes Bill

    2014-12-01

    Full Text Available Memristors have recently emerged as promising circuit elements to mimic the function of biological synapses in neuromorphic computing. The fabrication of reliable nanoscale memristive synapses, that feature continuous conductance changes based on the timing of pre- and postsynaptic spikes, has however turned out to be challenging. In this article, we propose an alternative approach, the compound memristive synapse, that circumvents this problem by the use of memristors with binary memristive states. A compound memristive synapse employs multiple bistable memristors in parallel to jointly form one synapse, thereby providing a spectrum of synaptic efficacies. We investigate the computational implications of synaptic plasticity in the compound synapse by integrating the recently observed phenomenon of stochastic filament formation into an abstract model of stochastic switching. Using this abstract model, we first show how standard pulsing schemes give rise to spike-timing dependent plasticity (STDP) with a stabilizing weight dependence in compound synapses. In a next step, we study unsupervised learning with compound synapses in networks of spiking neurons organized in a winner-take-all architecture. Our theoretical analysis reveals that compound-synapse STDP implements generalized Expectation-Maximization in the spiking network. Specifically, the emergent synapse configuration represents the most salient features of the input distribution in a Mixture-of-Gaussians generative model. Furthermore, the network’s spike response to spiking input streams approximates a well-defined Bayesian posterior distribution. We show in computer simulations how such networks learn to represent high-dimensional distributions over images of handwritten digits with high fidelity even in presence of substantial device variations and under severe noise conditions. Therefore, the compound memristive synapse may provide a synaptic design principle for future neuromorphic
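
    The compound-synapse idea above (several bistable memristors in parallel forming one graded synapse, updated by stochastic switching) can be captured in a few lines. The sketch below is an assumed toy version of that abstract switching model, not the published one: switching probabilities, device count and the pairing rules are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      class CompoundSynapse:
          """One synapse made of n_devices binary memristors switched stochastically."""
          def __init__(self, n_devices=20, p_set=0.1, p_reset=0.1):
              self.state = np.zeros(n_devices, dtype=bool)       # all devices start OFF
              self.p_set, self.p_reset = p_set, p_reset

          def potentiate(self):                                  # e.g. pre-before-post pairing
              off = ~self.state
              self.state[off] |= rng.random(off.sum()) < self.p_set

          def depress(self):                                     # e.g. post-before-pre pairing
              on = self.state
              self.state[on] &= ~(rng.random(on.sum()) < self.p_reset)

          @property
          def weight(self):                                      # graded efficacy from binary parts
              return self.state.mean()

      syn = CompoundSynapse()
      for _ in range(50):
          syn.potentiate()
      print("weight after 50 potentiating pulses:", syn.weight)

    In this toy version only the devices still in the OFF state can switch ON, so the expected weight change shrinks as the weight grows, which is one simple way a stabilizing weight dependence can arise.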

  13. The effects of sevoflurane and hyperventilation on electrocorticogram spike activity in patients with refractory epilepsy.

    Science.gov (United States)

    Kurita, Naoko; Kawaguchi, Masahiko; Hoshida, Tohru; Nakase, Hiroyuki; Sakaki, Toshisuke; Furuya, Hitoshi

    2005-08-01

    We investigated the effects of sevoflurane and hyperventilation on intraoperative electrocorticogram (ECoG) spike activity in 13 patients with intractable epilepsy. Grid electrodes were placed on the brain surface and ECoG was recorded under the following conditions: 1) 0.5 minimal alveolar anesthetic concentration (MAC) sevoflurane, 2) 1.5 MAC sevoflurane, and 3) 1.5 MAC sevoflurane with hyperventilation. The number of spikes per 5 min and the percentage of leads with spikes were assessed in each condition. In 4 patients with chronically implanted subdural electrodes, the leads with seizure onset and with spikes during the interictal periods in the awake state were compared with those during sevoflurane anesthesia at 0.5 MAC and 1.5 MAC. The number of spikes and the percentage of leads with spikes were significantly greater under 1.5 MAC sevoflurane anesthesia than under 0.5 MAC sevoflurane, and hyperventilation further significantly increased the number of spikes and the percentage of leads with spikes. These results indicate that sevoflurane concentration and hyperventilation can affect the frequency and extent of ECoG spike activity in patients with intractable epilepsy. Careful attention should be paid to the concentration of sevoflurane used and to ventilatory status when intraoperative ECoG is used to localize epileptic lesions. Electrocorticogram can be used to define the location and extent of epileptic foci during epilepsy surgery. However, electrocorticogram can be affected by anesthetic technique. The present study found that sevoflurane concentration and hyperventilation affected the frequency and the extent of electrocorticogram spike activity in epileptic patients.

  14. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    Science.gov (United States)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  15. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    Science.gov (United States)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  16. Spiking Neural Classifier with Lumped Dendritic Nonlinearity and Binary Synapses: A Current Mode VLSI Implementation and Analysis.

    Science.gov (United States)

    Bhaduri, Aritra; Banerjee, Amitava; Roy, Subhrajit; Kar, Sougata; Basu, Arindam

    2017-12-08

    We present a neuromorphic current mode implementation of a spiking neural classifier with lumped square law dendritic nonlinearity. It has been shown previously in software simulations that such a system with binary synapses can be trained with structural plasticity algorithms to achieve comparable classification accuracy with fewer synaptic resources than conventional algorithms. We show that even in real analog systems with manufacturing imperfections (CV of 23.5% and 14.4% for dendritic branch gains and leaks respectively), this network is able to produce comparable results with fewer synaptic resources. The chip fabricated in [Formula: see text]m complementary metal oxide semiconductor has eight dendrites per cell and uses two opposing cells per class to cancel common-mode inputs. The chip can operate down to a [Formula: see text] V and dissipates 19 nW of static power per neuronal cell and [Formula: see text] 125 pJ/spike. For two-class classification problems of high-dimensional rate encoded binary patterns, the hardware achieves performance comparable to a software implementation of the same network, with only about a 0.5% reduction in accuracy. On two UCI data sets, the integrated circuit has classification accuracy comparable to standard machine learners like support vector machines and extreme learning machines while using two to five times binary synapses. We also show that the system can operate on mean rate encoded spike patterns, as well as short bursts of spikes. To the best of our knowledge, this is the first attempt in hardware to perform classification exploiting dendritic properties and binary synapses.
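
    To make the architecture described above concrete, here is an assumed software sketch (not the fabricated circuit, and without the structural-plasticity training step): each cell sums binary synaptic inputs on each dendritic branch, applies a lumped square-law branch nonlinearity, and two opposing cells per class are subtracted so that common-mode input cancels; the connectivity here is random for illustration.

      import numpy as np

      def cell_response(x, connectivity):
          """x: binary input vector; connectivity: (branches, inputs) binary matrix."""
          branch_sums = (connectivity @ x).astype(float)   # linear sum on each dendrite
          return np.sum(branch_sums ** 2)                  # lumped square-law nonlinearity

      def classify(x, conn_plus, conn_minus):
          margin = cell_response(x, conn_plus) - cell_response(x, conn_minus)
          return 1 if margin > 0 else 0                    # opposing cells cancel common mode

      rng = np.random.default_rng(1)
      conn_plus = rng.integers(0, 2, size=(8, 16))         # 8 dendrites, 16 binary afferents
      conn_minus = rng.integers(0, 2, size=(8, 16))
      x = rng.integers(0, 2, size=16)
      print("predicted class:", classify(x, conn_plus, conn_minus))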

  17. A Modified Izhikevich Model For Circuit Implementation of Spiking Neural Networks

    OpenAIRE

    Ahmadi, Arash; Zwolinski, Mark

    2010-01-01

    The Izhikevich neuron model reproduces the spiking and bursting behaviour of certain types of cortical neurons. This model has a second order nonlinearity that makes it difficult to implement in hardware. We propose a simplified version of the model that has a piecewise-linear relationship. This modification simplifies the hardware implementation but demonstrates similar dynamic behaviour.
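
    The record above replaces the quadratic term of the Izhikevich model with a piecewise-linear one. The sketch below contrasts the standard update, dv/dt = 0.04v² + 5v + 140 − u + I, with one possible two-segment linear fit of that quadratic (secants through v = −70, −45 and 30 mV); this particular fit is an illustrative assumption, not necessarily the formulation used in the cited work.

      def izhikevich_step(v, u, i_ext, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
          dv = 0.04 * v * v + 5.0 * v + 140.0 - u + i_ext
          du = a * (b * v - u)
          v, u = v + dt * dv, u + dt * du
          if v >= 30.0:                       # spike: reset membrane, bump recovery variable
              v, u = c, u + d
          return v, u

      def piecewise_linear_step(v, u, i_ext, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
          # secant fit of 0.04*v**2 + 5*v + 140 over [-70, -45] and [-45, 30] mV
          f = 0.4 * v + 14.0 if v < -45.0 else 4.4 * v + 194.0
          dv = f - u + i_ext
          du = a * (b * v - u)
          v, u = v + dt * dv, u + dt * du
          if v >= 30.0:
              v, u = c, u + d
          return v, u

      # Example: drive both variants with the same constant current and compare end states.
      v1, u1 = -65.0, -13.0
      v2, u2 = -65.0, -13.0
      for _ in range(200):
          v1, u1 = izhikevich_step(v1, u1, 10.0)
          v2, u2 = piecewise_linear_step(v2, u2, 10.0)
      print(round(v1, 1), round(v2, 1))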

  18. Neural Code-Neural Self-information Theory on How Cell-Assembly Code Rises from Spike Time and Neuronal Variability.

    Science.gov (United States)

    Li, Meng; Tsien, Joe Z

    2017-01-01

    A major stumbling block to cracking the real-time neural code is neuronal variability - neurons discharge spikes with enormous variability not only across trials within the same experiments but also in resting states. Such variability is widely regarded as noise that is often deliberately averaged out during data analyses. In contrast to this dogma, we put forth the Neural Self-Information Theory that neural coding is operated based on the self-information principle under which variability in the time durations of inter-spike-intervals (ISI), or neuronal silence durations, is self-tagged with discrete information. As the self-information processor, each ISI carries a certain amount of information based on its variability-probability distribution; higher-probability ISIs which reflect the balanced excitation-inhibition ground state convey minimal information, whereas lower-probability ISIs which signify rare-occurrence surprisals in the form of extremely transient or prolonged silence carry the most information. These variable silence durations are naturally coupled with intracellular biochemical cascades, energy equilibrium and dynamic regulation of protein and gene expression levels. As such, this silence variability-based self-information code is completely intrinsic to the neurons themselves, with no need for outside observers to set any reference point as typically used in the rate code, population code and temporal code models. Moreover, temporally coordinated ISI surprisals across a cell population can inherently give rise to robust real-time cell-assembly codes which can be readily sensed by the downstream neural clique assemblies. One immediate utility of this self-information code is a general decoding strategy to uncover a variety of cell-assembly patterns underlying external and internal categorical or continuous variables in an unbiased manner.
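
    The self-information reading of ISIs described above can be made concrete with a small numerical sketch: each inter-spike interval is assigned the negative log2 of its probability under the cell's own ISI distribution, so frequent intervals carry little information while rare, very short or very long intervals register as surprisals. The gamma-distributed surrogate ISIs, bin width and surprisal cutoff below are illustrative assumptions, not quantities from the cited work.

      import numpy as np

      rng = np.random.default_rng(0)
      isis = rng.gamma(shape=2.0, scale=15.0, size=5000)       # surrogate ISIs in ms

      counts, edges = np.histogram(isis, bins=50)
      p = counts / counts.sum()                                # empirical ISI probabilities
      bin_idx = np.clip(np.digitize(isis, edges) - 1, 0, len(p) - 1)
      self_info = -np.log2(np.maximum(p[bin_idx], 1e-12))      # bits carried by each ISI

      cutoff = np.percentile(self_info, 95)                    # flag the rarest 5% as surprisals
      surprisals = isis[self_info > cutoff]
      print("mean self-information: %.2f bits" % self_info.mean())
      print("example surprisal ISIs (ms):", np.round(surprisals[:5], 1))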

  19. Magnetic Tunnel Junction Based Long-Term Short-Term Stochastic Synapse for a Spiking Neural Network with On-Chip STDP Learning.

    Science.gov (United States)

    Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik

    2016-07-13

    Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general-purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance-driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device-to-system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.

  20. Magnetic Tunnel Junction Based Long-Term Short-Term Stochastic Synapse for a Spiking Neural Network with On-Chip STDP Learning

    Science.gov (United States)

    Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik

    2016-07-01

    Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general-purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance-driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device-to-system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.

  1. Using strategic movement to calibrate a neural compass: a spiking network for tracking head direction in rats and robots.

    Directory of Open Access Journals (Sweden)

    Peter Stratton

    Full Text Available The head direction (HD) system in mammals contains neurons that fire to represent the direction the animal is facing in its environment. The ability of these cells to reliably track head direction even after the removal of external sensory cues implies that the HD system is calibrated to function effectively using just internal (proprioceptive and vestibular) inputs. Rat pups and other infant mammals display stereotypical warm-up movements prior to locomotion in novel environments, and similar warm-up movements are seen in adult mammals with certain brain lesion-induced motor impairments. In this study we propose that synaptic learning mechanisms, in conjunction with appropriate movement strategies based on warm-up movements, can calibrate the HD system so that it functions effectively even in darkness. To examine the link between physical embodiment and neural control, and to determine that the system is robust to real-world phenomena, we implemented the synaptic mechanisms in a spiking neural network and tested it on a mobile robot platform. Results show that the combination of the synaptic learning mechanisms and warm-up movements are able to reliably calibrate the HD system so that it accurately tracks real-world head direction, and that calibration breaks down in systematic ways if certain movements are omitted. This work confirms that targeted, embodied behaviour can be used to calibrate neural systems, demonstrates that 'grounding' of modelled biological processes in the real world can reveal underlying functional principles (supporting the importance of robotics to biology), and proposes a functional role for stereotypical behaviours seen in infant mammals and those animals with certain motor deficits. We conjecture that these calibration principles may extend to the calibration of other neural systems involved in motion tracking and the representation of space, such as grid cells in entorhinal cortex.

  2. Stable learning of functional maps in self-organizing spiking neural networks with continuous synaptic plasticity.

    Science.gov (United States)

    Srinivasa, Narayan; Jiang, Qin

    2013-01-01

    This study describes a spiking model that self-organizes for stable formation and maintenance of orientation and ocular dominance maps in the visual cortex (V1). This self-organization process simulates three development phases: an early experience-independent phase, a late experience-independent phase and a subsequent refinement phase during which experience acts to shape the map properties. The ocular dominance maps that emerge accommodate the two sets of monocular inputs that arise from the lateral geniculate nucleus (LGN) to layer 4 of V1. The orientation selectivity maps that emerge feature well-developed iso-orientation domains and fractures. During the last two phases of development the orientation preferences at some locations appear to rotate continuously through ±180° along circular paths and are referred to as pinwheel-like patterns, but without any corresponding point discontinuities in the orientation gradient maps. The formation of these functional maps is driven by balanced excitatory and inhibitory currents that are established via synaptic plasticity based on spike timing for both excitatory and inhibitory synapses. The stability and maintenance of the formed maps with continuous synaptic plasticity is enabled by homeostasis caused by inhibitory plasticity. However, a prolonged exposure to repeated stimuli does alter the formed maps over time due to plasticity. The results from this study suggest that continuous synaptic plasticity in both excitatory neurons and interneurons could play a critical role in the formation, stability, and maintenance of functional maps in the cortex.

  3. Fractal character of the neural spike train in the visual system of the cat.

    Science.gov (United States)

    Teich, M C; Heneghan, C; Lowen, S B; Ozaki, T; Kaplan, E

    1997-03-01

    We used a variety of statistical measures to identify the point process that describes the maintained discharge of retinal ganglion cells (RGC's) and neurons in the lateral geniculate nucleus (LGN) of the cat. These measures are based on both interevent intervals and event counts and include the interevent-interval histogram, rescaled range analysis, the event-number histogram, the Fano factor, the Allan factor, and the periodogram. In addition, we applied these measures to surrogate versions of the data, generated by random shuffling of the order of interevent intervals. The counting statistics reveal 1/f-type fluctuations in the data (long-duration power-law correlation), which are not present in the shuffled data. Estimates of the fractal exponents measured for RGC- and their target LGN-spike trains are similar in value, indicating that the fractal behavior either is transmitted from one cell to the other or has a common origin. The gamma-r renewal process model, often used in the analysis of visual-neuron interevent intervals, describes certain short-term features of the RGC and LGN data reasonably well but fails to account for the long-duration correlation. We present a new model for visual-system nerve-spike firings: a gamma-r renewal process whose mean is modulated by fractal binomial noise. This fractal, doubly stochastic point process characterizes the statistical behavior of both RGC and LGN data sets remarkably well.
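
    One of the count-based measures named above, the Fano factor F(T) = Var[N_T] / E[N_T], is easy to compute directly: for a Poisson process it stays near 1 at all counting-window sizes T, whereas power-law growth of F(T) with T is the signature of fractal, long-range-correlated firing, and ISI-shuffled surrogates provide the comparison. The sketch below uses gamma-renewal surrogate ISIs, so the original and shuffled curves should roughly coincide; real fractal data would separate them. All parameter values are illustrative.

      import numpy as np

      def fano_factor(spike_times, window):
          edges = np.arange(0.0, spike_times[-1], window)      # counting windows of length T
          counts, _ = np.histogram(spike_times, bins=edges)
          return counts.var() / counts.mean()

      rng = np.random.default_rng(0)
      isis = rng.gamma(shape=0.5, scale=20.0, size=20000)      # surrogate ISIs (ms)
      spikes = np.cumsum(isis)
      shuffled = np.cumsum(rng.permutation(isis))              # order-shuffled surrogate

      for T in (100.0, 1000.0, 10000.0):
          print("T=%7.0f ms  F=%.2f  F_shuffled=%.2f"
                % (T, fano_factor(spikes, T), fano_factor(shuffled, T)))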

  4. Streaming Parallel GPU Acceleration of Large-Scale filter-based Spiking Neural Networks

    NARCIS (Netherlands)

    L.P. Slazynski (Leszek); S.M. Bohte (Sander)

    2012-01-01

    The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of

  5. Decorrelation of Neural-Network Activity by Inhibitory Feedback

    Science.gov (United States)

    Einevoll, Gaute T.; Diesmann, Markus

    2012-01-01

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between

  6. A decision-making model based on a spiking neural circuit and synaptic plasticity.

    Science.gov (United States)

    Wei, Hui; Bu, Yijie; Dai, Dawei

    2017-10-01

    To adapt to the environment and survive, most animals can control their behaviors by making decisions. The process of decision-making and responding according to cues in the environment is stable, sustainable, and learnable. Understanding how behaviors are regulated by neural circuits and the encoding and decoding mechanisms from stimuli to responses are important goals in neuroscience. From results observed in Drosophila experiments, the underlying decision-making process is discussed, and a neural circuit that implements a two-choice decision-making model is proposed to explain and reproduce the observations. Compared with previous two-choice decision making models, our model uses synaptic plasticity to explain changes in decision output given the same environment. Moreover, biological meanings of parameters of our decision-making model are discussed. In this paper, we explain at the micro-level (i.e., neurons and synapses) how observable decision-making behavior at the macro-level is acquired and achieved.

  7. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for evaluation of spike-sorting algorithms.

    Science.gov (United States)

    Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T

    2015-04-30

    New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  8. A Real-Time Spike Classification Method Based on Dynamic Time Warping for Extracellular Enteric Neural Recording with Large Waveform Variability

    Science.gov (United States)

    Cao, Yingqiu; Rakhilin, Nikolai; Gordon, Philip H.; Shen, Xiling; Kan, Edwin C.

    2015-01-01

    Background Computationally efficient spike recognition methods are required for real-time analysis of extracellular neural recordings. The enteric nervous system (ENS) is important to human health but less well-understood, with few appropriate spike recognition algorithms due to large waveform variability. New Method Here we present a method based on dynamic time warping (DTW) with high tolerance to variability in time and magnitude. Adaptive temporal gridding for fastDTW in similarity calculation significantly reduces the computational cost. The automated threshold selection allows for real-time classification for extracellular recordings. Results Our method is first evaluated on synthesized data at different noise levels, improving both classification accuracy and computational complexity over the conventional cross-correlation based template-matching method (CCTM) and PCA + k-means clustering without time warping. Our method is then applied to analyze the mouse enteric neural recording with mechanical and chemical stimuli. Successful classification of biphasic and monophasic spikes is achieved even when the spike variability is larger than a millisecond in width and a millivolt in magnitude. Comparison with Existing Method(s) In comparison with conventional template matching and clustering methods, the fastDTW method is computationally efficient with high tolerance to waveform variability. Conclusions We have developed an adaptive fastDTW algorithm for real-time spike classification of ENS recording with large waveform variability against colonic motility, ambient changes and cellular heterogeneity. PMID:26719239
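
    As a hedged sketch of the core idea above (plain dynamic time warping for template matching; the published method adds adaptive temporal gridding, fastDTW and automated threshold selection, which are omitted here), the snippet below assigns each detected waveform to the template with the smallest DTW distance. Templates and the test waveform are synthetic and illustrative.

      import numpy as np

      def dtw_distance(a, b):
          """Classic O(len(a)*len(b)) dynamic time warping with absolute-difference cost."""
          n, m = len(a), len(b)
          d = np.full((n + 1, m + 1), np.inf)
          d[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
          return d[n, m]

      def classify_spike(waveform, templates):
          dists = [dtw_distance(waveform, t) for t in templates]
          return int(np.argmin(dists)), float(min(dists))

      t = np.linspace(0.0, 1.0, 40)
      templates = [np.sin(2 * np.pi * t) * np.exp(-4 * t),     # biphasic-like template
                   -np.exp(-((t - 0.3) ** 2) / 0.01)]          # monophasic-like template
      rng = np.random.default_rng(0)
      waveform = np.roll(templates[0], 3) + 0.05 * rng.normal(size=40)
      print("assigned template and distance:", classify_spike(waveform, templates))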

  9. Automatic spike detection via an artificial neural network using raw EEG data: effects of data preparation and implications in the limitations of online recognition.

    Science.gov (United States)

    Ko, C W; Chung, H W

    2000-03-01

    Automatic detection of epileptic EEG spikes via an artificial neural network has been reported to be feasible using raw EEG data as input. This study re-investigated its suitability by further exploring the effects of data preparation on classification performance testing. Six hundred EEG files (300 spikes and 300 non-spikes) taken from 20 patients were included in this study. Raw EEG data were sent to the neural network using the architecture reported to give best performance (30 input-layer and 6 hidden-layer neurons). Significantly larger weighting of the 10th input-layer neuron was found after training with prepared raw EEG data. The classification process was thus dominated by the peak location. Subsequent analysis showed that online spike detection with an erroneously trained network yielded an area less than 0.5 under the receiver-operating-characteristic curve, and hence performed inferiorly to random assignments. Networks trained and tested using the same unprepared EEG data achieved no better than about 87% true classification rate at equal sensitivity and specificity. The high true classification rate reported previously is believed to be an artifact arising from erroneous data preparation and off-line validation. Spike detection using raw EEG data as input is unlikely to be feasible under current computer technology.

  10. Decision making under uncertainty in a spiking neural network model of the basal ganglia.

    Science.gov (United States)

    Héricé, Charlotte; Khalil, Radwa; Moftah, Marie; Boraud, Thomas; Guthrie, Martin; Garenne, André

    2016-12-01

    The mechanisms of decision-making and action selection are generally thought to be under the control of parallel cortico-subcortical loops connecting back to distinct areas of cortex through the basal ganglia and processing motor, cognitive and limbic modalities of decision-making. We have used these properties to develop and extend a connectionist model at a spiking neuron level based on a previous rate model approach. This model is demonstrated on decision-making tasks that have been studied in primates and the electrophysiology interpreted to show that the decision is made in two steps. To model this, we have used two parallel loops, each of which performs decision-making based on interactions between positive and negative feedback pathways. This model is able to perform two-level decision-making as in primates. We show here that, before learning, synaptic noise is sufficient to drive the decision-making process and that, after learning, the decision is based on the choice that has proven most likely to be rewarded. The model is then submitted to lesion tests, reversal learning and extinction protocols. We show that, under these conditions, it behaves in a consistent manner and provides predictions in accordance with observed experimental data.

  11. The dynamic brain: from spiking neurons to neural masses and cortical fields.

    Directory of Open Access Journals (Sweden)

    Gustavo Deco

    2008-08-01

    Full Text Available The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain; the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the

  12. A two-stage neural spiking model of visual contrast detection in perimetry

    Science.gov (United States)

    Gardiner, SK; Swanson, WH; Demirel, S; McKendrick, AM; Turpin, A; Johnson, CA

    2008-01-01

    Perimetry is a commonly used clinical test for visual function, limited by high variability. The sources of this variability need to be better understood. In this paper, we investigate whether noise intrinsic to neural firing could explain the variability in normal subjects. We present the most physiologically accurate model to date for stimulus detection in perimetry combining knowledge of the physiology of components of the visual system with signal detection theory, and show that it requires that detection be mediated by multiple cortical cells in order to give predictions consistent with psychometric functions measured in human observers. PMID:18602414
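
    The claim above, that detection must be mediated by multiple cortical cells to match human psychometric functions, can be illustrated with a generic signal-detection calculation (this is not the published two-stage spiking model): if each unit independently crosses its noisy criterion with probability p(c), pooling n units under probability summation gives 1 − (1 − p(c))^n, so detection probability at a fixed contrast rises with the number of pooled units. Threshold and noise values are arbitrary.

      from math import erf, sqrt

      def p_detect_single(contrast, threshold=1.0, noise_sd=0.5):
          """Probability that one unit's Gaussian-noise response exceeds its criterion."""
          return 0.5 * (1.0 + erf((contrast - threshold) / (noise_sd * sqrt(2.0))))

      def p_detect_pooled(contrast, n_units):
          p = p_detect_single(contrast)
          return 1.0 - (1.0 - p) ** n_units               # detect if any pooled unit detects

      for c in (0.6, 0.8, 1.0, 1.2):
          print("contrast %.1f: 1 unit %.2f, 16 units %.2f"
                % (c, p_detect_pooled(c, 1), p_detect_pooled(c, 16)))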

  13. Advancing interconnect density for spiking neural network hardware implementations using traffic-aware adaptive network-on-chip routers.

    Science.gov (United States)

    Carrillo, Snaider; Harkin, Jim; McDaid, Liam; Pande, Sandeep; Cawley, Seamus; McGinley, Brian; Morgan, Fearghal

    2012-09-01

    The brain is highly efficient in how it processes information and tolerates faults. Arguably, the basic processing units are neurons and synapses that are interconnected in a complex pattern. Computer scientists and engineers aim to harness this efficiency and build artificial neural systems that can emulate the key information processing principles of the brain. However, existing approaches cannot provide the dense interconnect for the billions of neurons and synapses that are required. Recently a reconfigurable and biologically inspired paradigm based on network-on-chip (NoC) and spiking neural networks (SNNs) has been proposed as a new method of realising an efficient, robust computing platform. However, the use of the NoC as an interconnection fabric for large-scale SNNs demands a good trade-off between scalability, throughput, neuron/synapse ratio and power consumption. This paper presents a novel traffic-aware, adaptive NoC router, which forms part of a proposed embedded mixed-signal SNN architecture called EMBRACE (EMulating Biologically-inspiRed ArChitectures in hardwarE). The proposed adaptive NoC router provides the inter-neuron connectivity for EMBRACE, maintaining router communication and avoiding dropped router packets by adapting to router traffic congestion. Results are presented on throughput, power and area performance analysis of the adaptive router using a 90 nm CMOS technology which outperforms existing NoCs in this domain. The adaptive behaviour of the router is also verified on a Stratix II FPGA implementation of a 4 × 2 router array with real-time traffic congestion. The presented results demonstrate the feasibility of using the proposed adaptive NoC router within the EMBRACE architecture to realise large-scale SNNs on embedded hardware. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Remifentanil-induced spike activity as a diagnostic tool in epilepsy surgery

    DEFF Research Database (Denmark)

    Gronlykke, L.; Knudsen, M.L.; Hogenhaven, H.

    2008-01-01

    OBJECTIVES: To assess the value of remifentanil in intraoperative evaluation of spike activity in patients undergoing surgery for mesial temporal lobe epilepsy (MTLE). MATERIALS AND METHODS: Twenty-five patients undergoing temporal lobectomy for medically intractable MTLE were enrolled in the study...

  15. fMRI activation during spike and wave discharges evoked by photic stimulation

    DEFF Research Database (Denmark)

    Moeller, Friederike; Siebner, Hartwig R; Ahlgrimm, Nils

    2009-01-01

    While studies using simultaneous EEG and functional MRI (EEG-fMRI) in patients with spontaneous generalised spike-wave discharges (GSW) have revealed activation of the thalamus and deactivation in frontoparietal areas, EEG-fMRI studies on evoked GSW such as the photoparoxysmal response (PPR) are lacking. In this EEG-fMRI study, 30 subjects with reported generalised PPR underwent

  16. Changes in complex spike activity during classical conditioning

    OpenAIRE

    Rasmussen, Anders; Jirenhed, Dan-Anders; Wetmore, Daniel Z.; Hesslow, Germund

    2014-01-01

    The cerebellar cortex is necessary for adaptively timed conditioned responses (CRs) in eyeblink conditioning. During conditioning, Purkinje cells acquire pause responses or "Purkinje cell CRs" to the conditioned stimuli (CS), resulting in disinhibition of the cerebellar nuclei (CN), allowing them to activate motor nuclei that control eyeblinks. This disinhibition also causes inhibition of the inferior olive (IO), via the nucleo-olivary pathway (N-O). Activation of the IO, which rela...

  17. Real-time radionuclide identification in γ-emitter mixtures based on spiking neural network.

    Science.gov (United States)

    Bobin, C; Bichler, O; Lourenço, V; Thiam, C; Thévenin, M

    2016-03-01

    Portal radiation monitors dedicated to the prevention of illegal traffic of nuclear materials at international borders need to deliver as fast as possible a radionuclide identification of a potential radiological threat. Spectrometry techniques applied to identify the radionuclides contributing to γ-emitter mixtures are usually performed using off-line spectrum analysis. As an alternative to these usual methods, a real-time processing based on an artificial neural network and Bayes' rule is proposed for fast radionuclide identification. The validation of this real-time approach was carried out using γ-emitter spectra ((241)Am, (133)Ba, (207)Bi, (60)Co, (137)Cs) obtained with a high-efficiency well-type NaI(Tl) detector. The first tests showed that the proposed algorithm enables a fast identification of each γ-emitting radionuclide using the information given by the whole spectrum. Based on an iterative process, the on-line analysis only needs low-statistics spectra without energy calibration to identify the nature of a radiological threat. Copyright © 2015 Elsevier Ltd. All rights reserved.
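
    As a hedged sketch of the iterative Bayesian part of the scheme above (the spiking-neural-network front end is omitted), the snippet below updates a posterior over candidate radionuclides one detected photon at a time, using assumed per-nuclide channel probabilities in place of measured response spectra; the nuclide names, channel count and templates are illustrative only.

      import numpy as np

      rng = np.random.default_rng(0)
      n_channels = 64
      nuclides = ["Am-241", "Cs-137", "Co-60"]

      # Assumed probability of a count landing in each energy channel, per nuclide.
      templates = np.abs(rng.normal(size=(len(nuclides), n_channels))) + 0.1
      templates /= templates.sum(axis=1, keepdims=True)

      def update_posterior(prior, channel):
          post = prior * templates[:, channel]             # Bayes' rule, event by event
          return post / post.sum()

      true_nuclide = 1                                     # pretend the source is Cs-137
      events = rng.choice(n_channels, size=200, p=templates[true_nuclide])

      posterior = np.full(len(nuclides), 1.0 / len(nuclides))
      for ch in events:
          posterior = update_posterior(posterior, ch)
      print(dict(zip(nuclides, np.round(posterior, 3))))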

  18. Activity-Dependent Plasticity of Spike Pauses in Cerebellar Purkinje Cells

    Directory of Open Access Journals (Sweden)

    Giorgio Grasselli

    2016-03-01

    Full Text Available The plasticity of intrinsic excitability has been described in several types of neurons, but the significance of non-synaptic mechanisms in brain plasticity and learning remains elusive. Cerebellar Purkinje cells are inhibitory neurons that spontaneously fire action potentials at high frequencies and regulate activity in their target cells in the cerebellar nuclei by generating a characteristic spike burst-pause sequence upon synaptic activation. Using patch-clamp recordings from mouse Purkinje cells, we find that depolarization-triggered intrinsic plasticity enhances spike firing and shortens the duration of spike pauses. Pause plasticity is absent from mice lacking SK2-type potassium channels (SK2−/− mice) and in occlusion experiments using the SK channel blocker apamin, while apamin wash-in mimics pause reduction. Our findings demonstrate that spike pauses can be regulated through an activity-dependent, exclusively non-synaptic, SK2 channel-dependent mechanism and suggest that pause plasticity—by altering the Purkinje cell output—may be crucial to cerebellar information storage and learning.

  19. Surround modulation characteristics of local field potential and spiking activity in primary visual cortex of cat.

    Science.gov (United States)

    Zhang, Li; Li, Bing

    2013-01-01

    In primary visual cortex, spiking activity evoked by a stimulus confined to the receptive field can be modulated by a surround stimulus. This center-surround interaction is hypothesized to be the basis of visual feature integration and segregation. Spiking output has been extensively reported to be surround suppressive. However, less is known about the modulation properties of the local field potential (LFP), which generally reflects synaptic inputs. We simultaneously recorded spiking activity and LFP in area 17 of anesthetized cats to examine and compare their modulation characteristics. When the stimulus went beyond the classical receptive field, the LFP exhibited decreased power in the gamma band (30-100 Hz) at most of our recording sites. Further investigation revealed that suppression of the LFP gamma mean power (gLFP) depended on the angle between the center and surround orientations. The strongest suppression was induced when center and surround orientations were parallel. Moreover, the surround influence on the gLFP exhibited an asymmetric spatial organization. These results demonstrate that the gLFP has similar but not identical surround modulation properties compared to the spiking activity. The spatiotemporal integration of the LFP implies that the oscillation and synchronization of local synaptic inputs may have important functions in surround modulation.

  20. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    Science.gov (United States)

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with the inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.
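
    A minimal Monte Carlo sketch of the procedure described above: surrogate pairs of independent spike trains are drawn from a non-Poisson (here gamma-renewal) ISI model, joint spike events within a coincidence window are counted, and the resulting distribution serves as the null for judging observed coincidences. Full spike-history dependence (the non-renewal part) is omitted, and the rates, shape and window are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def gamma_spike_train(rate_hz, duration_s, shape=4.0):
          mean_isi = 1.0 / rate_hz
          isis = rng.gamma(shape, mean_isi / shape, size=int(3 * rate_hz * duration_s))
          t = np.cumsum(isis)
          return t[t < duration_s]

      def count_coincidences(train_a, train_b, window_s=0.005):
          """Count spikes in train_a that have a train_b spike within +/- window_s."""
          idx = np.searchsorted(train_b, train_a)
          left = np.clip(idx - 1, 0, len(train_b) - 1)
          right = np.clip(idx, 0, len(train_b) - 1)
          nearest = np.minimum(np.abs(train_a - train_b[left]),
                               np.abs(train_a - train_b[right]))
          return int(np.sum(nearest <= window_s))

      null = [count_coincidences(gamma_spike_train(20, 10), gamma_spike_train(20, 10))
              for _ in range(1000)]
      print("null coincidences: mean %.1f, 99th percentile %.0f"
            % (np.mean(null), np.percentile(null, 99)))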

  1. Dopamine-signalled reward predictions generated by competitive excitation and inhibition in a spiking neural network model

    Directory of Open Access Journals (Sweden)

    Paul Chorley

    2011-05-01

    Full Text Available Dopaminergic neurons in the mammalian substantia nigra display characteristic phasic responses to stimuli which reliably predict the receipt of primary rewards. These responses have been suggested to encode reward prediction-errors similar to those used in reinforcement learning. Here, we propose a model of dopaminergic activity in which prediction error signals are generated by the joint action of short-latency excitation and long-latency inhibition, in a network undergoing dopaminergic neuromodulation of both spike-timing dependent synaptic plasticity and neuronal excitability. In contrast to previous models, sensitivity to recent events is maintained by the selective modification of specific striatal synapses, efferent to cortical neurons exhibiting stimulus-specific, temporally extended activity patterns. Our model shows, in the presence of significant background activity, (i) a shift in dopaminergic response from reward to reward-predicting stimuli, (ii) preservation of a response to unexpected rewards, and (iii) a precisely-timed below-baseline dip in activity observed when expected rewards are omitted.

  2. Distinct temporal spike and local field potential activities in the thalamic parafascicular nucleus of parkinsonian rats during rest and limb movement.

    Science.gov (United States)

    Wang, Min; Qu, Qingyang; He, Tingting; Li, Min; Song, Zhimin; Chen, Feiyu; Zhang, Xiao; Xie, Jinlu; Geng, Xiwen; Yang, Maoquan; Wang, Xiusong; Lei, Chengdong; Hou, Yabing

    2016-08-25

    Several studies have suggested that the thalamic centromedian-parafascicular complex (CM/PF, or the PF in rodents) is implicated in the pathophysiology of Parkinson's disease (PD). However, inconsistent changes in the neuronal firing rate and pattern have been reported in parkinsonian animals. To investigate the impact of a dopaminergic cell lesion on PF extracellular discharge in behaving rats, the PF neural activities in the spike and local field potential (LFP) were recorded in unilaterally 6-hydroxydopamine- (6-OHDA) lesioned and neurologically intact control rats during rest and limb movement. During rest, the two PF neuronal subtypes were less spontaneously active, with no difference in the spike firing rates between the control and lesioned rats; only the lesioned rats reshaped their spike firing pattern. Furthermore, the simultaneously recorded LFP in the lesioned rats exhibited a significant increase in power at 12-35 and 35-70 Hz and a decrease in power at 0.7-12 Hz. During the execution of a voluntary movement, two subtypes of PF neurons were identified by a rapid increase in the discharge activity in both the control and lesioned rats. However, dopamine lesioning was associated with a decrease in neuronal spike firing rate and a reshaping of the firing pattern in the PF. The simultaneously recorded LFP activity exhibited a significant increase in power at 12-35 Hz and a decrease in power at 0.7-12 Hz compared with the control rats. These findings indicate that 6-OHDA induces modifications in PF spike and LFP activities in rats during rest and movement and suggest that PF dysfunction may be an important contributor to the pathophysiology of parkinsonian motor impairment. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  3. Fast-spiking interneurons of the rat ventral striatum: temporal coordination of activity with principal cells and responsiveness to reward.

    Science.gov (United States)

    Lansink, Carien S; Goltstein, Pieter M; Lankelma, Jan V; Pennartz, Cyriel M A

    2010-08-01

    Although previous in vitro studies revealed inhibitory synaptic connections of fast-spiking interneurons to principal cells in the striatum, uncertainty remains about the nature of the behavioural events that correlate with changes in interneuron activity and about the temporal coordination of interneuron firing with spiking of principal cells under natural conditions. Using in vivo tetrode recordings from the ventral striatum in freely moving rats, fast-spiking neurons were distinguished from putative medium-sized spiny neurons on the basis of their spike waveforms and rates. Cross-correlograms of fast-spiking and putative medium-sized spiny neuron firing patterns revealed a variety of temporal relationships, including peaks of concurrent firing and transient decrements in medium-sized spiny neuron spiking around fast-spiking unit activity. Notably, the onset of these decrements was mostly in advance of the fast-spiking unit firing. Many of these temporal relationships were dependent on the sleep-wake state. Coordinated activity was also found amongst pairs of the same phenotype, both fast-spiking units and putative medium-sized spiny neurons, which was often marked by a broad peak of concurrent firing. When studying fast-spiking neurons in a reward-searching task, they generally showed a pre-reward ramping increment in firing rate but a decrement specifically when the rat received reward. In conclusion, our data indicate that various forms of temporally coordinated activity exist amongst ventral striatal interneurons and principal cells, which cannot be explained by feed-forward inhibitory circuits alone. Furthermore, firing patterns of ventral striatal fast-spiking interneurons do not merely correlate with the general arousal state of the animal but display distinct reward-related changes in firing rate.

  4. Myelin plasticity, neural activity, and traumatic neural injury.

    Science.gov (United States)

    Kondiles, Bethany R; Horner, Philip J

    2018-02-01

    The possibility that adult organisms exhibit myelin plasticity has recently become a topic of great interest. Many researchers are exploring the role of myelin growth and adaptation in daily functions such as memory and motor learning. Here we consider evidence for three different potential categories of myelin plasticity: the myelination of previously bare axons, remodeling of existing sheaths, and the removal of a sheath with replacement by a new internode. We also review evidence that points to the importance of neural activity as a mechanism by which oligodendrocyte precursor cells (OPCs) are cued to differentiate into myelinating oligodendrocytes, which may potentially be an important component of myelin plasticity. Finally, we discuss demyelination in the context of traumatic neural injury and present an argument for altering neural activity as a potential therapeutic target for remyelination following injury. © 2017 Wiley Periodicals, Inc. Develop Neurobiol 78: 108-122, 2018. © 2017 Wiley Periodicals, Inc.

  5. Can Neural Activity Propagate by Endogenous Electrical Field?

    Science.gov (United States)

    Qiu, Chen; Shivacharan, Rajat S.; Zhang, Mingming

    2015-01-01

    It is widely accepted that synaptic transmissions and gap junctions are the major governing mechanisms for signal traveling in the neural system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electrical field effect. We tested, with in silico and in vitro experiments, the hypothesis that endogenous electric fields are sufficient to explain the propagation. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2–6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mouse hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5–5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds. SIGNIFICANCE STATEMENT Neural activity (waves or spikes) can propagate using well documented mechanisms such as synaptic transmission, gap junctions, or diffusion. However, the purpose of this paper is to provide an explanation for experimental data showing that neural signals can propagate by means other than synaptic

  6. A Multi-Channel Low-Power System-on-Chip for in Vivo Recording and Wireless Transmission of Neural Spikes

    Directory of Open Access Journals (Sweden)

    Alessandro Sottocornola Spinelli

    2012-09-01

    Full Text Available This paper reports a multi-channel neural spike recording system-on-chip with digital data compression and wireless telemetry. The circuit consists of 16 amplifiers, an analog time-division multiplexer, a single 8 bit analog-to-digital converter, a digital signal compression unit and a wireless transmitter. Although only 16 amplifiers are integrated in our current die version, the whole system is designed to work with 64, demonstrating the feasibility of digital processing and narrowband wireless transmission of 64 neural recording channels. Compression of the raw data is achieved by detecting the action potentials (APs) and storing 20 samples for each spike waveform. This compression method retains sufficiently high data quality to allow for single neuron identification (spike sorting). The 400 MHz transmitter employs a Manchester-Coded Frequency Shift Keying (MC-FSK) modulator with low modulation index. In this way, a 1.25 Mbit/s data rate is delivered within a limited band of about 3 MHz. The chip is realized in a 0.35 µm AMS CMOS process featuring a 3 V power supply, with an area of 3.1 × 2.7 mm². The achieved transmission range is over 10 m with an overall power consumption for 64 channels of 17.2 mW. This figure translates into a power budget of 269 µW per channel, in line with published results but allowing a larger transmission distance and more efficient bandwidth occupation of the wireless link. The integrated circuit was mounted on a small and light board to be used during neuroscience experiments with freely-behaving rats. Powered by 2 AAA batteries, the system can continuously work for more than 100 hours, allowing for long-lasting neural spike recordings.
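
    The compression idea described above (detect an action potential, then keep only a short snippet of samples around it) can be illustrated in software. The sketch below is not the chip's firmware; the threshold, snippet length and function name are illustrative assumptions.

```python
import numpy as np

def compress_channel(signal, threshold, n_samples=20, pre=5):
    """Keep only a short waveform snippet around each threshold crossing,
    mimicking the 'detect an AP, store 20 samples' compression idea."""
    times, snippets = [], []
    i = pre
    while i < len(signal) - (n_samples - pre):
        if signal[i] > threshold:                    # upward threshold crossing
            times.append(i)
            snippets.append(signal[i - pre:i - pre + n_samples].copy())
            i += n_samples                           # skip past this waveform
        else:
            i += 1
    return np.array(times), np.array(snippets)

# toy trace: Gaussian noise with two synthetic spikes added
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 2000)
trace[500:505] += [2, 8, 12, 6, 2]
trace[1400:1405] += [3, 9, 11, 5, 1]
t, w = compress_channel(trace, threshold=5.0)
print(t, w.shape)          # the snippet matrix is far smaller than the raw trace
```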

  7. Influence of epileptic activity during sleep on cognitive performance in benign childhood epilepsy with centrotemporal spikes.

    Science.gov (United States)

    Nissenkorn, Andreea; Pappo, Adi; Feldmann, Yael; Heimer, Gali; Bar-Yosef, Omer; Tzadok, Michal; Polack, Orli; Bord, Ayelet; Levav, Miriam; Ben-Zeev, Bruria

    2017-11-01

    Benign childhood epilepsy with centrotemporal spikes is a benign childhood epilepsy syndrome, presenting between 4 and 10 years of age and characterized by typical clinical and EEG findings. Despite an excellent prognosis, there are reports of mild cognitive, language, fine motor and behavioral difficulties. In its atypical form, electrical status epilepticus during slow wave sleep, continuous epileptiform activity during sleep leads to severe neurocognitive deterioration. Our objective was to investigate the influence of abundant sleep epileptiform activity, not fulfilling the criteria for electrical status epilepticus during slow wave sleep, discovered incidentally in children without overt intellectual impairment. We retrospectively reviewed the charts and EEGs of 34 children with benign childhood epilepsy with centrotemporal spikes who underwent neurocognitive evaluation. The neurocognitive battery included items in the following domains: attention span, memory, language, fine motor and behavior. Patients were divided into two groups according to the spike-wave index on sleep EEG, with a cut-off point of 50%, and the groups were compared with regard to neurocognitive performance. Children with epileptiform activity of more than 50% were diagnosed at a significantly younger age (5.13 ± 1.94 years vs. 7.17 ± 2.45, p = 0.014, t-test), had less well-controlled seizures and received more antiepileptic drugs. However, there was no difference in neurocognitive performance, except in fine motor tasks (Pegboard), where children with more abundant activity scored lower (-0.79 ± 0.96 vs. 0.20 ± 1.05, p = 0.011, t-test). Our study did not show a negative cognitive effect of abundant epileptiform activity discovered incidentally in children with benign childhood epilepsy with centrotemporal spikes that would warrant aggressive treatment. Copyright © 2017 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  8. Deep Spiking Networks

    NARCIS (Netherlands)

    O'Connor, P.; Welling, M.

    2016-01-01

    We introduce an algorithm to do backpropagation on a spiking network. Our network is "spiking" in the sense that our neurons accumulate their activation into a potential over time, and only send out a signal (a "spike") when this potential crosses a threshold and the neuron is reset. Neurons only

  9. Cellular Origin of Spontaneous Ganglion Cell Spike Activity in Animal Models of Retinitis Pigmentosa

    Directory of Open Access Journals (Sweden)

    David J. Margolis

    2011-01-01

    Full Text Available Here we review evidence that loss of photoreceptors due to degenerative retinal disease causes an increase in the rate of spontaneous ganglion spike discharge. Information about persistent spike activity is important since it is expected to add noise to the communication between the eye and the brain and thus impact the design and effective use of retinal prosthetics for restoring visual function in patients blinded by disease. Patch-clamp recordings from identified types of ON and OFF retinal ganglion cells in the adult (36–210 d old) rd1 mouse show that the ongoing oscillatory spike activity in both cell types is driven by strong rhythmic synaptic input from presynaptic neurons that is blocked by CNQX. The recurrent synaptic activity may arise in a negative feedback loop between a bipolar cell and an amacrine cell that exhibits resonant behavior and oscillations in membrane potential when the normal balance between excitation and inhibition is disrupted by the absence of photoreceptor input.

  10. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks.

    Science.gov (United States)

    Zhang, Xu; Foderaro, Greg; Henriquez, Craig; Ferrari, Silvia

    2016-12-22

    Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly, in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths only change as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.
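
    As a rough illustration of the weight-free idea, the sketch below treats the network and its plasticity as a black box and adjusts only a stimulation schedule (which neuron is driven to fire, and when) by greedy random search. The evaluate callback, the perturbation scheme and all parameter values are hypothetical placeholders, not the authors' algorithm.

```python
import numpy as np

def optimize_stimulation(evaluate, n_neurons, horizon_ms, n_pulses,
                         n_iters=200, sigma=5.0, seed=0):
    """Greedy random search over a stimulation schedule (neuron index and time
    of each delivered pulse). Synaptic weights are never touched; 'evaluate'
    is a black box scoring the network's behaviour under that schedule."""
    rng = np.random.default_rng(seed)
    neurons = rng.integers(0, n_neurons, n_pulses)
    times = rng.uniform(0.0, horizon_ms, n_pulses)
    best = evaluate(neurons, times)
    for _ in range(n_iters):
        cand_t = np.clip(times + rng.normal(0.0, sigma, n_pulses), 0.0, horizon_ms)
        cand_n = neurons.copy()
        retarget = rng.random(n_pulses) < 0.1   # occasionally move a pulse to another neuron
        cand_n[retarget] = rng.integers(0, n_neurons, int(retarget.sum()))
        score = evaluate(cand_n, cand_t)
        if score > best:                        # keep only improving perturbations
            neurons, times, best = cand_n, cand_t, score
    return neurons, times, best

# dummy stand-in for a real closed-loop evaluation of the stimulated network
demo_eval = lambda n, t: -float(np.mean((t - 50.0) ** 2))
print(optimize_stimulation(demo_eval, n_neurons=100, horizon_ms=200.0, n_pulses=10)[2])
```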

  11. High baseline activity in inferior temporal cortex improves neural and behavioral discriminability during visual categorization

    Directory of Open Access Journals (Sweden)

    Nazli eEmadi

    2014-11-01

    Full Text Available Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically, we found that a low-frequency (< 8 Hz) oscillation in the spike train, prior to and phase-locked with the stimulus onset, was correlated with increased gamma power and neuronal baseline activity. This enhancement of the baseline activity was then followed by an increase in the neural selectivity and the response reliability and eventually a higher behavioral performance.

  12. FNS: an event-driven spiking neural network framework for efficient simulations of large-scale brain models

    OpenAIRE

    Susi, Gianluca; Garces, Pilar; Cristini, Alessandro; Paracone, Emanuele; Salerno, Mario; Maestu, Fernando; Pereda, Ernesto

    2018-01-01

    Limitations in the processing capabilities and memory of today's computers mean that spiking neuron-based (human) whole-brain simulations are inevitably characterized by a compromise between bio-plausibility and computational cost. This translates into brain models composed of a reduced number of neurons and simplified neuron mathematical models. Taking advantage of the sparse character of brain-like computation, event-driven techniques allow us to carry out efficient simulation of large-scale Spiking Neu...

  13. Mapping, Learning, Visualization, Classification, and Understanding of fMRI Data in the NeuCube Evolving Spatiotemporal Data Machine of Spiking Neural Networks.

    Science.gov (United States)

    Kasabov, Nikola K; Doborjeh, Maryam Gholami; Doborjeh, Zohreh Gholami

    2017-04-01

    This paper introduces a new methodology for dynamic learning, visualization, and classification of functional magnetic resonance imaging (fMRI) as spatiotemporal brain data. The method is based on an evolving spatiotemporal data machine of evolving spiking neural networks (SNNs) exemplified by the NeuCube architecture [1]. The method consists of several steps: mapping spatial coordinates of fMRI data into a 3-D SNN cube (SNNc) that represents a brain template; input data transformation into trains of spikes; deep, unsupervised learning in the 3-D SNNc of spatiotemporal patterns from data; supervised learning in an evolving SNN classifier; parameter optimization; and 3-D visualization and model interpretation. Two benchmark case-study problems and datasets are used to illustrate the proposed methodology: fMRI data collected from subjects reading affirmative or negative sentences, and fMRI data collected while subjects read a sentence or viewed a picture. The learned connections in the SNNc represent dynamic spatiotemporal relationships derived from the fMRI data. They can reveal new information about the brain functions under different conditions. The proposed methodology allows, for the first time, analysis of the dynamic functional and structural connectivity of a learned SNN model from fMRI data. This can be used for a better understanding of brain activities and also for online generation of appropriate neurofeedback to subjects for improved brain functions. For example, in this paper, tracing the 3-D SNN model connectivity enabled us for the first time to capture prominent brain functional pathways evoked in language comprehension. We found stronger spatiotemporal interaction between the left dorsolateral prefrontal cortex and the left temporal cortex while reading a negated sentence. This observation is clearly distinguishable from the patterns generated by either reading affirmative sentences or seeing pictures. The proposed NeuCube-based methodology also offers superior classification accuracy

  14. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also explores questions related to the biological underpinnings of neural network models, considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities, and provides all the necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  15. Contribution of spiking activity in the primary auditory cortex to detection in noise.

    Science.gov (United States)

    Christison-Lagay, Kate L; Bennur, Sharath; Cohen, Yale E

    2017-12-01

    A fundamental problem in hearing is detecting a "target" stimulus (e.g., a friend's voice) that is presented with a noisy background (e.g., the din of a crowded restaurant). Despite its importance to hearing, a relationship between spiking activity and behavioral performance during such a "detection-in-noise" task has yet to be fully elucidated. In this study, we recorded spiking activity in primary auditory cortex (A1) while rhesus monkeys detected a target stimulus that was presented with a noise background. Although some neurons were modulated, the response of the typical A1 neuron was not modulated by the stimulus- and task-related parameters of our task. In contrast, we found more robust representations of these parameters in population-level activity: small populations of neurons matched the monkeys' behavioral sensitivity. Overall, these findings are consistent with the hypothesis that the sensory evidence, which is needed to solve such detection-in-noise tasks, is represented in population-level A1 activity and may be available to be read out by downstream neurons that are involved in mediating this task.NEW & NOTEWORTHY This study examines the contribution of A1 to detecting a sound that is presented with a noisy background. We found that population-level A1 activity, but not single neurons, could provide the evidence needed to make this perceptual decision. Copyright © 2017 the American Physiological Society.

  16. Phase shift in the 24-hour rhythm of hippocampal EEG spiking activity in a rat model of temporal lobe epilepsy.

    Science.gov (United States)

    Stanley, David A; Talathi, Sachin S; Parekh, Mansi B; Cordiner, Daniel J; Zhou, Junli; Mareci, Thomas H; Ditto, William L; Carney, Paul R

    2013-09-01

    For over a century epileptic seizures have been known to cluster at specific times of the day. Recent studies have suggested that the circadian regulatory system may become permanently altered in epilepsy, but little is known about how this affects neural activity and the daily pattern of seizures. To investigate, we tracked long-term changes in the rate of spontaneous hippocampal EEG spikes (SPKs) in a rat model of temporal lobe epilepsy. In healthy animals, SPKs oscillated with near 24-h period; however, after injury by status epilepticus, a persistent phase shift of ∼12 h emerged in animals that later went on to develop chronic spontaneous seizures. Additional measurements showed that global 24-h rhythms, including core body temperature and theta state transitions, did not phase shift. Instead, we hypothesized that locally impaired circadian input to the hippocampus might be responsible for the SPK phase shift. This was investigated with a biophysical computer model in which we showed that subtle changes in the relative strengths of circadian input could produce a phase shift in hippocampal neural activity. MRI provided evidence that the medial septum, a putative circadian relay center for the hippocampus, exhibits signs of damage and therefore could contribute to local circadian impairment. Our results suggest that balanced circadian input is critical to maintaining natural circadian phase in the hippocampus and that damage to circadian relay centers, such as the medial septum, may disrupt this balance. We conclude by discussing how abnormal circadian regulation may contribute to the daily rhythms of epileptic seizures and related cognitive dysfunction.

  17. Conflict Resolution as Near-Threshold Decision-Making: A Spiking Neural Circuit Model with Two-Stage Competition for Antisaccadic Task.

    Science.gov (United States)

    Lo, Chung-Chuan; Wang, Xiao-Jing

    2016-08-01

    Automatic responses enable us to react quickly and effortlessly, but they often need to be inhibited so that an alternative, voluntary action can take place. To investigate the brain mechanism of controlled behavior, we studied a biologically based network model of spiking neurons for inhibitory control. In contrast to a simple race between pro- versus anti-response, our model incorporates a sensorimotor remapping module, and an action-selection module endowed with a "Stop" process through tonic inhibition. Both are under the modulation of rule-dependent control. We tested the model by applying it to the well known antisaccade task in which one must suppress the urge to look toward a visual target that suddenly appears, and shift the gaze diametrically away from the target instead. We found that the two-stage competition is crucial for reproducing the complex behavior and neuronal activity observed in the antisaccade task across multiple brain regions. Notably, our model demonstrates two types of errors: fast and slow. Fast errors result from failing to inhibit the quick automatic responses and therefore exhibit very short response times. Slow errors, in contrast, are due to incorrect decisions in the remapping process and exhibit long response times comparable to those of correct antisaccade responses. The model thus reveals a circuit mechanism for the empirically observed slow errors and broad distributions of erroneous response times in antisaccade. Our work suggests that selecting between competing automatic and voluntary actions in behavioral control can be understood in terms of near-threshold decision-making, sharing a common recurrent (attractor) neural circuit mechanism with discrimination in perception.

  18. Brainlab: a Python toolkit to aid in the design, simulation, and analysis of spiking neural networks with the NeoCortical Simulator

    Directory of Open Access Journals (Sweden)

    Richard P Drewes

    2009-05-01

    Full Text Available Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS environment in particular. Brainlab is an integrated model building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS (the NeoCortical Simulator).

  19. Brainlab: A Python Toolkit to Aid in the Design, Simulation, and Analysis of Spiking Neural Networks with the NeoCortical Simulator.

    Science.gov (United States)

    Drewes, Rich; Zou, Quan; Goodman, Philip H

    2009-01-01

    Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS.

  20. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    Science.gov (United States)

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
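
    The data-parallel character of such simulations can be sketched with a vectorized population update, in which every neuron's state is advanced by identical arithmetic; this is what maps naturally onto GPU threads. The leaky integrate-and-fire update below is only an illustration (the paper's model uses conductance-based neurons and a specific basal ganglia connectivity), and replacing the NumPy import with a GPU array library such as CuPy would, in principle, run the same update on a graphics card.

```python
import numpy as np   # a GPU array library (e.g. CuPy) could stand in for NumPy here

def lif_step(v, I_syn, dt=0.1, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Advance a whole population of leaky integrate-and-fire neurons by one
    time step; every neuron runs the same arithmetic, which is what makes the
    update data-parallel."""
    v = v + dt * ((v_rest - v) / tau + I_syn)
    spiked = v >= v_thresh
    return np.where(spiked, v_reset, v), spiked

n = 100_000                                  # population size (arbitrary)
v = np.full(n, -65.0)
for _ in range(1000):                        # 100 ms at dt = 0.1 ms
    I_syn = np.random.normal(0.8, 0.3, n)    # placeholder synaptic drive
    v, spiked = lif_step(v, I_syn)
```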

  1. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  2. How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains.

    Science.gov (United States)

    Ostojic, Srdjan; Brunel, Nicolas; Hakim, Vincent

    2009-08-19

    Functional interactions between neurons in vivo are often quantified by cross-correlation functions (CCFs) between their spike trains. It is therefore essential to understand quantitatively how CCFs are shaped by different factors, such as connectivity, synaptic parameters, and background activity. Here, we study the CCF between two neurons using analytical calculations and numerical simulations. We quantify the role of synaptic parameters, such as peak conductance, decay time, and reversal potential, and analyze how various patterns of connectivity influence CCF shapes. In particular, we find that the symmetry of the CCF distinguishes in general, but not always, the case of shared inputs between two neurons from the case in which they are directly synaptically connected. We systematically examine the influence of background synaptic inputs from the surrounding network that set the baseline firing statistics of the neurons and modulate their response properties. We find that variations in the background noise modify the amplitude of the cross-correlation function as strongly as variations of synaptic strength. In particular, we show that the postsynaptic neuron spiking regularity has a pronounced influence on CCF amplitude. This suggests an efficient and flexible mechanism for modulating functional interactions.
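
    For readers unfamiliar with the quantity being modeled, the empirical CCF is essentially a histogram of pairwise spike-time differences between the two trains. A minimal NumPy sketch follows; the bin width, lag range and toy data are arbitrary choices for illustration.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_ms=1.0, max_lag_ms=50.0):
    """Histogram of time differences (b - a) between every spike pair that
    falls within +/- max_lag_ms."""
    edges = np.arange(-max_lag_ms, max_lag_ms + bin_ms, bin_ms)
    diffs = spikes_b[None, :] - spikes_a[:, None]        # all pairwise lags
    diffs = diffs[np.abs(diffs) <= max_lag_ms]
    counts, _ = np.histogram(diffs, bins=edges)
    return edges[:-1] + bin_ms / 2, counts

# toy example: train B tends to fire about 5 ms after train A
rng = np.random.default_rng(1)
a = np.sort(rng.uniform(0, 10_000, 500))                 # spike times in ms
b = np.sort(np.concatenate([a[:250] + 5 + rng.normal(0, 1, 250),
                            rng.uniform(0, 10_000, 250)]))
lags, counts = cross_correlogram(a, b)
print(lags[np.argmax(counts)])                           # peak near +5 ms
```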

  3. Spike-based population coding and working memory.

    Directory of Open Access Journals (Sweden)

    Martin Boerlin

    2011-02-01

    Full Text Available Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.

  4. Fractal dimension analysis for spike detection in low SNR extracellular signals.

    Science.gov (United States)

    Salmasi, Mehrdad; Büttner, Ulrich; Glasauer, Stefan

    2016-06-01

    Many algorithms have been suggested for detection and sorting of spikes in extracellular recording. Nevertheless, it is still challenging to detect spikes in low signal-to-noise ratios (SNR). We propose a spike detection algorithm that is based on the fractal properties of extracellular signals and can detect spikes in low SNR regimes. Semi-intact spikes are low-amplitude spikes whose shapes are almost preserved. The detection of these spikes can significantly enhance the performance of multi-electrode recording systems. Semi-intact spikes are simulated by adding three noise components to a spike train: thermal noise, inter-spike noise, and spike-level noise. We show that simulated signals have fractal properties which make them proper candidates for fractal analysis. Then we use fractal dimension as the main core of our spike detection algorithm and call it fractal detector. The performance of the fractal detector is compared with three frequently used spike detectors. We demonstrate that in low SNR, the fractal detector has the best performance and results in the highest detection probability. It is shown that, in contrast to the other three detectors, the performance of the fractal detector is independent of inter-spike noise power and that variations in spike shape do not alter its performance. Finally, we use the fractal detector for spike detection in experimental data and similar to simulations, it is shown that the fractal detector has the best performance in low SNR regimes. The detection of low-amplitude spikes provides more information about the neural activity in the vicinity of the recording electrodes. Our results suggest using the fractal detector as a reliable and robust method for detecting semi-intact spikes in low SNR extracellular signals.
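
    The general flavor of such a detector can be sketched as follows: estimate a fractal dimension over sliding windows (here with the standard Higuchi estimator, which may differ from the authors' choice) and flag windows whose dimension departs strongly from the bulk of the recording. The window size, z-score criterion and threshold are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi estimate of the fractal dimension of a 1-D signal segment."""
    n = len(x)
    lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            curve = np.sum(np.abs(np.diff(x[idx])))
            lk.append(curve * (n - 1) / ((len(idx) - 1) * k ** 2))
        lengths.append(np.mean(lk))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lengths), 1)
    return slope

def fd_detect(signal, win=64, step=16, z_thresh=3.0):
    """Flag sliding windows whose fractal dimension is atypical for the recording."""
    starts = np.arange(0, len(signal) - win, step)
    fds = np.array([higuchi_fd(signal[s:s + win]) for s in starts])
    z = (fds - fds.mean()) / fds.std()
    return starts[np.abs(z) > z_thresh]

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 4000)
x[2000:2008] += np.hanning(8) * 6.0        # embed one low-amplitude synthetic spike
print(fd_detect(x, z_thresh=2.0))          # sample offsets of windows with atypical fractal dimension
```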

  5. Genetic control of active neural circuits

    Directory of Open Access Journals (Sweden)

    Leon Reijmers

    2009-12-01

    Full Text Available The use of molecular tools to study the neurobiology of complex behaviors has been hampered by an inability to target the desired changes to relevant groups of neurons. Specific memories and specific sensory representations are sparsely encoded by a small fraction of neurons embedded in a sea of morphologically and functionally similar cells. In this review we discuss genetics techniques that are being developed to address this difficulty. In several studies the use of promoter elements that are responsive to neural activity have been used to drive long lasting genetic alterations into neural ensembles that are activated by natural environmental stimuli. This approach has been used to examine neural activity patterns during learning and retrieval of a memory, to examine the regulation of receptor trafficking following learning and to functionally manipulate a specific memory trace. We suggest that these techniques will provide a general approach to experimentally investigate the link between patterns of environmentally activated neural firing and cognitive processes such as perception and memory.

  6. Neural activation in stress-related exhaustion

    DEFF Research Database (Denmark)

    Gavelin, Hanna Malmberg; Neely, Anna Stigsdotter; Andersson, Micael

    2017-01-01

    The primary purpose of this study was to investigate the association between burnout and neural activation during working memory processing in patients with stress-related exhaustion. Additionally, we investigated the neural effects of cognitive training as part of stress rehabilitation. Fifty-five patients with a clinical diagnosis of exhaustion disorder were administered the n-back task during fMRI scanning at baseline. Ten patients completed a 12-week cognitive training intervention, as an addition to stress rehabilitation. Eleven patients served as a treatment-as-usual control group. At baseline

  7. A fast L(p) spike alignment metric.

    Science.gov (United States)

    Dubbs, Alexander J; Seiler, Brad A; Magnasco, Marcelo O

    2010-11-01

    The metrization of the space of neural responses is an ongoing research program seeking to find natural ways to describe, in geometrical terms, the sets of possible activities in the brain. One component of this program is spike metrics-notions of distance between two spike trains recorded from a neuron. Alignment spike metrics work by identifying "equivalent" spikes in both trains. We present an alignment spike metric having L(p) underlying geometrical structure; the L(2) version is Euclidean and is suitable for further embedding in Euclidean spaces by multidimensional scaling methods or related procedures. We show how to implement a fast algorithm for the computation of this metric based on bipartite graph matching theory.
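
    A simplified alignment-style distance in the same spirit (not the authors' exact metric) can be written with an optimal bipartite matching: spikes of the two trains are paired under an |dt|^p cost, and each unmatched spike incurs a fixed penalty. The penalty value and padding construction below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def alignment_distance(train_a, train_b, p=2, penalty=10.0):
    """Pair spikes across two trains by optimal bipartite matching under an
    |dt|**p cost; each unmatched spike costs penalty**p.  Illustrative only."""
    na, nb = len(train_a), len(train_b)
    size = na + nb                                   # extra rows/cols act as "unmatched" slots
    cost = np.full((size, size), penalty ** p, dtype=float)
    cost[:na, :nb] = np.abs(train_a[:, None] - train_b[None, :]) ** p
    cost[na:, nb:] = 0.0                             # pairing two dummy slots is free
    rows, cols = linear_sum_assignment(cost)
    return float(np.sum(cost[rows, cols]) ** (1.0 / p))

a = np.array([10.0, 52.0, 200.0])                    # spike times in ms
b = np.array([12.0, 49.0, 205.0, 400.0])
print(alignment_distance(a, b))                      # three close matches plus one unmatched spike
```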

  8. Traditional waveform based spike sorting yields biased rate code estimates.

    Science.gov (United States)

    Ventura, Valérie

    2009-04-28

    Much of neuroscience has to do with relating neural activity and behavior or environment. One common measure of this relationship is the firing rates of neurons as functions of behavioral or environmental parameters, often called tuning functions and receptive fields. Firing rates are estimated from the spike trains of neurons recorded by electrodes implanted in the brain. Individual neurons' spike trains are not typically readily available, because the signal collected at an electrode is often a mixture of activities from different neurons and noise. Extracting individual neurons' spike trains from voltage signals, which is known as spike sorting, is one of the most important data analysis problems in neuroscience, because it has to be undertaken prior to any analysis of neurophysiological data in which more than one neuron is believed to be recorded on a single electrode. All current spike-sorting methods consist of clustering the characteristic spike waveforms of neurons. The sequence of first spike sorting based on waveforms, then estimating tuning functions, has long been the accepted way to proceed. Here, we argue that the covariates that modulate tuning functions also contain information about spike identities, and that if tuning information is ignored for spike sorting, the resulting tuning function estimates are biased and inconsistent, unless spikes can be classified with perfect accuracy. This means, for example, that the commonly used peristimulus time histogram is a biased estimate of the firing rate of a neuron that is not perfectly isolated. We further argue that the correct conceptual way to view the problem is to note that spike sorting provides information about rate estimation and vice versa, so that the two relationships should be considered simultaneously rather than sequentially. Indeed, we show that when spike sorting and tuning-curve estimation are performed in parallel, unbiased estimates of tuning curves can be recovered even from

  9. Extracellular ATP induces spikes in cytosolic free Ca(2+) but not in NADPH oxidase activity in neutrophils

    DEFF Research Database (Denmark)

    Brasen, Jens Christian; Olsen, Lars Folke; Hallett, Maurice B

    2011-01-01

    , we show that the generation of reactive oxygen species by neutrophils adherent to glass was accelerated by ATP. The step-up in NADPH oxidase activity followed the first elevation of cytosolic Ca(2+) but, despite subsequent spikes in Ca(2+) concentration, no oscillations in oxidase activity could...

  10. Integration and transmission of distributed deterministic neural activity in feed-forward networks.

    Science.gov (United States)

    Asai, Yoshiyuki; Villa, Alessandro E P

    2012-01-24

    A ten-layer feed-forward network characterized by diverging/converging patterns of projection between successive layers of regular spiking (RS) neurons is activated by an external spatiotemporal input pattern fed to Layer 1 in the presence of stochastic background activities fed to all layers. We used three dynamical systems to derive the external input spike trains including the temporal information, and three types of neuron models for the network, i.e., a network formed either by neurons modeled with exponential integrate-and-fire dynamics (RS-EIF, Fourcaud-Trocmé et al., 2003), by simple spiking neurons (RS-IZH, Izhikevich, 2004) or by multiple-timescale adaptive threshold neurons (RS-MAT, Kobayashi et al., 2009), given five intensities for the background activity. The assessment of the temporal structure embedded in the output spike trains was carried out by detecting the preferred firing sequences for the reconstruction of de-noised spike trains (Asai and Villa, 2008). We confirmed that the RS-MAT model is likely to be more efficient in integrating and transmitting the temporal structure embedded in the external input. We observed that this structure could be propagated not only up to the 10th layer but in some cases it was retained better beyond the 4th downstream layer. This study suggests that diverging/converging network structures, by the propagation of synfire activity, could play a key role in the transmission of complex temporal patterns of discharges associated with deterministic nonlinear activity. This article is part of a Special Issue entitled Neural Coding. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Sensitive red protein calcium indicators for imaging neural activity

    OpenAIRE

    Dana, Hod; Mohar, Boaz; Sun, Yi; Narayan, Sujatha; Gordus, Andrew; Hasseman, Jeremy P; Tsegaye, Getahun; Holt, Graham T.; Hu, Amy; Walpita, Deepika; Patel, Ronak; Macklin, John J.; Bargmann, Cornelia I; Ahrens, Misha B.; Schreiter, Eric R

    2016-01-01

    eLife digest Neurons encode information with brief electrical pulses called spikes. Monitoring spikes in large populations of neurons is a powerful method for studying how networks of neurons process information and produce behavior. This activity can be detected using fluorescent protein indicators, or "probes", which light up when neurons are active. The best existing probes produce green fluorescence. However, red fluorescent probes would allow us to see deeper into the brain, and could al...

  12. Glutamate gated spiking Neuron Model.

    Science.gov (United States)

    Deka, Krisha M; Roy, Soumik

    2014-01-01

    Biological neuron models are mainly used to analyze the behavior of neural networks, with neurons often described in terms of firing rates, i.e., an analog signal. The Izhikevich neuron model is an efficient, powerful model of a spiking neuron. This model is a reduction of the Hodgkin-Huxley model to a two-variable system and is capable of producing rich firing patterns for many biological neurons. In this paper, the regular spiking (RS) firing pattern is used to simulate the spiking of a glutamate-gated postsynaptic membrane. Simulation is done in the MATLAB environment for the excitatory action of synapses, and an analogous simulation of the spiking of the excitatory postsynaptic membrane potential is obtained.
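
    Although the paper's simulations were done in MATLAB, the regular-spiking Izhikevich model itself is compact enough to reproduce in a few lines. The sketch below uses the standard published RS parameters and replaces the glutamate-gated synaptic conductance with a simple step current, so it only illustrates the qualitative firing pattern.

```python
import numpy as np

# Standard regular-spiking (RS) parameter set from Izhikevich (2003)
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, T = 0.1, 1000.0                       # ms
v, u = -65.0, b * -65.0
spike_times = []
for i in range(int(T / dt)):
    I = 10.0 if i * dt > 100.0 else 0.0   # step current standing in for glutamatergic drive
    v += dt * (0.04 * v ** 2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                         # spike cut-off and after-spike reset
        spike_times.append(i * dt)
        v, u = c, u + d
print(len(spike_times), "spikes in", T, "ms")
```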

  13. The Ising decoder: reading out the activity of large neural ensembles.

    Science.gov (United States)

    Schaub, Michael T; Schultz, Simon R

    2012-02-01

    The Ising model has recently received much attention for the statistical description of neural spike train data. In this paper, we propose and demonstrate its use for building decoders capable of predicting, on a millisecond timescale, the stimulus represented by a pattern of neural activity. After fitting to a training dataset, the Ising decoder can be applied "online" for instantaneous decoding of test data. While such models can be fit exactly using Boltzmann learning, this approach rapidly becomes computationally intractable as neural ensemble size increases. We show that several approaches, including the Thouless-Anderson-Palmer (TAP) mean field approach from statistical physics, and the recently developed Minimum Probability Flow Learning (MPFL) algorithm, can be used for rapid inference of model parameters in large-scale neural ensembles. Use of the Ising model for decoding, unlike other problems such as functional connectivity estimation, requires estimation of the partition function. As this involves summation over all possible responses, this step can be limiting. Mean field approaches avoid this problem by providing an analytical expression for the partition function. We demonstrate these decoding techniques by applying them to simulated neural ensemble responses from a mouse visual cortex model, finding an improvement in decoder performance for a model with heterogeneous as opposed to homogeneous neural tuning and response properties. Our results demonstrate the practicality of using the Ising model to read out, or decode, spatial patterns of activity comprised of many hundreds of neurons.
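
    The decoding step itself is conceptually simple once stimulus-conditioned Ising parameters are available: score a response pattern under each model and pick the most likely stimulus. In the sketch below the brute-force partition function is only feasible for a handful of neurons, and the random "fitted" parameters are placeholders; for real ensembles one would rely on the Boltzmann, TAP or MPF fitting approaches discussed above.

```python
import numpy as np
from itertools import product

def log_prob_ising(r, h, J):
    """Log-probability of binary response r under a pairwise Ising model with
    fields h and couplings J; the partition function is summed by brute force,
    which is only practical for small ensembles (illustration only)."""
    def energy(x):
        return h @ x + 0.5 * x @ J @ x
    logZ = np.log(sum(np.exp(energy(np.array(x)))
                      for x in product([0, 1], repeat=len(h))))
    return energy(r) - logZ

def decode(r, models):
    """MAP decoding with equal stimulus priors: pick the stimulus whose fitted
    (h, J) makes the observed pattern r most likely."""
    scores = [log_prob_ising(r, h, J) for h, J in models]
    return int(np.argmax(scores))

# toy example: 5 neurons, two hypothetical stimulus-conditioned models
rng = np.random.default_rng(2)
models = [(rng.normal(0, 1, 5), rng.normal(0, 0.5, (5, 5))) for _ in range(2)]
models = [(h, (J + J.T) / 2) for h, J in models]      # symmetric couplings
r = (rng.random(5) > 0.5).astype(float)
print(decode(r, models))
```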

  14. Short-Range Temporal Interactions in Sleep; Hippocampal Spike Avalanches Support a Large Milieu of Sequential Activity Including Replay.

    Directory of Open Access Journals (Sweden)

    J Matthew Mahoney

    Full Text Available Hippocampal neural systems consolidate multiple complex behaviors into memory. However, the temporal structure of neural firing supporting complex memory consolidation is unknown. Replay of hippocampal place cells during sleep supports the view that a simple repetitive behavior modifies sleep firing dynamics, but does not explain how multiple episodes could be integrated into associative networks for recollection during future cognition. Here we decode sequential firing structure within spike avalanches of all pyramidal cells recorded in sleeping rats after running in a circular track. We find that short sequences that combine into multiple long sequences capture the majority of the sequential structure during sleep, including replay of hippocampal place cells. The ensemble, however, is not optimized for maximally producing the behavior-enriched episode. Thus behavioral programming of sequential correlations occurs at the level of short-range interactions, not whole behavioral sequences and these short sequences are assembled into a large and complex milieu that could support complex memory consolidation.

  15. Neural predictive control for active buffet alleviation

    Science.gov (United States)

    Pado, Lawrence E.; Lichtenwalner, Peter F.; Liguore, Salvatore L.; Drouin, Donald

    1998-06-01

    The adaptive neural control of aeroelastic response (ANCAR) and the affordable loads and dynamics independent research and development (IRAD) programs at the Boeing Company jointly examined using neural network based active control technology for alleviating undesirable vibration and aeroelastic response in a scale model aircraft vertical tail. The potential benefits of adaptive control include reducing aeroelastic response associated with buffet and atmospheric turbulence, increasing flutter margins, and reducing response associated with nonlinear phenomena like limit cycle oscillations. By reducing vibration levels and thus loads, aircraft structures can have lower acquisition cost, reduced maintenance, and extended lifetimes. Wind tunnel tests were undertaken on a rigid 15% scale aircraft in Boeing's mini-speed wind tunnel, which is used for testing at very low air speeds up to 80 mph. The model included a dynamically scaled flexible tail consisting of an aluminum spar with balsa wood cross sections and a hydraulically powered rudder. Neural predictive control was used to actuate the vertical tail rudder in response to strain gauge feedback to alleviate buffeting effects. A first-mode RMS strain reduction of 50% was achieved. The neural predictive control system was developed and implemented by the Boeing Company to provide an intelligent, adaptive control architecture for smart structures applications with automated synthesis, self-optimization, real-time adaptation, nonlinear control, and fault tolerance capabilities. It is designed to solve complex control problems through a process of automated synthesis, eliminating costly control design and surpassing it in many instances by accounting for real-world nonlinearities.

  16. A Multiple-Plasticity Spiking Neural Network Embedded in a Closed-Loop Control System to Model Cerebellar Pathologies.

    Science.gov (United States)

    Geminiani, Alice; Casellato, Claudia; Antonietti, Alberto; D'Angelo, Egidio; Pedrocchi, Alessandra

    2017-01-10

    The cerebellum plays a crucial role in sensorimotor control, and cerebellar disorders compromise adaptation and learning of motor responses. However, the link between alterations at the network level and cerebellar dysfunction is still unclear. In principle, this understanding would benefit from the development of an artificial system embedding the salient neuronal and plastic properties of the cerebellum and operating in closed loop. To this aim, we have exploited a realistic spiking computational model of the cerebellum to analyze the network correlates of cerebellar impairment. The model was modified to reproduce three different damages of the cerebellar cortex: (i) a loss of the main output neurons (Purkinje Cells), (ii) a lesion to the main cerebellar afferents (Mossy Fibers), and (iii) a damage to a major mechanism of synaptic plasticity (Long Term Depression). The modified network models were challenged with an Eye-Blink Classical Conditioning test, a standard learning paradigm used to evaluate cerebellar impairment, in which the outcome was compared to reference results obtained in human or animal experiments. In all cases, the model reproduced the partial and delayed conditioning typical of the pathologies, indicating that an intact cerebellar cortex functionality is required to accelerate learning by transferring acquired information to the cerebellar nuclei. Interestingly, depending on the type of lesion, the redistribution of synaptic plasticity and response timing varied greatly, generating specific adaptation patterns. Thus, not only does the present work extend the generalization capabilities of the cerebellar spiking model to pathological cases, but it also predicts how changes at the neuronal level are distributed across the network, making it usable to infer cerebellar circuit alterations occurring in cerebellar pathologies.

  17. Cultured neural networks: Optimisation of patterned network adhesiveness and characterisation of their neural activity

    NARCIS (Netherlands)

    Rutten, Wim; Ruardij, T.G.; Marani, Enrico; Roelofsen, B.H.

    2006-01-01

    One type of future, improved neural interface is the "cultured probe". It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA) on a planar substrate, each electrode being covered and

  18. Activity Patterns of Cultured Neural Networks on Micro Electrode Arrays

    National Research Council Canada - National Science Library

    Rutten, Wim

    2001-01-01

    A hybrid neuro-electronic interface is a cell-cultured micro electrode array, acting as a neural information transducer for stimulation and/or recording of neural activity in the brain or the spinal cord...

  19. Neural control and precision of flight muscle activation in Drosophila.

    Science.gov (United States)

    Lehmann, Fritz-Olaf; Bartussek, Jan

    2017-01-01

    Precision of motor commands is highly relevant to a wide range of locomotor behaviors, including stabilization of body posture, heading control and directed escape responses. While posture stability and heading control in walking and swimming animals benefit from high friction via ground reaction forces and elevated viscosity of water, respectively, flying animals have to cope with comparatively little aerodynamic friction on body and wings. Although low frictional damping in flight is the key to the extraordinary aerial performance and agility of flying birds, bats and insects, it challenges these animals with extraordinary demands on sensory integration and motor precision. Our review focuses on the dynamic precision with which Drosophila activates its flight muscular system during maneuvering flight, considering relevant studies on neural and muscular mechanisms of thoracic propulsion. In particular, we tackle the precision with which flies adjust power output of asynchronous power muscles and synchronous flight control muscles by monitoring muscle calcium and spike timing within the stroke cycle. A substantial proportion of the review is devoted to the significance of visual and proprioceptive feedback loops for wing motion control, including sensory integration at the cellular level. We highlight that sensory feedback is the basis for precise heading control and body stability in flies.

  20. The SARS coronavirus spike glycoprotein is selectively recognized by lung surfactant protein D and activates macrophages

    DEFF Research Database (Denmark)

    Leth-Larsen, Rikke; Zhong, Fei; Chow, Vincent T K

    2007-01-01

    The severe acute respiratory syndrome coronavirus (SARS-CoV) infects host cells with its surface glycosylated spike-protein (S-protein). Here we expressed the SARS-CoV S-protein to investigate its interactions with innate immune mechanisms in the lung. The purified S-protein was detected as a 210 kDa glycosylated protein. It was not secreted in the presence of tunicamycin and was detected as a 130 kDa protein in the cell lysate. The purified S-protein bound to Vero but not 293T cells and was itself recognized by lung surfactant protein D (SP-D), a collectin found in the lung alveoli. The binding required Ca(2+) and was inhibited by maltose. The serum collectin, mannan-binding lectin (MBL), exhibited no detectable binding to the purified S-protein. S-protein binds and activates macrophages but not dendritic cells (DCs). It suggests that SARS-CoV interacts with innate immune mechanisms in the lung...

  1. Coding stimulus amplitude by correlated neural activity.

    Science.gov (United States)

    Metzen, Michael G; Ávila-Åkerberg, Oscar; Chacron, Maurice J

    2015-04-01

    While correlated activity is observed ubiquitously in the brain, its role in neural coding has remained controversial. Recent experimental results have demonstrated that correlated but not single-neuron activity can encode the detailed time course of the instantaneous amplitude (i.e., envelope) of a stimulus. These have furthermore demonstrated that such coding required and was optimal for a nonzero level of neural variability. However, a theoretical understanding of these results is still lacking. Here we provide a comprehensive theoretical framework explaining these experimental findings. Specifically, we use linear response theory to derive an expression relating the correlation coefficient to the instantaneous stimulus amplitude, which takes into account key single-neuron properties such as firing rate and variability as quantified by the coefficient of variation. The theoretical prediction was in excellent agreement with numerical simulations of various integrate-and-fire type neuron models for various parameter values. Further, we demonstrate a form of stochastic resonance as optimal coding of stimulus variance by correlated activity occurs for a nonzero value of noise intensity. Thus, our results provide a theoretical explanation of the phenomenon by which correlated but not single-neuron activity can code for stimulus amplitude and how key single-neuron properties such as firing rate and variability influence such coding. Correlation coding by correlated but not single-neuron activity is thus predicted to be a ubiquitous feature of sensory processing for neurons responding to weak input.

  2. Spike sorting for polytrodes: a divide and conquer approach

    Directory of Open Access Journals (Sweden)

    Nicholas V. Swindale

    2014-02-01

    Full Text Available In order to determine patterns of neural activity, spike signals recorded by extracellular electrodes have to be clustered (sorted) with the aim of ensuring that each cluster represents all the spikes generated by an individual neuron. Many methods for spike sorting have been proposed, but few are easily applicable to recordings from polytrodes, which may have 16 or more recording sites. As with tetrodes, these are spaced sufficiently closely that signals from single neurons will usually be recorded on several adjacent sites. Although this offers a better chance of distinguishing neurons with similarly shaped spikes, sorting is difficult in such cases because of the high dimensionality of the space in which the signals must be classified. This report details a method for spike sorting based on a divide and conquer approach. Clusters are initially formed by assigning each event to the channel on which it is largest. Each channel-based cluster is then sub-divided into as many distinct clusters as possible. These are then recombined on the basis of pairwise tests into a final set of clusters. Pairwise tests are also performed to establish how distinct each cluster is from the others. A modified gradient ascent clustering (GAC) algorithm is used to do the clustering. The method can sort spikes with minimal user input in times comparable to real time for recordings lasting up to 45 minutes. Our results illustrate some of the difficulties inherent in spike sorting, including changes in spike shape over time. We show that some physiologically distinct units may have very similar spike shapes. We show that RMS measures of spike shape similarity are not sensitive enough to discriminate clusters that can otherwise be separated by principal components analysis. Hence spike sorting based on least-squares matching to templates may be unreliable. Our methods should be applicable to tetrodes and scalable to larger multi-electrode arrays (MEAs).
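
    The first step of the divide-and-conquer strategy (assign each detected event to the channel on which it is largest) is straightforward to illustrate. The peak-to-peak amplitude criterion and the array shapes below are assumptions made for the sake of the example, and the subsequent sub-division and recombination steps are omitted.

```python
import numpy as np

def assign_to_peak_channel(waveforms):
    """Group detected events (n_events x n_channels x n_samples) by the channel
    on which each event's peak-to-peak amplitude is largest; these per-channel
    groups would then be sub-clustered and recombined by pairwise tests."""
    ptp = waveforms.max(axis=2) - waveforms.min(axis=2)   # n_events x n_channels
    best = ptp.argmax(axis=1)
    return {int(ch): np.flatnonzero(best == ch) for ch in np.unique(best)}

events = np.random.default_rng(3).normal(size=(1000, 16, 40))   # toy polytrode events
groups = assign_to_peak_channel(events)
print({ch: len(idx) for ch, idx in groups.items()})
```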

  3. A Generalized PWC Spiking Neuron Model and Its Neuron-Like Activities and Burst-Related Bifurcations

    Science.gov (United States)

    Yamashita, Yutaro; Torikai, Hiroyuki

    A generalized version of a piece-wise constant (PWC) spiking neuron model is presented. It is shown that the generalization enables the model to reproduce 20 neuron-like activities of the Izhikevich model. Among these activities, we analyze tonic bursting. Using an analytical one-dimensional iterative map, it is shown that the model can reproduce a burst-related bifurcation scenario that is qualitatively similar to that of the Izhikevich model. The bifurcation scenario can be observed in actual hardware.

  4. Dichotomous Effects of Mu Opioid Receptor Activation on Striatal Low-Threshold Spike Interneurons

    Directory of Open Access Journals (Sweden)

    Rasha Elghaba

    2017-12-01

    Full Text Available Striatal low-threshold spike interneurons (LTSIs) are tonically active neurons that express GABA and nitric oxide synthase and are involved in information processing as well as neurovascular coupling. While mu opioid receptors (MORs) and their ligand enkephalin are prominent in the striatum, their action on LTSIs has not been investigated. We addressed this issue by carrying out whole-cell recordings in transgenic mice in which NPY-expressing neurons are marked with green fluorescent protein (GFP). The MOR agonist [D-Ala2, N-MePhe4, Gly-ol]-enkephalin (DAMGO) produced dual effects on subpopulations of LTSIs. DAMGO caused inhibitory effects, accompanied by decreases in spontaneous firing, in 62% of LTSIs, while depolarizing effects (accompanied by an increase in spontaneous firing) were observed in 23% of LTSIs tested. The dual effects of DAMGO persisted in the presence of tetrodotoxin (TTX), a sodium channel blocker, or in the presence of the nicotinic acetylcholine receptor antagonist mecamylamine. However, in the presence of either the GABAA receptor antagonist picrotoxin or the muscarinic cholinergic receptor antagonist atropine, DAMGO only elicited inhibitory effects on LTSIs. Furthermore, we found that DAMGO decreased the amplitude and frequency of spontaneous GABAergic events. Unexpectedly, these effects of DAMGO on spontaneous GABAergic events disappeared after blockade of the muscarinic and nicotinic cholinergic receptors, showing that GABA inputs to LTSIs are not directly modulated by presynaptic MORs. These findings suggest that activation of MORs affects LTSIs both directly and indirectly, through modulation of GABAergic and cholinergic tones. The complex balance between direct and indirect effects determines the net effect of DAMGO on LTSIs.

  5. Distributed dynamical computation in neural circuits with propagating coherent activity patterns.

    Directory of Open Access Journals (Sweden)

    Pulin Gong

    2009-12-01

    Full Text Available Activity in neural circuits is spatiotemporally organized. Its spatial organization consists of multiple, localized coherent patterns, or patchy clusters. These patterns propagate across the circuits over time. This type of collective behavior has ubiquitously been observed, both in spontaneous activity and evoked responses; its function, however, has remained unclear. We construct a spatially extended, spiking neural circuit that generates emergent spatiotemporal activity patterns, thereby capturing some of the complexities of the patterns observed empirically. We elucidate what kind of fundamental function these patterns can serve by showing how they process information. As self-sustained objects, localized coherent patterns can signal information by propagating across the neural circuit. Computational operations occur when these emergent patterns interact, or collide with each other. The ongoing behaviors of these patterns naturally embody both distributed, parallel computation and cascaded logical operations. Such distributed computations enable the system to work in an inherently flexible and efficient way. Our work leads us to propose that propagating coherent activity patterns are the underlying primitives with which neural circuits carry out distributed dynamical computation.

  6. Dopamine D4 receptor activation increases hippocampal gamma oscillations by enhancing synchronization of fast-spiking interneurons.

    Directory of Open Access Journals (Sweden)

    Richard Andersson

    Full Text Available BACKGROUND: Gamma oscillations are electric activity patterns of the mammalian brain hypothesized to serve attention, sensory perception, working memory and memory encoding. They are disrupted or altered in schizophrenic patients, with associated cognitive deficits that persist in spite of treatment with antipsychotics. Because cognitive symptoms are a core feature of schizophrenia, it is relevant to explore signaling pathways that potentially regulate gamma oscillations. Dopamine has been reported to decrease gamma oscillation power via D1-like receptors. Based on the expression pattern of D4 receptors (D4R) in hippocampus, and the pharmacological effects of D4R ligands in animals, we hypothesize that they are in a position to regulate gamma oscillations as well. METHODOLOGY/PRINCIPAL FINDINGS: To address this hypothesis we use rat hippocampal slices and kainate-induced gamma oscillations. Local field potential recordings as well as intracellular recordings of pyramidal cells, fast-spiking and non-fast-spiking interneurons were carried out. We show that D4R activation with the selective ligand PD168077 increases gamma oscillation power, which can be blocked by the D4R-specific antagonist L745,870 as well as by the antipsychotic drug clozapine. Pyramidal cells did not exhibit changes in excitatory or inhibitory synaptic current amplitudes, but inhibitory currents became more coherent with the oscillations after application of PD168077. Fast-spiking, but not non-fast-spiking, interneurons increase their action potential phase-coupling and coherence with regard to ongoing gamma oscillations in response to D4R activation. Among several possible mechanisms, we found that the NMDA receptor antagonist AP5 also blocks the D4R-mediated increase in gamma oscillation power. CONCLUSIONS/SIGNIFICANCE: We conclude that D4R activation affects fast-spiking interneuron synchronization and thereby increases gamma power by an NMDA receptor-dependent mechanism.

  7. Local Activity and Causal Connectivity in Children with Benign Epilepsy with Centrotemporal Spikes.

    Directory of Open Access Journals (Sweden)

    Yun Wu

    Full Text Available The aim of the current study was to localize the epileptic focus and characterize its causal relation with other brain regions, in order to understand the cognitive deficits in children with benign childhood epilepsy with centrotemporal spikes (BECTS). Resting-state functional magnetic resonance imaging (fMRI) was performed in 37 children with BECTS and 25 children matched for age, sex and educational achievement. We identified the potential epileptogenic zone (EZ) by comparing the amplitude of low frequency fluctuation (ALFF) of spontaneous blood oxygenation level dependent fMRI signals between the groups. Granger causality analysis was applied to explore the causal effect between the EZ and the whole brain. Compared with controls, children with BECTS had significantly increased ALFF in the right postcentral gyrus and bilateral calcarine cortex, and decreased ALFF in the left anterior cingulate cortex, bilateral putamen/caudate, and left cerebellum. ALFF values in the putamen/caudate were positively correlated with verbal IQ scores in patients. The ALFF values in the cerebellum and performance IQ scores were negatively correlated in patients. These results suggest that ALFF disturbances in the putamen/caudate and cerebellum play an important role in BECTS cognitive dysfunction. Compared with controls, the patients showed an increased driving effect from the EZ to the right medial frontal cortex and posterior cingulate cortex and decreased causal effects from the EZ to the left inferior frontal gyrus. The causal effect of the left inferior frontal gyrus negatively correlated with disease duration, which suggests a relation between the epileptiform activity and language impairment. Altogether, these findings provide additional insight into the neurophysiological mechanisms of epileptogenesis and cognitive dysfunction associated with BECTS.

  8. Local Activity and Causal Connectivity in Children with Benign Epilepsy with Centrotemporal Spikes.

    Science.gov (United States)

    Wu, Yun; Ji, Gong-Jun; Zang, Yu-Feng; Liao, Wei; Jin, Zhen; Liu, Ya-Li; Li, Ke; Zeng, Ya-Wei; Fang, Fang

    2015-01-01

    The aim of the current study was to localize the epileptic focus and characterize its causal relation with other brain regions, in order to understand the cognitive deficits in children with benign childhood epilepsy with centrotemporal spikes (BECTS). Resting-state functional magnetic resonance imaging (fMRI) was performed in 37 children with BECTS and 25 children matched for age, sex and educational achievement. We identified the potential epileptogenic zone (EZ) by comparing the amplitude of low frequency fluctuation (ALFF) of spontaneous blood oxygenation level dependent fMRI signals between the groups. Granger causality analysis was applied to explore the causal effect between the EZ and the whole brain. Compared with controls, children with BECTS had significantly increased ALFF in the right postcentral gyrus and bilateral calcarine cortex, and decreased ALFF in the left anterior cingulate cortex, bilateral putamen/caudate, and left cerebellum. ALFF values in the putamen/caudate were positively correlated with verbal IQ scores in patients. The ALFF values in the cerebellum and performance IQ scores were negatively correlated in patients. These results suggest that ALFF disturbances in the putamen/caudate and cerebellum play an important role in BECTS cognitive dysfunction. Compared with controls, the patients showed an increased driving effect from the EZ to the right medial frontal cortex and posterior cingulate cortex and decreased causal effects from the EZ to the left inferior frontal gyrus. The causal effect of the left inferior frontal gyrus negatively correlated with disease duration, which suggests a relation between the epileptiform activity and language impairment. Altogether, these findings provide additional insight into the neurophysiological mechanisms of epileptogenesis and cognitive dysfunction associated with BECTS.

  9. iRaster: a novel information visualization tool to explore spatiotemporal patterns in multiple spike trains.

    Science.gov (United States)

    Somerville, J; Stuart, L; Sernagor, E; Borisyuk, R

    2010-12-15

    Over the last few years, simultaneous recordings of multiple spike trains have become widely used by neuroscientists. Therefore, it is important to develop new tools for analysing multiple spike trains in order to gain new insight into the function of neural systems. This paper describes how techniques from the field of visual analytics can be used to reveal specific patterns of neural activity. An interactive raster plot called iRaster has been developed. This software incorporates a selection of statistical procedures for visualization and flexible manipulation of multiple spike trains. For example, there are several procedures for the re-ordering of spike trains which can be used to unmask activity propagation, spiking synchronization, and many other important features of multiple spike train activity. Additionally, iRaster includes a rate representation of neural activity, a combined representation of rate and spikes, spike train removal and time interval removal. Furthermore, it provides multiple coordinated views, time and spike train zooming windows, a fisheye lens distortion, and dissemination facilities. iRaster is a user-friendly, interactive, flexible tool which supports a broad range of visual representations. It has been successfully used to analyse both synthetic and experimentally recorded datasets. In this paper, the main features of iRaster are described and its performance and effectiveness are demonstrated using various types of data, including experimental multi-electrode array recordings from the ganglion cell layer in mouse retina. iRaster is part of an ongoing research project called VISA (Visualization of Inter-Spike Associations) at the Visualization Lab in the University of Plymouth. The overall aim of the VISA project is to provide neuroscientists with the ability to freely explore and analyse their data. The software is freely available from the Visualization Lab website (see www.plymouth.ac.uk/infovis). Copyright © 2010

  10. Glycan shield and fusion activation of a deltacoronavirus spike glycoprotein fine-tuned for enteric infections

    NARCIS (Netherlands)

    Xiong, Xiaoli; Tortorici, M Alejandra; Snijder, Joost; Yoshioka, Craig; Walls, Alexandra C; Li, Wentao; McGuire, Andrew T; Rey, Félix A; Bosch, Berend-Jan; Veesler, David

    2017-01-01

    Coronaviruses recently emerged as major human pathogens causing outbreaks of severe acute respiratory syndrome and Middle-East respiratory syndrome. They utilize the spike (S) glycoprotein anchored in the viral envelope to mediate host attachment and fusion of the viral and cellular membranes to

  11. Resting-state fMRI revealed different brain activities responding to valproic acid and levetiracetam in benign epilepsy with central-temporal spikes

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Qirui; Zhang, Zhiqiang; Xu, Qiang; Wu, Han; Li, Zhipeng; Lu, Guangming [Nanjing University School of Medicine, Department of Medical Imaging, Jinling Hospital, Nanjing (China); Yang, Fang; Li, Qian [Nanjing University School of Medicine, Department of Neurology, Jinling Hospital, Nanjing (China); Hu, Zheng [Nanjing Children's Hospital, Department of Neurology, Nanjing (China); Mantini, Dante [Faculty of Kinesiology and Rehabilitation Sciences, KU Leuven (Belgium); Li, Kai [Suzhou University, Laboratory of Molecular Medicine, Suzhou (China)

    2017-05-15

    Our aim was to investigate regional differences in brain activity in response to antiepileptic drug (AED) medication in benign epilepsy with central-temporal spikes (BECTS), using resting-state functional magnetic resonance imaging (fMRI). Fifty-seven patients with BECTS underwent resting-state fMRI scans after receiving either valproic acid (VPA) (n = 15), levetiracetam (LEV) (n = 21), or no medication (n = 21). The fMRI regional homogeneity (ReHo) parameter was compared among the three groups of patients and was correlated with the total dose of AED in the two medicated groups. Compared with patients on no medication, patients receiving either VPA or LEV showed decreased ReHo in the central-temporal region, frontal cortex, and thalamus. In particular, the VPA group showed a greater ReHo decrease in the thalamus and a milder decrease in the cortices and caudate heads compared with the LEV group. In addition, the VPA group demonstrated a negative correlation between ReHo values in the central-temporal region and medication dose. Both VPA and LEV inhibit resting-state neural activity in the central-temporal region, which is the main epileptogenic focus of BECTS. VPA reduced brain activity in the cortical epileptogenic regions and thalamus evenly, whereas LEV reduced brain activity predominantly in the cortices. Interestingly, VPA showed a cumulative effect on inhibiting brain activity in the epileptogenic regions in BECTS. (orig.)

  12. Active voltammetric microsensors with neural signal processing.

    Energy Technology Data Exchange (ETDEWEB)

    Vogt, M. C.

    1998-12-11

    Many industrial and environmental processes, including bioremediation, would benefit from the feedback and control information provided by a local multi-analyte chemical sensor. For most processes, such a sensor would need to be rugged enough to be placed in situ for long-term remote monitoring, and inexpensive enough to be fielded in useful numbers. The multi-analyte capability is difficult to obtain from common passive sensors, but can be provided by an active device that produces a spectrum-type response. Such new active gas microsensor technology has been developed at Argonne National Laboratory. The technology couples an electrocatalytic ceramic-metallic (cermet) microsensor with a voltammetric measurement technique and advanced neural signal processing. It has been demonstrated to be flexible, rugged, and very economical to produce and deploy. Both narrow-interest detectors and wide-spectrum instruments have been developed around this technology. Much of this technology's strength lies in the active measurement technique employed. The technique involves applying voltammetry to a miniature electrocatalytic cell to produce unique chemical "signatures" from the analytes. These signatures are processed with neural pattern recognition algorithms to identify and quantify the components in the analyte. The neural signal processing allows for innovative sampling and analysis strategies to be employed with the microsensor. In most situations, the whole response signature from the voltammogram can be used to identify, classify, and quantify an analyte, without dissecting it into component parts. This allows an instrument to be calibrated once for a specific gas or mixture of gases by simple exposure to a multi-component standard rather than by a series of individual gases. The sampled unknown analytes can vary in composition or in concentration; the calibration, sensing, and processing methods of these active voltammetric microsensors can

  13. Unbiased and robust quantification of synchronization between spikes and local field potential.

    Science.gov (United States)

    Li, Zhaohui; Cui, Dong; Li, Xiaoli

    2016-08-30

    In neuroscience, relating the spiking activity of individual neurons to the local field potential (LFP) of neural ensembles is an increasingly useful approach for studying rhythmic neuronal synchronization. Many methods have been proposed to measure the strength of the association between spikes and rhythms in the LFP recordings, and most existing measures depend on the total number of spikes. In the present work, we introduce a robust approach for quantifying spike-LFP synchronization which performs reliably for limited samples of data. The measure is termed spike-triggered correlation matrix synchronization (SCMS); it takes LFP segments centered on each spike as multi-channel signals and calculates the index of spike-LFP synchronization by constructing a correlation matrix. Simulations based on artificial data show that the SCMS output is almost independent of the sample size. This property is of crucial importance when making comparisons between different experimental conditions. When applied to actual neuronal data recorded from the monkey primary visual cortex, the spike-LFP synchronization strength shows orientation selectivity to drifting gratings. In comparison to another unbiased method, pairwise phase consistency (PPC), numerical simulations show that the proposed SCMS behaves better for noisy spike trains. This study demonstrates the basic idea and calculation process of the SCMS method. Considering its unbiasedness and robustness, the measure is well suited to characterizing the synchronization between spike trains and rhythms present in the LFP. Copyright © 2016 Elsevier B.V. All rights reserved.
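
    The core of the approach, collecting spike-triggered LFP segments and reducing their correlation matrix to a single synchronization index, can be sketched as follows. The eigenvalue-based normalization used here is one plausible way to form such an index and may differ from the published SCMS estimator; the window length and all names are illustrative.

        import numpy as np

        def scms_index(lfp, spike_times, fs, half_win=0.1):
            """Sketch of a spike-triggered correlation-matrix synchronization measure.

            lfp         : 1-D array containing the local field potential
            spike_times : spike times in seconds
            fs          : sampling rate in Hz
            half_win    : half-width of the LFP segment taken around each spike (s)
            """
            half = int(half_win * fs)
            segments = []
            for t in spike_times:
                i = int(round(t * fs))
                if i - half >= 0 and i + half < len(lfp):
                    segments.append(lfp[i - half:i + half])   # one row per spike
            segments = np.asarray(segments)
            if len(segments) < 2:
                return np.nan
            corr = np.corrcoef(segments)                      # correlations between segments
            lam_max = np.max(np.linalg.eigvalsh(corr))        # largest eigenvalue
            # normalize so that ~0 means unrelated segments and 1 means identical segments
            return (lam_max - 1.0) / (len(segments) - 1.0)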

  14. Estimating the correlation between bursty spike trains and local field potentials.

    Science.gov (United States)

    Li, Zhaohui; Ouyang, Gaoxiang; Yao, Li; Li, Xiaoli

    2014-09-01

    To further understand rhythmic neuronal synchronization, an increasingly useful method is to determine the relationship between the spiking activity of individual neurons and the local field potentials (LFPs) of neural ensembles. Spike field coherence (SFC) is a widely used method for measuring the synchronization between spike trains and LFPs. However, due to the strong dependency of SFC on the burst index, it is not suitable for analyzing the relationship between bursty spike trains and LFPs, particularly in high frequency bands. To address this issue, we developed a method called weighted spike field correlation (WSFC), which uses the first spike in each burst multiple times to estimate the relationship. In the calculation, the number of times that the first spike is used is equal to the spike count of the burst. The performance of this method was demonstrated using simulated bursty spike trains and LFPs, which comprised sinusoids with different frequencies, amplitudes, and phases. The method was also used to estimate the correlation between pyramidal cells in the hippocampus and gamma oscillations in behaving rats. Analyses using simulated and real data demonstrated that the WSFC method is a promising measure for estimating the correlation between bursty spike trains and high frequency LFPs. Copyright © 2014 Elsevier Ltd. All rights reserved.
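
    The weighting idea behind WSFC, counting the first spike of each burst as many times as the burst has spikes, can be sketched as below. The burst-detection threshold, frequency band, and the use of a weighted resultant-vector (phase consistency) measure are assumptions made for illustration; the published estimator may differ in detail.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def wsfc_sketch(lfp, spike_times, fs, band=(30.0, 80.0), isi_burst=0.01):
            """Weight the first spike of each burst by the burst's spike count and return
            the weighted phase consistency of the band-limited LFP at those spikes."""
            # band-pass the LFP and extract its instantaneous phase
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            phase = np.angle(hilbert(filtfilt(b, a, lfp)))

            spike_times = np.sort(np.asarray(spike_times))
            isis = np.diff(spike_times)
            burst_starts, weights = [], []
            start, count = spike_times[0], 1
            for k, isi in enumerate(isis):
                if isi <= isi_burst:                  # still inside the current burst
                    count += 1
                else:                                 # burst (or single spike) ended
                    burst_starts.append(start)
                    weights.append(count)
                    start, count = spike_times[k + 1], 1
            burst_starts.append(start)
            weights.append(count)

            idx = np.clip((np.asarray(burst_starts) * fs).astype(int), 0, len(lfp) - 1)
            w = np.asarray(weights, dtype=float)
            # length of the weighted resultant vector of spike phases (0 = none, 1 = perfect locking)
            return np.abs(np.sum(w * np.exp(1j * phase[idx]))) / np.sum(w)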

  15. Olfactory learning without the mushroom bodies: Spiking neural network models of the honeybee lateral antennal lobe tract reveal its capacities in odour memory tasks of varied complexities.

    Science.gov (United States)

    MaBouDi, HaDi; Shimazaki, Hideaki; Giurfa, Martin; Chittka, Lars

    2017-06-01

    The honeybee olfactory system is a well-established model for understanding functional mechanisms of learning and memory. Olfactory stimuli are first processed in the antennal lobe, and then transferred to the mushroom body and the lateral horn through dual pathways termed the medial and lateral antennal lobe tracts (m-ALT and l-ALT). Recent studies reported that honeybees can perform elemental learning by associating an odour with a reward signal even after lesions in the m-ALT or blocking of the mushroom bodies. To test the hypothesis that the lateral pathway (l-ALT) is sufficient for elemental learning, we modelled local computation within glomeruli in the antennal lobes, with axons of projection neurons connecting to a decision neuron (LHN) in the lateral horn. We show that inhibitory spike-timing dependent plasticity (modelling non-associative plasticity by exposure to different stimuli) in the synapses from local neurons to projection neurons decorrelates the projection neurons' outputs. The strength of the decorrelations is regulated by global inhibitory feedback within the antennal lobes to the projection neurons. By additionally modelling octopaminergic modification of synaptic plasticity among local neurons in the antennal lobes and of the projection neuron-to-LHN connections, the model can discriminate and generalize olfactory stimuli. Although positive patterning can be accounted for by the l-ALT model, negative patterning requires further processing and mushroom body circuits. Thus, our model explains several, but not all, types of associative olfactory learning and generalization by a few neural layers of odour processing in the l-ALT. As an outcome of the combination between non-associative and associative learning, the modelling approach allows us to link changes in the structural organization of honeybees' antennal lobes with their behavioural performances over the course of their life.

  16. Identifying Emotions on the Basis of Neural Activation.

    Directory of Open Access Journals (Sweden)

    Karim S Kassam

    Full Text Available We attempt to determine the discriminability and organization of neural activation corresponding to the experience of specific emotions. Method actors were asked to self-induce nine emotional states (anger, disgust, envy, fear, happiness, lust, pride, sadness, and shame) while in an fMRI scanner. Using a Gaussian Naïve Bayes pooled variance classifier, we demonstrate the ability to identify specific emotions experienced by an individual at well over chance accuracy on the basis of: (1) neural activation of the same individual in other trials, (2) neural activation of other individuals who experienced similar trials, and (3) neural activation of the same individual to a qualitatively different type of emotion induction. Factor analysis identified valence, arousal, sociality, and lust as dimensions underlying the activation patterns. These results suggest a structure for neural representations of emotion and inform theories of emotional processing.

  17. Unsupervised clustering with spiking neurons by sparse temporal coding and multi-layer RBF networks

    NARCIS (Netherlands)

    S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)

    2000-01-01

    We demonstrate that spiking neural networks encoding information in spike times are capable of computing and learning clusters from realistic data. We show how a spiking neural network based on spike-time coding and Hebbian learning can successfully perform unsupervised clustering on

  18. Stress-induced impairment of a working memory task: role of spiking rate and spiking history predicted discharge.

    Science.gov (United States)

    Devilbiss, David M; Jenison, Rick L; Berridge, Craig W

    2012-01-01

    Stress, pervasive in society, contributes to over half of all workplace accidents a year and over time can contribute to a variety of psychiatric disorders including depression, schizophrenia, and post-traumatic stress disorder. Stress impairs higher cognitive processes that are dependent on the prefrontal cortex (PFC) and that involve the maintenance and integration of information over extended periods, including working memory and attention. Substantial evidence has demonstrated a relationship between patterns of PFC neuron spiking activity (action-potential discharge) and components of delayed-response tasks used to probe PFC-dependent cognitive function in rats and monkeys. During delay periods of these tasks, persistent spiking activity is posited to be essential for the maintenance of information for working memory and attention. However, the degree to which stress-induced impairment in PFC-dependent cognition involves changes in task-related spiking rates or the ability of PFC neurons to retain information over time remains unknown. In the current study, spiking activity was recorded from the medial PFC of rats performing a delayed-response task of working memory during acute noise stress (93 dB). Spike history-predicted discharge (SHPD) for PFC neurons was quantified as a measure of the degree to which ongoing neuronal discharge can be predicted by past spiking activity and reflects the degree to which past information is retained by these neurons over time. We found that PFC neuron discharge is predicted by their past spiking patterns for nearly one second. Acute stress impaired SHPD, selectively during delay intervals of the task, and simultaneously impaired task performance. Despite the reduction in delay-related SHPD, stress increased delay-related spiking rates. These findings suggest that neural codes utilizing SHPD within PFC networks likely reflect an additional important neurophysiological mechanism for the maintenance of past information over time.
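
    Spike-history predictability of the kind captured by SHPD is commonly estimated with a history-dependent point-process (GLM) model. The sketch below fits a Bernoulli GLM that predicts a binned spike train from its own recent history and reports the per-bin log-likelihood gain over a constant-rate model; it is a generic history-GLM illustration, not necessarily the estimator used in the study, and all names are hypothetical.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def spike_history_predictability(spikes, n_lags=50):
            """spikes: binary spike counts per time bin (e.g., 1-ms or 10-ms bins).
            Returns the log-likelihood gain of a history model over a constant-rate
            model (nats per bin) and the fitted history filter."""
            spikes = np.asarray(spikes, dtype=float)
            # design matrix: the n_lags most recent bins preceding each time bin
            X = np.column_stack([np.roll(spikes, lag) for lag in range(1, n_lags + 1)])
            X, y = X[n_lags:], spikes[n_lags:]          # drop bins with wrapped-around history
            model = LogisticRegression(max_iter=1000).fit(X, y)
            p = model.predict_proba(X)[:, 1]
            eps = 1e-12
            ll_hist = np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
            r = y.mean()                                # constant-rate (homogeneous) benchmark
            ll_const = np.mean(y * np.log(r + eps) + (1 - y) * np.log(1 - r + eps))
            return ll_hist - ll_const, model.coef_.ravel()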

  19. Neuronal communication: firing spikes with spikes.

    Science.gov (United States)

    Brecht, Michael

    2012-08-21

    Spikes of single cortical neurons can exert powerful effects even though most cortical synapses are too weak to fire postsynaptic neurons. A recent study combining single-cell stimulation with population imaging has visualized in vivo postsynaptic firing in genetically identified target cells. The results confirm predictions from in vitro work and might help to understand how the brain reads single-neuron activity. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Computing with Spiking Neuron Networks

    NARCIS (Netherlands)

    H. Paugam-Moisy; S.M. Bohte (Sander); G. Rozenberg; T.H.W. Baeck (Thomas); J.N. Kok (Joost)

    2012-01-01

    Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neuroscience, they derive their strength and interest from an accurate modeling of synaptic interactions

  1. Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems

    Directory of Open Access Journals (Sweden)

    Emre eNeftci

    2014-01-01

    Full Text Available Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate-and-Fire (I&F) neurons, that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The reverberating activity of the network replaces the discrete steps of the CD algorithm, while Spike-Timing Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
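
    The online, asynchronous weight updates referred to above rely on pairwise STDP driven by pre- and postsynaptic spike events. The sketch below implements a standard trace-based STDP rule for a single synapse; the event-driven CD scheme additionally gates such updates between data and reconstruction phases, which is omitted here, and all parameter values are illustrative.

        import numpy as np

        def stdp_online(pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
                        tau_plus=0.020, tau_minus=0.020, w0=0.0):
            """Process pre- and postsynaptic spike events in time order and apply
            exponential-trace STDP: post-after-pre potentiates, pre-after-post depresses."""
            events = sorted([(t, "pre") for t in pre_spikes] +
                            [(t, "post") for t in post_spikes])
            w, x_pre, x_post, t_last = w0, 0.0, 0.0, 0.0
            for t, kind in events:
                dt = t - t_last
                x_pre *= np.exp(-dt / tau_plus)      # decay both traces to the current time
                x_post *= np.exp(-dt / tau_minus)
                if kind == "pre":
                    x_pre += 1.0
                    w -= a_minus * x_post            # pre following post -> depression
                else:
                    x_post += 1.0
                    w += a_plus * x_pre              # post following pre -> potentiation
                t_last = t
            return w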

  2. Mdm2 mediates FMRP- and Gp1 mGluR-dependent protein translation and neural network activity.

    Science.gov (United States)

    Liu, Dai-Chi; Seimetz, Joseph; Lee, Kwan Young; Kalsotra, Auinash; Chung, Hee Jung; Lu, Hua; Tsai, Nien-Pei

    2017-10-15

    Activating Group 1 (Gp1) metabotropic glutamate receptors (mGluRs), including mGluR1 and mGluR5, elicits translation-dependent neural plasticity mechanisms that are crucial to animal behavior and circuit development. Dysregulated Gp1 mGluR signaling has been observed in numerous neurological and psychiatric disorders. However, the molecular pathways underlying Gp1 mGluR-dependent plasticity mechanisms are complex and have been elusive. In this study, we identified a novel mechanism through which Gp1 mGluR mediates protein translation and neural plasticity. Using a multi-electrode array (MEA) recording system, we showed that activating Gp1 mGluR elevates neural network activity, as demonstrated by increased spontaneous spike frequency and burst activity. Importantly, we validated that elevating neural network activity requires protein translation and is dependent on fragile X mental retardation protein (FMRP), the protein that is deficient in the most common inherited form of mental retardation and autism, fragile X syndrome (FXS). In an effort to determine the mechanism by which FMRP mediates protein translation and neural network activity, we demonstrated that a ubiquitin E3 ligase, murine double minute-2 (Mdm2), is required for Gp1 mGluR-induced translation and neural network activity. Our data showed that Mdm2 acts as a translation suppressor, and FMRP is required for its ubiquitination and down-regulation upon Gp1 mGluR activation. These data revealed a novel mechanism by which Gp1 mGluR and FMRP mediate protein translation and neural network activity, potentially through de-repressing Mdm2. Our results also introduce an alternative way for understanding altered protein translation and brain circuit excitability associated with Gp1 mGluR in neurological diseases such as FXS. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Multiple faces elicit augmented neural activity

    Directory of Open Access Journals (Sweden)

    Aina ePuce

    2013-06-01

    Full Text Available How do our brains respond when we are being watched by a group of people? Despite the large volume of literature devoted to face processing, this question has received very little attention. Here we measured the effects on the face-sensitive N170 and other ERPs of viewing displays of one, two and three faces in two experiments. In Experiment 1, overall image brightness and contrast were adjusted to be constant, whereas in Experiment 2 the local contrast and brightness of individual faces were not manipulated. A robust positive-negative-positive (P100-N170-P250) ERP complex and an additional late positive ERP, the P400, were elicited by all stimulus types. As the number of faces in the display increased, N170 amplitude increased for both stimulus sets, and latency increased in Experiment 2. P100 latency and P250 amplitude were affected by changes in overall brightness and contrast, but not by the number of faces in the display per se. In Experiment 1, when overall brightness and contrast were adjusted to be constant, later ERP (P250 and P400) latencies showed differences as a function of hemisphere. Hence, our data indicate that the N170 increases in magnitude when multiple faces are seen, apparently impervious to basic low-level stimulus features including stimulus size. Outstanding questions remain regarding category-sensitive neural activity elicited by viewing multiple items of stimulus categories other than faces.

  4. The Effects of GABAergic Polarity Changes on Episodic Neural Network Activity in Developing Neural Systems

    Directory of Open Access Journals (Sweden)

    Wilfredo Blanco

    2017-09-01

    Full Text Available Early in development, neural systems have primarily excitatory coupling, where even GABAergic synapses are excitatory. Many of these systems exhibit spontaneous episodes of activity that have been characterized through both experimental and computational studies. As development progresses, the neural system goes through many changes, including synaptic remodeling, intrinsic plasticity in ion channel expression, and a transformation of GABAergic synapses from excitatory to inhibitory. What effect each of these, and other, changes has on the network behavior is hard to know from experimental studies since they all happen in parallel. One advantage of a computational approach is the ability to study developmental changes in isolation. Here, we examine the effects of the GABAergic synapse polarity change on the spontaneous activity of both a mean-field and a neural network model that has both glutamatergic and GABAergic coupling, representative of a developing neural network. We find some intuitive behavioral changes as the GABAergic neurons go from excitatory to inhibitory, shared by both models, such as a decrease in the duration of episodes. We also find some paradoxical changes in the activity that are only present in the neural network model. In particular, we find that during early development the inter-episode durations become longer on average, while later in development they become shorter. In addressing this unexpected finding, we uncover a priming effect that is particularly important for a small subset of neurons, called the “intermediate neurons.” We characterize these neurons and demonstrate why they are crucial to episode initiation, and why the paradoxical behavioral change results from priming of these neurons. The study illustrates how even arguably the simplest of developmental changes that occur in neural systems can produce non-intuitive behaviors. It also makes predictions about neural network behavioral changes

  5. Activity-dependent neural plasticity from bench to bedside.

    Science.gov (United States)

    Ganguly, Karunesh; Poo, Mu-Ming

    2013-10-30

    Much progress has been made in understanding how behavioral experience and neural activity can modify the structure and function of neural circuits during development and in the adult brain. Studies of physiological and molecular mechanisms underlying activity-dependent plasticity in animal models have suggested potential therapeutic approaches for a wide range of brain disorders in humans. Physiological and electrical stimulations as well as plasticity-modifying molecular agents may facilitate functional recovery by selectively enhancing existing neural circuits or promoting the formation of new functional circuits. Here, we review the advances in basic studies of neural plasticity mechanisms in developing and adult nervous systems and current clinical treatments that harness neural plasticity, and we offer perspectives on future development of plasticity-based therapy. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Interactions between procedural learning and cocaine exposure alter spontaneous and cortically-evoked spike activity in the dorsal striatum

    Directory of Open Access Journals (Sweden)

    Janie eOndracek

    2010-12-01

    Full Text Available We have previously shown that cocaine enhances gene regulation in the sensorimotor striatum associated with procedural learning in a running-wheel paradigm. Here we assessed whether cocaine produces enduring modifications of learning-related changes in striatal neuron activity, using single-unit recordings in anesthetized rats 1 day after wheel training. Spontaneous and cortically-evoked spike activity was compared between groups treated with cocaine or vehicle immediately prior to the running-wheel training or to placement in a locked wheel (control condition). We found that wheel training in vehicle-treated rats increased the average firing rate of spontaneously active neurons without changing the relative proportion of active to quiescent cells. In contrast, in rats trained under the influence of cocaine, the proportion of spontaneously firing to quiescent cells was significantly greater than in vehicle-treated, trained rats. However, this effect was associated with a lower average firing rate in these spontaneously active cells, suggesting that training under the influence of cocaine recruited additional low-firing cells. Measures of cortically-evoked activity revealed a second interaction between cocaine treatment and wheel training, namely, a cocaine-induced decrease in spike onset latency in control rats (locked wheel). This facilitatory effect of cocaine was abolished when rats were trained in the running wheel during cocaine action. These findings highlight important interactions between cocaine and procedural learning, which act to modify population firing activity and the responsiveness of striatal neurons to excitatory inputs. Moreover, these effects were found 24 hours after the training and last drug exposure, indicating that cocaine exposure during the learning phase triggers long-lasting changes in synaptic plasticity in the dorsal striatum. Such changes may contribute to the transition from recreational to habitual or compulsive drug

  7. Embedding responses in spontaneous neural activity shaped through sequential learning.

    Science.gov (United States)

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2013-01-01

    Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using simple Hebbian-type learning, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a similar manner to those designed in our previous study.

  8. Embedding responses in spontaneous neural activity shaped through sequential learning.

    Directory of Open Access Journals (Sweden)

    Tomoki Kurikawa

    Full Text Available Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using simple Hebbian-type learning, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a similar manner to those designed in our previous study.

  9. PySpike - A Python library for analyzing spike train synchrony

    CERN Document Server

    Mulansky, Mario

    2016-01-01

    Understanding how the brain functions is one of the biggest challenges of our time. The analysis of experimentally recorded neural firing patterns (spike trains) plays a crucial role in addressing this problem. Here, the PySpike library is introduced, a Python package for spike train analysis providing parameter-free and time-scale independent measures of spike train synchrony. It allows one to compute bi- and multivariate dissimilarity profiles, averaged values and bivariate matrices. Although mainly focused on neuroscience, PySpike can also be applied in other contexts such as climate research or the social sciences. The package is available as open source on GitHub and PyPI.
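
    A short usage example is given below. It assumes the SpikeTrain, isi_distance, spike_distance, spike_sync and spike_profile interface documented for recent PySpike releases; the spike times and interval edges are toy values.

        import pyspike as spk

        # two toy spike trains defined on the observation interval [0, 4] seconds
        st1 = spk.SpikeTrain([0.1, 0.9, 1.8, 2.7, 3.5], edges=(0.0, 4.0))
        st2 = spk.SpikeTrain([0.2, 1.0, 2.0, 2.6, 3.6], edges=(0.0, 4.0))

        print("ISI distance:   ", spk.isi_distance(st1, st2))    # time-scale independent dissimilarity
        print("SPIKE distance: ", spk.spike_distance(st1, st2))
        print("SPIKE synchrony:", spk.spike_sync(st1, st2))

        # time-resolved dissimilarity profile and its average
        profile = spk.spike_profile(st1, st2)
        print("average SPIKE profile:", profile.avrg())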

  10. On the asynchronously continuous control of mobile robot movement by motor cortical spiking activity.

    Science.gov (United States)

    Xu, Zhiming; So, Rosa Q; Toe, Kyaw Kyar; Ang, Kai Keng; Guan, Cuntai

    2014-01-01

    This paper presents an asynchronous intracortical brain-computer interface (BCI) that allows the subject to continuously drive a mobile robot. Such a system has important implications for enabling disabled patients to move around. By carefully designing a multiclass support vector machine (SVM), the subject's self-paced, instantaneous movement intents are continuously decoded to control the mobile robot. In particular, we studied the stability of the neural representation of movement directions. Experimental results from a nonhuman primate showed that overt movement directions were stably represented in the ensemble of recorded units, and our SVM classifier could successfully decode such movements continuously along the desired movement path. However, the stop state used for self-paced control was not stably represented and its neural representation could drift.
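
    The decoding pipeline, a multiclass SVM applied to binned ensemble firing rates, can be sketched with scikit-learn as follows. The Poisson "spike count" features and the four direction labels are random placeholders standing in for real motor cortical data (the actual system also handles a self-paced stop state); the kernel and regularization settings are illustrative choices, not those of the paper.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.poisson(5.0, size=(600, 40)).astype(float)   # placeholder spike counts: 600 windows x 40 units
        y = rng.integers(0, 4, size=600)                     # placeholder movement-direction labels

        # standardize firing-rate features, then fit a one-vs-rest RBF-kernel SVM
        decoder = make_pipeline(StandardScaler(),
                                SVC(kernel="rbf", C=1.0, decision_function_shape="ovr"))
        decoder.fit(X[:500], y[:500])
        print("held-out accuracy:", decoder.score(X[500:], y[500:]))   # ~chance for random placeholders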

  11. Mapping spikes to sensations

    Directory of Open Access Journals (Sweden)

    Maik Christopher Stüttgen

    2011-11-01

    Full Text Available Single-unit recordings conducted during perceptual decision-making tasks have yielded tremendous insights into the neural coding of sensory stimuli. In such experiments, detection or discrimination behavior (the psychometric data) is observed in parallel with spike trains in sensory neurons (the neurometric data). Frequently, candidate neural codes for information read-out are pitted against each other by transforming the neurometric data in some way and asking which code's performance most closely approximates the psychometric performance. The code that matches the psychometric performance best is retained as a viable candidate and the others are rejected. In following this strategy, psychometric data are often considered to provide an unbiased measure of perceptual sensitivity. It is rarely acknowledged that psychometric data result from a complex interplay of sensory and non-sensory processes and that neglect of these processes may result in misestimating psychophysical sensitivity. This, in turn, may lead to erroneous conclusions regarding the adequacy of candidate neural codes. In this review, we first discuss requirements on the neural data for a subsequent neurometric-psychometric comparison. We then focus on different psychophysical tasks for the assessment of detection and discrimination performance and the cognitive processes that may underlie their execution. We discuss further factors that may compromise psychometric performance and how they can be detected or avoided. We believe that these considerations point to shortcomings in our understanding of the processes underlying perceptual decisions, and therefore offer potential for future research.

  12. Neural Activity Reveals Preferences Without Choices

    Science.gov (United States)

    Smith, Alec; Bernheim, B. Douglas; Camerer, Colin

    2014-01-01

    We investigate the feasibility of inferring the choices people would make (if given the opportunity) based on their neural responses to the pertinent prospects when they are not engaged in actual decision making. The ability to make such inferences is of potential value when choice data are unavailable, or limited in ways that render standard methods of estimating choice mappings problematic. We formulate prediction models relating choices to “non-choice” neural responses and use them to predict out-of-sample choices for new items and for new groups of individuals. The predictions are sufficiently accurate to establish the feasibility of our approach. PMID:25729468

  13. A reanalysis of “Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons” [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Rainer Engelken

    2016-08-01

    Full Text Available Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes, and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate-and-fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single-unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find the expected hallmarks of a chaotic instability in the rate network: for supercritical coupling strengths near the transition point, the autocorrelation time diverges; for subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, in contrast, we found that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates with increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular, there is no indication of a corresponding chaotic instability in the spiking network.

  14. Activity-dependent plasticity in the isolated embryonic avian brainstem following manipulations of rhythmic spontaneous neural activity.

    Science.gov (United States)

    Vincen-Brown, Michael A; Revill, Ann L; Pilarski, Jason Q

    2016-07-15

    When rhythmic spontaneous neural activity (rSNA) first appears in the embryonic chick brainstem and cranial nerve motor axons, it is principally driven by nicotinic neurotransmission (NT). At this early age, the nicotinic acetylcholine receptor (nAChR) agonist nicotine is known to critically disrupt rSNA at low concentrations (0.1-0.5 μM), which are levels that mimic the blood plasma levels of a fetus following maternal cigarette smoking. Thus, we quantified the effect of persistent exposure to exogenous nicotine on rSNA using an in vitro developmental model. We found that rSNA was eliminated by continuous bath application of exogenous nicotine, but recovered within 6-12 h despite the persistent activation and desensitization of nAChRs. During the recovery period, rSNA was critically driven by chloride-mediated membrane depolarization instead of nicotinic NT. To test whether this observed compensation was unique to the antagonism of nicotinic NT or whether the loss of spiking behavior also played a role, we eliminated rSNA by lowering overall excitatory drive with a low-[K+]o superfusate. In this context, rSNA again recovered, although the recovery time was much quicker, and it exhibited a lower frequency, longer duration, and an increase in the number of bursts per episode when compared to control embryos. Importantly, we show that the main compensatory response to lowering overall excitatory drive, similar to that during nicotinic block, is a potentiation of chloride-mediated membrane depolarization. These results support increasing evidence that early neural circuits sense spiking behavior to maintain primordial bioelectric rhythms. Understanding the nature of developmental plasticity in the nervous system, especially versions that preserve rhythmic behaviors following clinically meaningful environmental stimuli, both normal and pathological, will require similar studies to determine the consequences of feedback compensation at more mature chronological ages

  15. Optical imaging of neuronal activity and visualization of fine neural structures in non-desheathed nervous systems.

    Directory of Open Access Journals (Sweden)

    Christopher John Goldsmith

    Full Text Available Locating circuit neurons and recording from them with single-cell resolution is a prerequisite for studying neural circuits. Determining neuron location can be challenging even in small nervous systems because neurons are densely packed, found in different layers, and are often covered by ganglion and nerve sheaths that impede access for recording electrodes and neuronal markers. We revisited the voltage-sensitive dye RH795 for its ability to stain and record neurons through the ganglion sheath. Bath application of RH795 stained neuronal membranes in cricket, earthworm and crab ganglia without removing the ganglion sheath, revealing neuron cell body locations in different ganglion layers. Using the pyloric and gastric mill central pattern generating neurons in the stomatogastric ganglion (STG) of the crab Cancer borealis, we found that RH795 permeated the ganglion without major residue in the sheath and brightly stained somatic, axonal and dendritic membranes. Visibility improved significantly in comparison to unstained ganglia, allowing identification of the somata locations and numbers of most STG neurons. RH795 also stained axons and varicosities in non-desheathed nerves, and it revealed the location of sensory cell bodies in peripheral nerves. Importantly, the spike activity of the sensory neuron AGR, which influences the STG motor patterns, remained unaffected by RH795, while desheathing caused significant changes in AGR activity. With respect to recording neural activity, RH795 allowed us to optically record membrane potential changes of sub-sheath neuronal membranes without impairing sensory activity. The signal-to-noise ratio was comparable with that previously observed in desheathed preparations and sufficiently high to identify neurons in single-sweep recordings and synaptic events after spike-triggered averaging. In conclusion, RH795 enabled staining and optical recording of neurons through the ganglion sheath and is therefore both a

  16. The Right Delay: Detecting Specific Spike Patterns with STDP and Axonal Conduction Delays

    NARCIS (Netherlands)

    Datadien, A.H.R.; Haselager, W.F.G.; Sprinkhuizen-Kuyper, I.G.; Dobnikar, A.; Lotric, U.; Ster, B.

    2011-01-01

    Axonal conduction delays should not be ignored in simulations of spiking neural networks. Here it is shown that by using axonal conduction delays, neurons can display sensitivity to a specific spatio-temporal spike pattern. By using delays that complement the firing times in a pattern, spikes can

  17. A generative spike train model with time-structured higher order correlations

    Directory of Open Access Journals (Sweden)

    James eTrousdale

    2013-07-01

    Full Text Available Emerging technologies are revealing the spiking activity in ever larger neural ensembles. Frequently, this spiking is far from independent, with correlations in the spike times of different cells. Understanding how such correlations impact the dynamics and function of neural ensembles remains an important open problem. Here we describe a new, generative model for correlated spike trains that can exhibit many of the features observed in data. Extending prior work in mathematical finance, this generalized thinning and shift (GTaS) model creates marginally Poisson spike trains with diverse temporal correlation structures. We give several examples which highlight the model's flexibility and utility. For instance, we use it to examine how a neural network responds to highly structured patterns of inputs. We then show that the GTaS model is analytically tractable, and derive cumulant densities of all orders in terms of model parameters. The GTaS framework can therefore be an important tool in the experimental and theoretical exploration of neural dynamics.
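
    A much-simplified generative sketch of the thinning-and-shift construction is shown below: spikes of a mother Poisson process are assigned to randomly chosen subsets of cells, and each copy is delayed by a subset- and cell-specific shift. The subset probabilities and shift patterns are toy values, and the sketch omits the paper's treatment of random shifts and the cumulant calculations.

        import numpy as np

        def gtas_sketch(rate, T, n_cells, subset_probs, shifts, rng=None):
            """subset_probs : dict mapping a tuple of cell indices -> probability
                              (probabilities may sum to less than 1; the remainder deletes the spike)
            shifts        : dict mapping the same tuples -> per-cell time shifts"""
            rng = rng or np.random.default_rng()
            mother = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))   # mother Poisson process
            subsets = list(subset_probs)
            probs = np.array([subset_probs[s] for s in subsets])
            probs = np.append(probs, 1.0 - probs.sum())                    # deletion probability
            trains = [[] for _ in range(n_cells)]
            for t in mother:
                k = rng.choice(len(subsets) + 1, p=probs)
                if k == len(subsets):
                    continue                                               # spike thinned away
                for cell, d in zip(subsets[k], shifts[subsets[k]]):
                    trains[cell].append(t + d)                             # shifted copy for this cell
            return [np.sort(tr) for tr in trains]

        # example: three cells with independent spikes plus precisely patterned triplet events
        trains = gtas_sketch(rate=20.0, T=10.0, n_cells=3,
                             subset_probs={(0,): 0.3, (1,): 0.3, (2,): 0.3, (0, 1, 2): 0.1},
                             shifts={(0,): [0.0], (1,): [0.0], (2,): [0.0],
                                     (0, 1, 2): [0.0, 0.005, 0.010]})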

  18. Active Engine Mounting Control Algorithm Using Neural Network

    Directory of Open Access Journals (Sweden)

    Fadly Jashi Darsivan

    2009-01-01

    Full Text Available This paper proposes the application of a neural network as a controller to isolate engine vibration in an active engine mounting system. It has been shown that the NARMA-L2 neurocontroller has the ability to reject disturbances from a plant. The disturbances are assumed to be impulse and sinusoidal disturbances induced by the engine. The performance of the neural network controller is compared with conventional PD and PID controllers tuned using the Ziegler-Nichols method. The simulation results show that the neural network controller isolates engine vibration better than the conventional controllers.

  19. Patterns recognition of electric brain activity using artificial neural networks

    Science.gov (United States)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

    We present an approach for recognizing various cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and the experimental data, we propose a new classification of oscillating patterns in the human EEG using an artificial neural network approach. After learning, the artificial neural network reliably identified cube-recognition processes, for example, perception of left- or right-oriented Necker cubes with different edge intensities. We construct an artificial neural network based on a perceptron architecture and demonstrate its effectiveness in pattern recognition of the experimental EEG data.

  20. Test statistics for the identification of assembly neurons in parallel spike trains.

    Science.gov (United States)

    Picado Muiño, David; Borgelt, Christian

    2015-01-01

    In recent years numerous improvements have been made in multiple-electrode recordings (i.e., parallel spike-train recordings) and spike sorting to the extent that nowadays it is possible to monitor the activity of up to hundreds of neurons simultaneously. Due to these improvements it is now potentially possible to identify assembly activity (roughly understood as significant synchronous spiking of a group of neurons) from these recordings, which, if it can be demonstrated reliably, would significantly improve our understanding of neural activity and neural coding. However, several methodological problems remain when trying to do so and, among them, a principal one is the combinatorial explosion that one faces when considering all potential neuronal assemblies, since in principle every subset of the recorded neurons constitutes a candidate set for an assembly. We present several statistical tests to identify assembly neurons (i.e., neurons that participate in a neuronal assembly) from parallel spike trains, with the aim of reducing the set of neurons to a relevant subset and in this way easing the task of identifying neuronal assemblies in further analyses. These tests are an improvement of those introduced in the work by Berger et al. (2010), based on additional features like spike weight or pairwise overlap and on alternative ways to identify spike coincidences (e.g., by avoiding time binning, which tends to lose information).

  1. Test Statistics for the Identification of Assembly Neurons in Parallel Spike Trains

    Directory of Open Access Journals (Sweden)

    David Picado Muiño

    2015-01-01

    Full Text Available In recent years numerous improvements have been made in multiple-electrode recordings (i.e., parallel spike-train recordings) and spike sorting to the extent that nowadays it is possible to monitor the activity of up to hundreds of neurons simultaneously. Due to these improvements it is now potentially possible to identify assembly activity (roughly understood as significant synchronous spiking of a group of neurons) from these recordings, which, if it can be demonstrated reliably, would significantly improve our understanding of neural activity and neural coding. However, several methodological problems remain when trying to do so and, among them, a principal one is the combinatorial explosion that one faces when considering all potential neuronal assemblies, since in principle every subset of the recorded neurons constitutes a candidate set for an assembly. We present several statistical tests to identify assembly neurons (i.e., neurons that participate in a neuronal assembly) from parallel spike trains with the aim of reducing the set of neurons to a relevant subset and in this way easing the task of identifying neuronal assemblies in further analyses. These tests are an improvement of those introduced in the work by Berger et al. (2010), based on additional features like spike weight or pairwise overlap and on alternative ways to identify spike coincidences (e.g., by avoiding time binning, which tends to lose information).
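
    The general flavor of such tests can be sketched as follows: count bin-free pairwise spike coincidences within a small tolerance window and compare the observed count against surrogates obtained by dithering spike times. This is only an illustration of the approach on synthetic data; the specific test statistics, spike weights, and overlap features of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def coincidences(t1, t2, tol=0.005):
    """Count bin-free coincidences: spikes of t1 that have a spike of t2
    within +/- tol seconds (avoids the information loss of time binning)."""
    idx = np.searchsorted(t2, t1)
    left = np.clip(idx - 1, 0, len(t2) - 1)
    right = np.clip(idx, 0, len(t2) - 1)
    nearest = np.minimum(np.abs(t1 - t2[left]), np.abs(t1 - t2[right]))
    return int(np.sum(nearest <= tol))

def dither_surrogate(t, jitter=0.02):
    """Surrogate spike train: each spike independently dithered in time."""
    return np.sort(t + rng.uniform(-jitter, jitter, t.size))

T = 60.0
base = np.sort(rng.uniform(0, T, 600))             # shared 'assembly' events
a = np.sort(np.concatenate([base[rng.random(600) < 0.4],
                            rng.uniform(0, T, 400)]))
b = np.sort(np.concatenate([base[rng.random(600) < 0.4],
                            rng.uniform(0, T, 400)]))

observed = coincidences(a, b)
null = [coincidences(dither_surrogate(a), b) for _ in range(200)]
p = (np.sum(np.array(null) >= observed) + 1) / 201
print(f"observed coincidences: {observed}, surrogate p-value ~ {p:.3f}")
```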

  2. High Accuracy Human Activity Monitoring using Neural network

    OpenAIRE

    Sharma, Annapurna; Lee, Young-Dong; Chung, Wan-Young

    2011-01-01

    This paper presents the design of a neural network for the classification of human activity. A triaxial accelerometer sensor, housed in a chest-worn sensor unit, has been used for capturing the acceleration of the associated movements. Acceleration data from all three axes were collected at a base station PC via a CC2420 2.4 GHz ISM-band radio (ZigBee wireless compliant), then processed and classified using MATLAB. A neural network approach for classification was used with an eye on theoretical a...

  3. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Directory of Open Access Journals (Sweden)

    Laureline Logiaco

    2015-08-01

    Full Text Available The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.
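
    The contrast between a spike-count decoder and a spike-timing-sensitive decoder can be illustrated with a toy nearest-centroid classifier on synthetic spike trains whose two "events" differ only in spike timing, not in total count. The rates, time scales, and decoder here are assumptions for illustration and stand in for the biologically plausible decoding circuits emulated in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_trial(event, T=0.6, dt=0.01):
    """Synthetic response: both events have the same mean spike count, but
    event 1 concentrates spikes early and event 0 late (timing difference)."""
    t = np.arange(0, T, dt)
    peak = 0.15 if event else 0.45
    rate = 40.0 * np.exp(-((t - peak) ** 2) / (2 * 0.05 ** 2)) + 5.0
    return rng.poisson(rate * dt)            # spike counts per 10 ms bin

def features(trial, pool_bins):
    """Counts pooled at a decoding time scale; pool_bins = len(trial)
    collapses the trial to a single spike count."""
    n = (len(trial) // pool_bins) * pool_bins
    return trial[:n].reshape(-1, pool_bins).sum(axis=1)

def nearest_centroid_accuracy(pool_bins, n_train=200, n_test=200):
    train = [(features(make_trial(e), pool_bins), e)
             for e in rng.integers(0, 2, n_train)]
    cents = {e: np.mean([f for f, ee in train if ee == e], axis=0)
             for e in (0, 1)}
    correct = 0
    for _ in range(n_test):
        e = rng.integers(0, 2)
        f = features(make_trial(e), pool_bins)
        pred = min(cents, key=lambda k: np.linalg.norm(f - cents[k]))
        correct += int(pred == e)
    return correct / n_test

print("spike-count decoder accuracy  :", nearest_centroid_accuracy(60))  # one 600 ms bin
print("100 ms timing decoder accuracy:", nearest_centroid_accuracy(10))  # 100 ms pooling
```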

  4. Age-Dependency of Location of Epileptic Foci in "Continuous Spike-and-Waves during Sleep": A Parallel to the Posterior-Anterior Trajectory of Slow Wave Activity.

    Science.gov (United States)

    Bölsterli Heinzle, Bigna Katrin; Bast, Thomas; Critelli, Hanne; Huber, Reto; Schmitt, Bernhard

    2017-02-01

    Epileptic encephalopathy with continuous spike-and-waves during sleep (CSWS) occurs during childhood and is characterized by an activation of spike wave complexes during slow wave sleep. The location of epileptic foci is variable, as is etiology. A relationship between the epileptic focus and age has been shown in various focal epilepsies following a posterior-anterior trajectory, and a link to brain maturation has been proposed. We hypothesize that in CSWS, maximal spike wave activity, corresponding to the epileptic focus, is related to age and shows a posterior-anterior evolution. In a retrospective cross-sectional study on CSWS (22 EEGs of 22 patients aged 3.1–13.5 years), the location of the epileptic focus is related to age and follows a posterior-anterior course. Younger patients are more likely to have posterior foci than older ones. We propose that the posterior-anterior trajectory of maximal spike waves in CSWS might reflect maturational changes of maximal expression of sleep slow waves, which follow a comparable course. Epileptic spike waves, that is, “hyper-synchronized slow waves” may occur at the place where the highest and therefore most synchronized slow waves meet brain tissue with an increased susceptibility to synchronization. Georg Thieme Verlag KG Stuttgart · New York.

  5. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates.

    Science.gov (United States)

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-07-20

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys.

  6. Proteolytic activation of the SARS-coronavirus spike protein: cutting enzymes at the cutting edge of antiviral research.

    Science.gov (United States)

    Simmons, Graham; Zmora, Pawel; Gierer, Stefanie; Heurich, Adeline; Pöhlmann, Stefan

    2013-12-01

    The severe acute respiratory syndrome (SARS) pandemic revealed that zoonotic transmission of animal coronaviruses (CoV) to humans poses a significant threat to public health and warrants surveillance and the development of countermeasures. The activity of host cell proteases, which cleave and activate the SARS-CoV spike (S) protein, is essential for viral infectivity and constitutes a target for intervention. However, the identities of the proteases involved have been unclear. Pioneer studies identified cathepsins and type II transmembrane serine proteases as cellular activators of SARS-CoV and demonstrated that several emerging viruses might exploit these enzymes to promote their spread. Here, we will review the proteolytic systems hijacked by SARS-CoV for S protein activation, we will discuss their contribution to viral spread in the host and we will outline antiviral strategies targeting these enzymes. This paper forms part of a series of invited articles in Antiviral Research on "From SARS to MERS: 10 years of research on highly pathogenic human coronaviruses." Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Proteolytic activation of the SARS-coronavirus spike protein: Cutting enzymes at the cutting edge of antiviral research

    Science.gov (United States)

    Simmons, Graham; Zmora, Pawel; Gierer, Stefanie; Heurich, Adeline; Pöhlmann, Stefan

    2013-01-01

    The severe acute respiratory syndrome (SARS) pandemic revealed that zoonotic transmission of animal coronaviruses (CoV) to humans poses a significant threat to public health and warrants surveillance and the development of countermeasures. The activity of host cell proteases, which cleave and activate the SARS-CoV spike (S) protein, is essential for viral infectivity and constitutes a target for intervention. However, the identities of the proteases involved have been unclear. Pioneer studies identified cathepsins and type II transmembrane serine proteases as cellular activators of SARS-CoV and demonstrated that several emerging viruses might exploit these enzymes to promote their spread. Here, we will review the proteolytic systems hijacked by SARS-CoV for S protein activation, we will discuss their contribution to viral spread in the host and we will outline antiviral strategies targeting these enzymes. This paper forms part of a series of invited articles in Antiviral Research on “From SARS to MERS: 10 years of research on highly pathogenic human coronaviruses.” PMID:24121034

  8. Positive mood enhances reward-related neural activity.

    Science.gov (United States)

    Young, Christina B; Nusslock, Robin

    2016-06-01

    Although behavioral research has shown that positive mood leads to desired outcomes in nearly every major life domain, no studies have directly examined the effects of positive mood on the neural processes underlying reward-related affect and goal-directed behavior. To address this gap, participants in the present fMRI study experienced either a positive (n = 20) or neutral (n = 20) mood induction and subsequently completed a monetary incentive delay task that assessed reward and loss processing. Consistent with prediction, positive mood elevated activity specifically during reward anticipation in corticostriatal neural regions that have been implicated in reward processing and goal-directed behavior, including the nucleus accumbens, caudate, lateral orbitofrontal cortex and putamen, as well as related paralimbic regions, including the anterior insula and ventromedial prefrontal cortex. These effects were not observed during reward outcome, loss anticipation or loss outcome. Critically, this is the first study to report that positive mood enhances reward-related neural activity. Our findings have implications for uncovering the neural mechanisms by which positive mood enhances goal-directed behavior, understanding the malleability of reward-related neural activity, and developing targeted treatments for psychiatric disorders characterized by deficits in reward processing. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  9. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-01-01

    Objective: Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach: To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results: To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance: DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250
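
    DataHigh itself is a Matlab GUI; the sketch below illustrates, in Python, the underlying idea of reducing trial-by-trial spike counts to a latent space and viewing one 2-d projection of it. PCA is used here as a simple stand-in for the dimensionality-reduction step, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic population activity: 50 neurons x 300 trials of spike counts whose
# co-fluctuations live mostly in a 5-dimensional latent space.
latent = rng.normal(size=(300, 5))
loading = rng.normal(size=(5, 50))
counts = rng.poisson(np.exp(0.3 * latent @ loading + 1.0))

# PCA as a simple stand-in for the dimensionality-reduction step.
X = counts - counts.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
latent_traj = X @ Vt[:8].T                 # 8 latent variables per trial

def project_2d(Z, axis_a=0, axis_b=1, angle=0.0):
    """One 2-d projection of the latent space; sweeping `angle` mixes a pair
    of latent axes, mimicking smooth navigation between projections."""
    c, s = np.cos(angle), np.sin(angle)
    return np.column_stack([c * Z[:, axis_a] + s * Z[:, axis_b],
                            -s * Z[:, axis_a] + c * Z[:, axis_b]])

view = project_2d(latent_traj, 0, 2, angle=0.3)
print("2-d view of latent activity, shape:", view.shape)
```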

  10. Novel Spiking Neuron-Astrocyte Networks based on nonlinear transistor-like models of tripartite synapses.

    Science.gov (United States)

    Valenza, Gaetano; Tedesco, Luciano; Lanata, Antonio; De Rossi, Danilo; Scilingo, Enzo Pasquale

    2013-01-01

    In this paper a novel and efficient computational implementation of a Spiking Neuron-Astrocyte Network (SNAN) is reported. Neurons are modeled according to the Izhikevich formulation, and the neuron-astrocyte interactions are treated as tripartite synapses and modeled with the previously proposed nonlinear transistor-like model. Concerning the learning rules, the original spike-timing-dependent plasticity is used for the neural part of the SNAN, whereas an ad hoc rule is proposed for the astrocyte part. SNAN performance is compared with a standard spiking neural network (SNN) and evaluated using the polychronization concept, i.e., the number of co-existing groups that spontaneously generate patterns of polychronous activity. The astrocyte-neuron ratio is the biologically inspired value of 1.5. The proposed SNAN shows a higher number of polychronous groups than the SNN, remarkably achieved for the whole duration of the simulation (24 hours).
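
    The neuronal part of such a network can be sketched with the standard Izhikevich model, as below for a small random excitatory-inhibitory population; the astrocyte compartment, the transistor-like tripartite synapse, and the learning rules of the paper are omitted, and the parameters are the standard textbook values rather than those of the SNAN.

```python
import numpy as np

rng = np.random.default_rng(5)

N_exc, N_inh = 80, 20
N = N_exc + N_inh
# Izhikevich parameters: regular-spiking excitatory, fast-spiking inhibitory.
a = np.r_[0.02 * np.ones(N_exc), 0.1 * np.ones(N_inh)]
b = np.r_[0.2 * np.ones(N_exc), 0.2 * np.ones(N_inh)]
c = np.r_[-65.0 * np.ones(N_exc), -65.0 * np.ones(N_inh)]
d = np.r_[8.0 * np.ones(N_exc), 2.0 * np.ones(N_inh)]
W = np.hstack([0.5 * rng.random((N, N_exc)), -1.0 * rng.random((N, N_inh))])

v = -65.0 * np.ones(N)
u = b * v
spikes = []
for t in range(1000):                       # 1 ms steps, 1 s of activity
    I = np.r_[5.0 * rng.normal(size=N_exc), 2.0 * rng.normal(size=N_inh)]
    fired = v >= 30.0
    spikes.append(np.where(fired)[0])
    v[fired] = c[fired]
    u[fired] += d[fired]
    I += W[:, fired].sum(axis=1)            # synaptic input from spikes
    for _ in range(2):                      # two 0.5 ms substeps for stability
        v += 0.5 * (0.04 * v ** 2 + 5.0 * v + 140.0 - u + I)
    u += a * (b * v - u)

total = sum(len(s) for s in spikes)
print(f"mean firing rate ~ {total / N:.1f} Hz")
```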

  11. Understanding the Implications of Neural Population Activity on Behavior

    Science.gov (United States)

    Briguglio, John

    Learning how neural activity in the brain leads to the behavior we exhibit is one of the fundamental questions in Neuroscience. In this dissertation, several lines of work are presented that use principles of neural coding to understand behavior. In one line of work, we formulate the efficient coding hypothesis in a non-traditional manner in order to test human perceptual sensitivity to complex visual textures. We find a striking agreement between how variable a particular texture signal is and how sensitive humans are to its presence. This reveals that the efficient coding hypothesis is still a guiding principle for neural organization beyond the sensory periphery, and that the nature of cortical constraints differs from the peripheral counterpart. In another line of work, we relate frequency discrimination acuity to neural responses from auditory cortex in mice. It has been previously observed that optogenetic manipulation of auditory cortex, in addition to changing neural responses, evokes changes in behavioral frequency discrimination. We are able to account for changes in frequency discrimination acuity on an individual basis by examining the Fisher information from the neural population with and without optogenetic manipulation. In the third line of work, we address the question of what a neural population should encode given that its inputs are responses from another group of neurons. Drawing inspiration from techniques in machine learning, we train Deep Belief Networks on simulated retinal data and show the emergence of Gabor-like filters, reminiscent of responses in primary visual cortex. In the last line of work, we model the state of a cortical excitatory-inhibitory network during complex adaptive stimuli. Using a rate model with Wilson-Cowan dynamics, we demonstrate that simple non-linearities in the signal transferred from inhibitory to excitatory neurons can account for real neural recordings taken from auditory cortex. This work establishes and tests

  12. Proteolytic Activation of the Porcine Epidemic Diarrhea Coronavirus Spike Fusion Protein by Trypsin in Cell Culture.

    NARCIS (Netherlands)

    Wicht, Oliver|info:eu-repo/dai/nl/32291177X; Li, Wentao|info:eu-repo/dai/nl/411296272; Willems, Lione; Meuleman, Tom J; Wubbolts, Richard W|info:eu-repo/dai/nl/181688255; van Kuppeveld, Frank J M|info:eu-repo/dai/nl/156614723; Rottier, Peter J M|info:eu-repo/dai/nl/068451954; Bosch, Berend Jan|info:eu-repo/dai/nl/273306049

    2014-01-01

    Isolation of porcine epidemic diarrhea coronavirus (PEDV) from clinical material in cell culture requires supplementation of trypsin. This may relate to the confinement of PEDV natural infection to the protease-rich small intestine of pigs. Our study focused on the role of protease activity on

  13. Pharmacodynamics of remifentanil. Induced intracranial spike activity in mesial temporal lobe epilepsy

    DEFF Research Database (Denmark)

    Kjær, Troels Wesenberg; Hogenhaven, Hans; Lee, Andrea P

    2017-01-01

    activity in the temporal neocortex and hippocampus. We examined 65 patients with mesial temporal lobe epilepsy during surgery, prior to resection. We used a 20-lead grid on the cortex and a 4-lead strip in the lateral ventricle on the hippocampus. At least two 3-min periods of ECoG were recorded - before...

  14. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    Full Text Available One type of future, improved neural interface is the “cultured probe”. It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA) on a planar substrate, each electrode being covered and surrounded by a local, circularly confined network (“island”) of cultured neurons. The main purpose of the local networks is that they act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron–electrode interface. As a secondary purpose, one may envisage future information processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to “islands” surrounding an electrode site. Additional coating of the neurophobic, polyimide-coated substrate with a triblock copolymer enhances the neurophilic–neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned, unconnected and connected, circular “island” networks. For connected islands, the larger the island diameter (50, 100 or 150 μm), the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV), which is two weeks later than in unpatterned networks.

  15. Differential entrainment and learning-related dynamics of spike and local field potential activity in the sensorimotor and associative striatum.

    Science.gov (United States)

    Thorn, Catherine A; Graybiel, Ann M

    2014-02-19

    Parallel cortico-basal ganglia loops are thought to have distinct but interacting functions in motor learning and habit formation. In rats, the striatal projection neuron populations (MSNs) in the dorsolateral and dorsomedial striatum, respectively corresponding to sensorimotor and associative regions of the striatum, exhibit contrasting dynamics as rats acquire T-maze tasks (Thorn et al., 2010). Here, we asked whether these patterns could be related to the activity of local interneuron populations in the striatum and to the local field potential activity recorded simultaneously in the corresponding regions. We found that dorsolateral and dorsomedial striatal fast-spiking interneurons exhibited task-specific and training-related dynamics consistent with those of corresponding MSN populations. Moreover, both MSNs and interneuron populations in both regions became entrained to theta-band (5-12 Hz) frequencies during task acquisition. However, the predominant entrainment frequencies were different for the sensorimotor and associative zones. Dorsolateral striatal neurons became entrained mid-task to oscillations centered ∼ 5 Hz, whereas simultaneously recorded neurons in the dorsomedial region became entrained to higher frequency (∼ 10 Hz) rhythms. These region-specific patterns of entrainment evolved dynamically with the development of region-specific patterns of interneuron and MSN activity, indicating that, with learning, these two striatal regions can develop different frequency-modulated circuit activities in parallel. We suggest that such differential entrainment of sensorimotor and associative neuronal populations, acquired through learning, could be critical for coordinating information flow throughout each trans-striatal network while simultaneously enabling nearby components of the separate networks to operate independently.
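
    Entrainment of spikes to a band-limited rhythm of the kind reported here is commonly quantified with a phase-locking measure; a hedged sketch on synthetic data is shown below. The band edges, filter order, and the phase-locking value are generic choices for illustration, not the analysis pipeline of this study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(6)
fs = 1000.0
t = np.arange(0, 20.0, 1 / fs)

# Synthetic LFP with a 10 Hz rhythm plus noise, and spikes biased toward the
# trough of that rhythm (i.e., genuinely entrained).
lfp = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
rate = 5.0 * (1.0 + 0.8 * np.sin(2 * np.pi * 10 * t + np.pi))
spike_idx = np.where(rng.random(t.size) < rate / fs)[0]

def phase_locking(lfp, spike_idx, fs, band=(5.0, 12.0)):
    """Mean resultant length of spike phases w.r.t. the band-passed LFP."""
    b, a = butter(3, np.array(band) / (fs / 2), btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))
    phases = phase[spike_idx]
    return np.abs(np.mean(np.exp(1j * phases))), phases

plv, phases = phase_locking(lfp, spike_idx, fs)
print(f"{spike_idx.size} spikes, phase-locking value ~ {plv:.2f}")
```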

  16. Parallel optical control of spatiotemporal neuronal spike activity using high-frequency digital light processing technology

    Directory of Open Access Journals (Sweden)

    Jason eJerome

    2011-08-01

    Full Text Available Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses Digital Light Processing (DLP) technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 µm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber providing access to a large area (2.76 × 2.07 mm²) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.

  17. Parallel optical control of spatiotemporal neuronal spike activity using high-speed digital light processing.

    Science.gov (United States)

    Jerome, Jason; Foehring, Robert C; Armstrong, William E; Spain, William J; Heck, Detlef H

    2011-01-01

    Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses digital light processing technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 μm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber providing access to a large area (2.76 mm × 2.07 mm) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.

  18. Memristors Empower Spiking Neurons With Stochasticity

    KAUST Repository

    Al-Shedivat, Maruan

    2015-06-01

    Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning algorithms. However, such systems must have two crucial features: 1) the neurons should follow a specific behavioral model, and 2) stochastic spiking should be implemented efficiently for it to be scalable. This paper proposes a memristor-based stochastically spiking neuron that fulfills these requirements. First, the analytical model of the memristor is enhanced so it can capture the behavioral stochasticity consistent with experimentally observed phenomena. The switching behavior of the memristor model is demonstrated to be akin to the firing of the stochastic spike response neuron model, the primary building block for probabilistic algorithms in spiking neural networks. Furthermore, the paper proposes a neural soma circuit that utilizes the intrinsic nondeterminism of memristive switching for efficient spike generation. The simulations and analysis of the behavior of a single stochastic neuron and a winner-take-all network built of such neurons and trained on handwritten digits confirm that the circuit can be used for building probabilistic sampling and pattern adaptation machinery in spiking networks. The findings constitute an important step towards scalable and efficient probabilistic neuromorphic platforms. © 2011 IEEE.

  19. Locking of correlated neural activity to ongoing oscillations.

    Directory of Open Access Journals (Sweden)

    Tobias Kühn

    2017-06-01

    Full Text Available Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to be a separate channel of signal processing information in the brain. A salient question is therefore if and how oscillations interact with spike synchrony and in how far these channels can be considered separate. Experiments indeed showed that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta-oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations tightly lock to an oscillatory cycle. We here demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically-driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to a quantitative analysis.

  20. Improved SpikeProp for Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Falah Y. H. Ahmed

    2013-01-01

    Full Text Available A spiking neural network encodes information in the timing of individual spikes. A novel supervised learning rule for SpikeProp is derived to overcome the discontinuities introduced by spike thresholding. The algorithm is based on an error-backpropagation learning rule suited for supervised learning of spiking neurons that use exact spike-time coding, and SpikeProp demonstrates that spiking neurons can perform complex nonlinear classification with fast temporal coding. This study proposes enhancements of the SpikeProp learning algorithm for supervised training of spiking networks that can deal with complex patterns. The proposed methods include SpikeProp with particle swarm optimization (PSO) and an angle-driven dependency learning rate. These methods are applied to the SpikeProp network to enhance multilayer learning and optimize the weights. Input and output patterns are encoded as spike trains of precisely timed spikes, and the network learns to transform the input trains into target output trains. With these enhancements, the proposed methods outperformed other conventional neural network architectures.
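
    The particle-swarm component can be sketched generically as below: a swarm of candidate weight vectors is updated from personal and global bests to minimize a network error. The SpikeProp-specific error (differences between actual and target output spike times) is replaced by a placeholder quadratic, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def pso_minimize(loss, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Generic particle swarm optimization over a weight vector."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions (weights)
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_val = np.array([loss(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if vals.min() < loss(gbest):
            gbest = x[vals.argmin()].copy()
    return gbest, loss(gbest)

# Placeholder error: in SpikeProp it would be the squared difference between
# actual and target output spike times; a simple quadratic stands in here.
target = rng.uniform(-1, 1, 12)
best_w, best_err = pso_minimize(lambda p: np.sum((p - target) ** 2), dim=12)
print(f"best error after PSO: {best_err:.4f}")
```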

  1. Neural activity predicts attitude change in cognitive dissonance.

    Science.gov (United States)

    van Veen, Vincent; Krug, Marie K; Schooler, Jonathan W; Carter, Cameron S

    2009-11-01

    When our actions conflict with our prior attitudes, we often change our attitudes to be more consistent with our actions. This phenomenon, known as cognitive dissonance, is considered to be one of the most influential theories in psychology. However, the neural basis of this phenomenon is unknown. Using a Solomon four-group design, we scanned participants with functional MRI while they argued that the uncomfortable scanner environment was nevertheless a pleasant experience. We found that cognitive dissonance engaged the dorsal anterior cingulate cortex and anterior insula; furthermore, we found that the activation of these regions tightly predicted participants' subsequent attitude change. These effects were not observed in a control group. Our findings elucidate the neural representation of cognitive dissonance, and support the role of the anterior cingulate cortex in detecting cognitive conflict and the neural prediction of attitude change.

  2. High solar activity predictions through an artificial neural network

    Science.gov (United States)

    Orozco-Del-Castillo, M. G.; Ortiz-Alemán, J. C.; Couder-Castañeda, C.; Hernández-Gómez, J. J.; Solís-Santomé, A.

    The effects of high-energy particles coming from the Sun on human health as well as in the integrity of outer space electronics make the prediction of periods of high solar activity (HSA) a task of significant importance. Since periodicities in solar indexes have been identified, long-term predictions can be achieved. In this paper, we present a method based on an artificial neural network to find a pattern in some harmonics which represent such periodicities. We used data from 1973 to 2010 to train the neural network, and different historical data for its validation. We also used the neural network along with a statistical analysis of its performance with known data to predict periods of HSA with different confidence intervals according to the three-sigma rule associated with solar cycles 24-26, which we found to occur before 2040.

  3. Death and rebirth of neural activity in sparse inhibitory networks

    Science.gov (United States)

    Angulo-Garcia, David; Luccioli, Stefano; Olmi, Simona; Torcini, Alessandro

    2017-05-01

    Inhibition is a key aspect of neural dynamics playing a fundamental role for the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures, and the comprehension of their dynamics is strategical for the understanding of neural processing. In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of neural activity, as expected, but can also promote neural re-activation. In particular, for globally coupled systems, the number of firing neurons monotonically reduces upon increasing the strength of inhibition (neuronal death). However, the random pruning of connections is able to reverse the action of inhibition, i.e. in a random sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of neurons (neuronal rebirth). Thus, the number of firing neurons reaches a minimum value at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by neurons with a higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving a mean field formulation of the problem able to provide the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic, and the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.

  4. Neural activity when people solve verbal problems with insight.

    Directory of Open Access Journals (Sweden)

    Mark Jung-Beeman

    2004-04-01

    Full Text Available People sometimes solve problems with a unique process called insight, accompanied by an "Aha!" experience. It has long been unclear whether different cognitive and neural processes lead to insight versus noninsight solutions, or if solutions differ only in subsequent subjective feeling. Recent behavioral studies indicate distinct patterns of performance and suggest differential hemispheric involvement for insight and noninsight solutions. Subjects solved verbal problems, and after each correct solution indicated whether they solved with or without insight. We observed two objective neural correlates of insight. Functional magnetic resonance imaging (Experiment 1) revealed increased activity in the right hemisphere anterior superior temporal gyrus for insight relative to noninsight solutions. The same region was active during initial solving efforts. Scalp electroencephalogram recordings (Experiment 2) revealed a sudden burst of high-frequency (gamma-band) neural activity in the same area beginning 0.3 s prior to insight solutions. This right anterior temporal area is associated with making connections across distantly related information during comprehension. Although all problem solving relies on a largely shared cortical network, the sudden flash of insight occurs when solvers engage distinct neural and cognitive processes that allow them to see connections that previously eluded them.

  5. Neural signal registration and analysis of axons grown in microchannels

    Science.gov (United States)

    Pigareva, Y.; Malishev, E.; Gladkov, A.; Kolpakov, V.; Bukatin, A.; Mukhina, I.; Kazantsev, V.; Pimashkin, A.

    2016-08-01

    Registration of neuronal bioelectrical signals remains one of the main physical tools for studying fundamental mechanisms of signal processing in the brain. Neurons generate spiking patterns which propagate through a complex map of neural network connectivity. Extracellular recording of isolated axons grown in microchannels provides amplification of the signal for detailed study of spike propagation. In this study we used neuronal hippocampal cultures grown in microfluidic devices combined with microelectrode arrays to investigate changes in electrical activity during neural network development. We found that, 5 days in vitro after culture plating, spiking activity appears first in the microchannels and over the next 2-3 days appears on the electrodes of the overall neural network. We conclude that this approach provides a convenient method to study neural signal processing and the development of functional structure at the single-cell and network levels of the neuronal culture.

  6. Neural activations correlated with reading speed during reading novels.

    Science.gov (United States)

    Fujimaki, Norio; Munetsuna, Shinji; Sasaki, Toyofumi; Hayakawa, Tomoe; Ihara, Aya; Wei, Qiang; Terazono, Yasushi; Murata, Tsutomu

    2009-12-01

    Functional magnetic resonance imaging was used to measure neural activations in subjects instructed to silently read novels at ordinary and rapid speeds. Among the 19 subjects, 8 were experts in a rapid reading technique. Subjects pressed a button to turn pages during reading, and the interval between turning pages was recorded to evaluate the reading speed. For each subject, we evaluated activations in 14 areas and at 2 instructed reading speeds. Neural activations decreased with increasing reading speed in the left middle and posterior superior temporal area, left inferior frontal area, left precentral area, and the anterior temporal areas of both hemispheres, which have been reported to be active for linguistic processes, while neural activation increased with increasing reading speed in the right intraparietal sulcus, which is considered to reflect visuo-spatial processes. Despite the considerable reading speed differences, correlation analysis showed no significant difference in activation dependence on reading speed with respect to the subject groups and instructed reading speeds. The activation reduction with speed increase in language-related areas was opposite to the previous reports for low reading speeds. The present results suggest that subjects reduced linguistic processes with reading speed increase from ordinary to rapid speed.

  7. Operant conditioning of neural activity in freely behaving monkeys with intracranial reinforcement.

    Science.gov (United States)

    Eaton, Ryan W; Libey, Tyler; Fetz, Eberhard E

    2017-03-01

    Operant conditioning of neural activity has typically been performed under controlled behavioral conditions using food reinforcement. This has limited the duration and behavioral context for neural conditioning. To reward cell activity in unconstrained primates, we sought sites in nucleus accumbens (NAc) whose stimulation reinforced operant responding. In three monkeys, NAc stimulation sustained performance of a manual target-tracking task, with response rates that increased monotonically with increasing NAc stimulation. We recorded activity of single motor cortex neurons and documented their modulation with wrist force. We conditioned increased firing rates with the monkey seated in the training booth and during free behavior in the cage using an autonomous head-fixed recording and stimulating system. Spikes occurring above baseline rates triggered single or multiple electrical pulses to the reinforcement site. Such rate-contingent, unit-triggered stimulation was made available for periods of 1-3 min separated by 3-10 min time-out periods. Feedback was presented as event-triggered clicks both in-cage and in-booth, and visual cues were provided in many in-booth sessions. In-booth conditioning produced increases in single neuron firing probability with intracranial reinforcement in 48 of 58 cells. Reinforced cell activity could rise more than five times that of non-reinforced activity. In-cage conditioning produced significant increases in 21 of 33 sessions. In-cage rate changes peaked later and lasted longer than in-booth changes, but were often comparatively smaller, between 13 and 18% above non-reinforced activity. Thus intracranial stimulation reinforced volitional increases in cortical firing rates during both free behavior and a controlled environment, although changes in the latter were more robust.NEW & NOTEWORTHY Closed-loop brain-computer interfaces (BCI) were used to operantly condition increases in muscle and neural activity in monkeys by delivering

  8. Temporally coordinated spiking activity of human induced pluripotent stem cell-derived neurons co-cultured with astrocytes.

    Science.gov (United States)

    Kayama, Tasuku; Suzuki, Ikuro; Odawara, Aoi; Sasaki, Takuya; Ikegaya, Yuji

    2018-01-01

    In culture conditions, human induced pluripotent stem cell (hiPSC)-derived neurons form synaptic connections with other cells and establish neuronal networks, which are expected to be an in vitro model system for drug discovery screening and toxicity testing. While early studies demonstrated effects of co-culture of hiPSC-derived neurons with astroglial cells on the survival and maturation of hiPSC-derived neurons, the population spiking patterns of such hiPSC-derived neurons have not been fully characterized. In this study, we analyzed temporal spiking patterns of hiPSC-derived neurons recorded by a multi-electrode array system. We discovered that specific sets of hiPSC-derived neurons co-cultured with astrocytes showed more frequent and highly coherent non-random synchronized spike trains and more dynamic changes in overall spike patterns over time. These temporally coordinated spiking patterns are physiological signs of organized circuits of hiPSC-derived neurons and suggest benefits of co-culture of hiPSC-derived neurons with astrocytes. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. On the genesis of spike-wave oscillations in a mean-field model of human thalamic and corticothalamic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, Serafim [Department of Mathematical Sciences, Loughborough University, Leicestershire, LE11 3TU (United Kingdom); Terry, John R. [Department of Mathematical Sciences, Loughborough University, Leicestershire, LE11 3TU (United Kingdom)]. E-mail: j.r.terry@lboro.ac.uk; Breakspear, Michael [Black Dog Institute, Randwick, NSW 2031 (Australia); School of Psychiatry, UNSW, NSW 2030 (Australia)

    2006-07-10

    In this Letter, the genesis of spike-wave activity, a hallmark of many generalized epileptic seizures, is investigated in a reduced mean-field model of human neural activity. Drawing upon brain modelling and dynamical systems theory, we demonstrate that the thalamic circuitry of the system is crucial for the generation of these abnormal rhythms, observing that inhibition from the reticular nuclei and excitation from the cortical signal interplay to generate the spike-wave oscillation. The mechanism revealed provides an explanation of why approaches based on linear stability and Heaviside approximations to the activation function have failed to explain the phenomenon of spike-wave behaviour in mean-field models. A mathematical understanding of this transition is a crucial step towards relating spiking network models and mean-field approaches to human brain modelling.

  10. Early interfaced neural activity from chronic amputated nerves

    Directory of Open Access Journals (Sweden)

    Kshitija Garde

    2009-05-01

    Full Text Available Direct interfacing of transected peripheral nerves with advanced robotic prosthetic devices has been proposed as a strategy for achieving natural motor control and sensory perception of such bionic substitutes, thus fully functionally replacing missing limbs in amputees. Multi-electrode arrays placed in the brain and peripheral nerves have been used successfully to convey neural control of prosthetic devices to the user. However, reactive gliosis, microhemorrhages, axonopathy and excessive inflammation currently limit their long-term use. Here we demonstrate that enticement of peripheral nerve regeneration through a non-obstructive multi-electrode array, after either acute or chronic nerve amputation, offers a viable alternative to obtain early neural recordings and to enhance long-term interfacing of nerve activity. Non-restrictive electrode arrays placed in the path of regenerating nerve fibers allowed the recording of action potentials as early as 8 days post-implantation with high signal-to-noise ratio, for as long as 3 months in some animals, and with minimal inflammation at the nerve tissue-metal electrode interface. Our findings suggest that regenerative multi-electrode arrays of open design allow the early and stable interfacing of neural activity from amputated peripheral nerves and might contribute towards conveying full neural control and sensory feedback to users of robotic prosthetic devices.

  11. Multiphoton minimal inertia scanning for fast acquisition of neural activity signals.

    Science.gov (United States)

    Schuck, Renaud; Go, Mary Ann; Garasto, Stefania; Reynolds, Stephanie; Dragotti, Pier Luigi; Schultz, Simon

    2017-11-13

    Multi-photon laser scanning microscopy provides a powerful tool for monitoring the spatiotemporal dynamics of neural circuit activity. It is, however, intrinsically a point scanning technique. Standard raster scanning enables imaging at subcellular resolution; however, acquisition rates are limited by the size of the field of view to be scanned. Scanning strategies such as Travelling Salesman Scanning (TSS) have recently been developed to maximize the cellular sampling rate by scanning only select regions in the field of view corresponding to locations of interest such as somata. However, such strategies are not optimized for the mechanical properties of galvanometric scanners. We thus aimed to develop a new scanning algorithm which produces minimal inertia trajectories, and compare its performance with existing scanning algorithms. Approach: We describe here the Adaptive Spiral Scanning (SSA) algorithm, which fits a set of near-circular trajectories to the cellular distribution to avoid inertial drifts of galvanometer position. We compare its performance to raster scanning and TSS in terms of cellular sampling frequency and signal-to-noise ratio (SNR). Main Results: Using surrogate neuron spatial position data, we show that SSA acquisition rates are an order of magnitude higher than those for raster scanning and generally exceed those achieved by TSS for neural densities comparable with those found in the cortex. We show that this result also holds true for in vitro hippocampal mouse brain slices bath loaded with the synthetic calcium dye Cal-520 AM. The ability of TSS to "park" the laser on each neuron along the scanning trajectory, however, enables higher SNR than SSA when all targets are precisely scanned. Raster scanning has the highest SNR but at a substantial cost in number of cells scanned. To understand the impact of sampling rate and SNR on functional calcium imaging, we used the Cramér-Rao bound on evoked calcium traces recorded

  12. Spike-contrast: A novel time scale independent and multivariate measure of spike train synchrony.

    Science.gov (United States)

    Ciba, Manuel; Isomura, Takuya; Jimbo, Yasuhiko; Bahmer, Andreas; Thielemann, Christiane

    2018-01-01

    Synchrony is thought to be a fundamental feature of neuronal networks. In order to quantify synchrony between spike trains, various synchrony measures have been developed. Most of them are time scale dependent and thus require the setting of an appropriate time scale. Recently, alternative methods have been developed, such as the time scale independent SPIKE-distance by Kreuz et al. In this study, a novel time-scale independent spike train synchrony measure called Spike-contrast is proposed. The algorithm is based on the temporal "contrast" (activity vs. non-activity in certain temporal bins) and not only provides a single synchrony value, but also a synchrony curve as a function of the bin size. For most test data sets synchrony values obtained with Spike-contrast are highly correlated with those of the SPIKE-distance (Spearman correlation value of 0.99). Correlation was lower for data containing multiple time scales (Spearman correlation value of 0.89). When analyzing large sets of data, Spike-contrast performed faster. Spike-contrast is compared to the SPIKE-distance algorithm. The test data consisted of artificial spike trains with various levels of synchrony, including Poisson spike trains and bursts, spike trains from simulated neuronal Izhikevich networks, and bursts made of smaller bursts (sub-bursts). The high correlation of Spike-contrast with the established SPIKE-distance for most test data suggests the suitability of the proposed measure. Both measures are complementary as SPIKE-distance provides a synchrony profile over time, whereas Spike-contrast provides a synchrony curve over bin size. Copyright © 2017 Elsevier B.V. All rights reserved.
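
    The bin-size-dependent view of synchrony can be sketched as below: spike trains are binned at a range of bin sizes and a simple synchrony value is computed for each, yielding a curve over bin size. The measure used here (fraction of spikes falling in bins where at least half the neurons are active) is a simplified stand-in, not the published Spike-contrast algorithm.

```python
import numpy as np

rng = np.random.default_rng(8)

def synchrony_curve(trains, T, bin_sizes):
    """Toy multi-bin-size synchrony curve: for each bin size, the fraction of
    spikes falling into bins in which at least half of the recorded neurons
    are active. (Simplified stand-in, not the published Spike-contrast.)"""
    n = len(trains)
    curve = []
    for bs in bin_sizes:
        edges = np.arange(0.0, T + bs, bs)
        per_neuron = np.array([np.histogram(tr, edges)[0] for tr in trains])
        active_neurons = (per_neuron > 0).sum(axis=0)
        sync_bins = active_neurons >= n / 2.0
        total = per_neuron.sum()
        curve.append(per_neuron[:, sync_bins].sum() / total if total else 0.0)
    return np.array(curve)

# Synthetic data: neurons share synchronous events plus independent spikes.
T, n_neurons = 30.0, 10
events = np.sort(rng.uniform(0, T, 60))
trains = [np.sort(np.concatenate([events + rng.normal(0, 0.01, events.size),
                                  rng.uniform(0, T, 60)]))
          for _ in range(n_neurons)]

bin_sizes = np.logspace(-3, 0, 15)          # 1 ms ... 1 s
curve = synchrony_curve(trains, T, bin_sizes)
print("peak synchrony %.2f at bin size %.3f s"
      % (curve.max(), bin_sizes[curve.argmax()]))
```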

  13. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.

  14. Spike Timing Matters in Novel Neuronal Code Involved in Vibrotactile Frequency Perception.

    Science.gov (United States)

    Birznieks, Ingvars; Vickery, Richard M

    2017-05-22

    Skin vibrations sensed by tactile receptors contribute significantly to the perception of object properties during tactile exploration [1-4] and to sensorimotor control during object manipulation [5]. Sustained low-frequency skin vibration (perception of frequency is still unknown. Measures based on mean spike rates of neurons in the primary somatosensory cortex are sufficient to explain performance in some frequency discrimination tasks [7-11]; however, there is emerging evidence that stimuli can be distinguished based also on temporal features of neural activity [12, 13]. Our study's advance is to demonstrate that temporal features are fundamental for vibrotactile frequency perception. Pulsatile mechanical stimuli were used to elicit specified temporal spike train patterns in tactile afferents, and subsequently psychophysical methods were employed to characterize human frequency perception. Remarkably, the most salient temporal feature determining vibrotactile frequency was not the underlying periodicity but, rather, the duration of the silent gap between successive bursts of neural activity. This burst gap code for frequency represents a previously unknown form of neural coding in the tactile sensory system, which parallels auditory pitch perception mechanisms based on purely temporal information where longer inter-pulse intervals receive higher perceptual weights than short intervals [14]. Our study also demonstrates that human perception of stimuli can be determined exclusively by temporal features of spike trains independent of the mean spike rate and without contribution from population response factors. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Activity and population dynamics of heterotrophic and ammonia-oxidizing microorganisms in soil surrounding sludge bands spiked with linear alkylbenzene sulfonate

    DEFF Research Database (Denmark)

    Brandt, K. K.; Sørensen, J.; Krogh, P. H.

    2003-01-01

    Recent research has documented soil microorganisms to be rather sensitive to linear alkylbenzene sulfonates (LAS), which may enter the soil environment in considerable quantities following sewage sludge disposal. We here report field effects of LAS on selected microbial populations present in a sandy soil surrounding well-defined sludge bands spiked with high but realistic LAS levels (7.1 or 31.3 g/kg). Surprisingly, LAS had no effect on heterotrophic respiration in the sludge compartment per se but stimulated activity and metabolic quotient (microbial activity per unit of biomass) in the surrounding soil. By contrast, autotrophic ammonia oxidation was initially inhibited in the LAS-spiked sludge. This led to dramatic transient increases of NH4+ availability in the sludge and surrounding soil, subsequently stimulating soil ammonia oxidizers. As judged from a Nitrosomonas europaea...

  16. A stationary wavelet transform and a time-frequency based spike detection algorithm for extracellular recorded data

    Science.gov (United States)

    Lieb, Florian; Stark, Hans-Georg; Thielemann, Christiane

    2017-06-01

    Objective. Spike detection from extracellular recordings is a crucial preprocessing step when analyzing neuronal activity. The decision whether a specific part of the signal is a spike or not is important for any kind of other subsequent preprocessing steps, like spike sorting or burst detection in order to reduce the classification of erroneously identified spikes. Many spike detection algorithms have already been suggested, all working reasonably well whenever the signal-to-noise ratio is large enough. When the noise level is high, however, these algorithms have a poor performance. Approach. In this paper we present two new spike detection algorithms. The first is based on a stationary wavelet energy operator and the second is based on the time-frequency representation of spikes. Both algorithms are more reliable than all of the most commonly used methods. Main results. The performance of the algorithms is confirmed by using simulated data, resembling original data recorded from cortical neurons with multielectrode arrays. In order to demonstrate that the performance of the algorithms is not restricted to only one specific set of data, we also verify the performance using a simulated publicly available data set. We show that both proposed algorithms have the best performance under all tested methods, regardless of the signal-to-noise ratio in both data sets. Significance. This contribution will redound to the benefit of electrophysiological investigations of human cells. Especially the spatial and temporal analysis of neural network communications is improved by using the proposed spike detection algorithms.
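
    A generic version of such a detection pipeline is sketched below using the nonlinear (Teager) energy operator with a robust, median-based threshold on synthetic data; the stationary-wavelet and time-frequency operators proposed in the paper are not reproduced, and all signal parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
fs = 25000.0

# Synthetic extracellular trace: Gaussian noise plus 40 brief negative spikes.
n = int(1.0 * fs)
trace = rng.normal(0.0, 4.0, n)
spike_pos = rng.choice(np.arange(100, n - 100), 40, replace=False)
wave = -80.0 * np.exp(-0.5 * (np.arange(-10, 11) / 1.5) ** 2)
for s in spike_pos:
    trace[s - 10:s + 11] += wave

def detect_spikes(x, fs, k=10.0, refractory=0.001):
    """Generic threshold detector on the Teager energy operator
    psi[n] = x[n]^2 - x[n-1]*x[n+1], with a robust (median-based) noise
    estimate and a refractory period. In practice the trace is usually
    band-pass filtered first; the paper's detectors use a stationary
    wavelet energy operator and a time-frequency representation instead."""
    psi = x[1:-1] ** 2 - x[:-2] * x[2:]
    thr = k * np.median(np.abs(psi)) / 0.6745
    above = np.where(psi > thr)[0] + 1
    detected, last = [], -np.inf
    for i in above:
        if (i - last) / fs > refractory:
            detected.append(i)
            last = i
    return np.array(detected)

det = detect_spikes(trace, fs)
print(f"inserted {spike_pos.size} spikes, detected {det.size} events")
```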

  17. Monitoring activity in neural circuits with genetically encoded indicators

    Directory of Open Access Journals (Sweden)

    Gerard Joseph Broussard

    2014-12-01

    Full Text Available Recent developments in genetically encoded indicators of neural activity (GINAs) have greatly advanced the field of systems neuroscience. As they are encoded by DNA, GINAs can be targeted to genetically defined cellular populations. Combined with fluorescence microscopy, most notably multi-photon imaging, GINAs allow chronic simultaneous optical recordings from large populations of neurons or glial cells in awake, behaving mammals, particularly rodents. This large-scale recording of neural activity at multiple temporal and spatial scales has greatly advanced our understanding of the dynamics of neural circuitry underlying behavior, a critical first step toward understanding the complexities of brain function, such as sensorimotor integration and learning. Here, we summarize the recent development and applications of the major classes of GINAs. In particular, we take an in-depth look at the design of available GINA families with a particular focus on genetically encoded calcium indicators, sensors probing synaptic activity, and genetically encoded voltage indicators. Using the family of the genetically encoded calcium indicator GCaMP as an example, we review established sensor optimization pipelines. We also discuss practical considerations for end users of GINAs about experimental methods including approaches for gene delivery, imaging system requirements, and data analysis techniques. With the growing toolbox of GINAs and with new microscopy techniques pushing beyond their current limits, the age of light can finally achieve the goal of broad and dense sampling of neuronal activity across time and brain structures to obtain a dynamic picture of brain function.

  18. A reanalysis of “Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons” [version 1; referees: 2 approved]

    OpenAIRE

    Rainer Engelken; Farzad Farkhooi; David Hansel; Carl van Vreeswijk; Fred Wolf

    2016-01-01

    Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks o...

  19. Advanced target identification in STN-DBS with beta power of combined local field potentials and spiking activity.

    Science.gov (United States)

    Verhagen, Rens; Zwartjes, Daphne G M; Heida, Tjitske; Wiegers, Evita C; Contarino, M Fiorella; de Bie, Rob M A; van den Munckhof, Pepijn; Schuurman, P Richard; Veltink, Peter H; Bour, Lo J

    2015-09-30

    In deep brain stimulation of the subthalamic nucleus (STN-DBS) for Parkinson's Disease (PD), microelectrode recordings (MER) are often used for STN identification. However, for advanced target identification of the sensorimotor STN, it may be relevant to use local field potential (LFP) recordings. It is then important to ensure that the measured oscillations originate from the close proximity of the electrode. Through multiple simultaneous recordings of LFP and neuronal spiking, we investigated the temporal relationship between local neuronal spiking and more global LFP. We analyzed the local oscillations in the LFP by calculating power only over specific frequencies that show a significant coherence between LFP and neuronal spiking. Using this 'coherence method', we investigated how well measurements in the sensorimotor STN could be discriminated from measurements elsewhere in the STN. The 'sensorimotor power index' (SMPI) of beta frequencies, representing the ability to discriminate sensorimotor STN measurements based on the beta power, was significantly larger using the 'coherence method' for LFP spectral analysis compared to other methods where either the complete LFP beta spectrum or only the prominent peaks in the LFP beta spectrum were used to calculate beta power. The results suggest that due to volume conduction of beta frequency oscillations, proper localization of the sensorimotor STN with only LFP recordings is difficult. However, combining recordings of LFP and neuronal spiking and calculating beta power over the coherent parts of the LFP spectrum can be beneficial in discriminating the sensorimotor STN. Copyright © 2015 Elsevier B.V. All rights reserved.
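
    The core idea, restricting LFP beta power to frequencies that are coherent with locally recorded spiking, can be sketched as follows; this is not the authors' exact SMPI computation, and the coherence threshold, segment length, and function name are illustrative placeholders (Python).

        # Minimal sketch of restricting LFP beta power to frequencies coherent with
        # local spiking (not the authors' exact SMPI computation).
        import numpy as np
        from scipy.signal import coherence, welch

        def coherent_beta_power(lfp, spike_times, fs, beta=(13.0, 30.0), coh_thresh=0.2):
            """Sum LFP power over beta-band frequencies coherent with local spiking.

            coh_thresh is a placeholder; the paper uses a proper significance criterion.
            """
            # Bin the spike train at the LFP sampling rate (0/1 per sample)
            spikes = np.zeros_like(lfp)
            idx = (np.asarray(spike_times) * fs).astype(int)
            spikes[idx[idx < len(lfp)]] = 1.0

            nperseg = int(fs)  # 1-s segments; same grid for coherence and PSD
            f, coh = coherence(lfp, spikes, fs=fs, nperseg=nperseg)
            _, pxx = welch(lfp, fs=fs, nperseg=nperseg)

            in_beta = (f >= beta[0]) & (f <= beta[1])
            coherent_bins = in_beta & (coh > coh_thresh)
            return pxx[coherent_bins].sum(), f[coherent_bins]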

  20. Persistent activity in neural networks with dynamic synapses.

    Directory of Open Access Journals (Sweden)

    Omri Barak

    2007-02-01

    Full Text Available Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.

  1. Functional signature of recovering cortex: dissociation of local field potentials and spiking activity in somatosensory cortices of spinal cord injured monkeys.

    Science.gov (United States)

    Wang, Zheng; Qi, Hui-Xin; Kaas, Jon H; Roe, Anna W; Chen, Li Min

    2013-11-01

    After disruption of dorsal column afferents at high cervical spinal levels in adult monkeys, somatosensory cortical neurons recover responsiveness to tactile stimulation of the hand; this reactivation correlates with a recovery of hand use. However, it is not known if all neuronal response properties recover, and whether different cortical areas recover in a similar manner. To address this, we recorded neuronal activity in cortical area 3b and S2 in adult squirrel monkeys weeks after unilateral lesion of the dorsal columns. We found that in response to vibrotactile stimulation, local field potentials remained robust at all frequency ranges. However, neuronal spiking activity failed to follow at high frequencies (≥15 Hz). We suggest that the failure to generate spiking activity at high stimulus frequency reflects a changed balance of inhibition and excitation in both area 3b and S2, and that this mismatch in spiking and local field potential is a signature of an early phase of recovering cortex (

  2. Development of modularity in the neural activity of children's brains

    OpenAIRE

    Chen, Man; Deem, Michael W.

    2015-01-01

    We study how modularity of the human brain changes as children develop into adults. Theory suggests that modularity can enhance the response function of a networked system subject to changing external stimuli. Thus, greater cognitive performance might be achieved for more modular neural activity, and modularity would be expected to increase as children develop. The value of modularity calculated from fMRI data is observed to increase during childhood development and peak in young adulthood. Head moti...

  3. Bioinorganic Life and Neural Activity: Toward a Chemistry of Consciousness?

    Science.gov (United States)

    Chang, Christopher J

    2017-03-21

    Identifying what elements are required for neural activity as a potential path toward consciousness, which represents life with the state or quality of awareness, is a "Holy Grail" of chemistry. As life itself arises from coordinated interactions between elements across the periodic table, the majority of which are metals, new approaches for analysis, binding, and control of these primary chemical entities can help enrich our understanding of inorganic chemistry in living systems in a context that is both universal and personal.

  4. Hybrid spiking models.

    Science.gov (United States)

    Izhikevich, Eugene M

    2010-11-13

    I review a class of hybrid models of neurons that combine continuous spike-generation mechanisms and a discontinuous 'after-spike' reset of state variables. Unlike Hodgkin-Huxley-type conductance-based models, the hybrid spiking models have a few parameters derived from the bifurcation theory; instead of matching neuronal electrophysiology, they match neuronal dynamics. I present a method of after-spike resetting suitable for hardware implementation of such models, and a hybrid numerical method for simulations of large-scale biological spiking networks.
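
    A minimal sketch of this class of models is the widely used simple model from the same author: a quadratic continuous flow for the membrane variables plus a discontinuous after-spike reset. The regular-spiking parameters and forward-Euler integration below are standard illustrative choices, not a prescription from the paper (Python).

        # Sketch of the simple hybrid spiking model popularized by Izhikevich (2003):
        # continuous quadratic dynamics plus a discontinuous after-spike reset.
        import numpy as np

        def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, v_peak=30.0):
            """Simulate the model for an input current trace I (arbitrary units)."""
            v, u = c, b * c
            spikes, v_trace = [], []
            for n, i_ext in enumerate(I):
                # Continuous part (forward Euler)
                v += dt * (0.04 * v * v + 5 * v + 140 - u + i_ext)
                u += dt * a * (b * v - u)
                # Hybrid part: after-spike reset of the state variables
                if v >= v_peak:
                    spikes.append(n * dt)
                    v, u = c, u + d
                v_trace.append(v)
            return np.array(spikes), np.array(v_trace)

        # Regular-spiking parameters (a, b, c, d) with a constant 10-unit input
        spike_times, v = izhikevich(np.full(2000, 10.0))
        print(len(spike_times), "spikes in", 2000 * 0.5, "ms")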

  5. Quantitative Comparative Analysis of the Bio-Active and Toxic Constituents of Leaves and Spikes of Schizonepeta tenuifolia at Different Harvesting Times

    Directory of Open Access Journals (Sweden)

    Anwei Ding

    2011-10-01

    Full Text Available A GC-MS-Selected Ion Monitoring (SIM) detection method was developed for simultaneous determination of four monoterpenes: (−)-menthone, (+)-pulegone, (−)-limonene and (+)-menthofuran as the main bio-active and toxic constituents, and four other main compounds in the volatile oils of Schizonepeta tenuifolia (ST) leaves and spikes at different harvesting times. The results showed that the method was simple, sensitive and reproducible, and that harvesting time was a possible key factor influencing the quality of ST leaves, but not its spikes. The research might be helpful for determining the harvesting time of ST samples and establishing a validated method for the quality control of ST volatile oil and other related products.

  6. Compensatory Neural Activity in Response to Cognitive Fatigue.

    Science.gov (United States)

    Wang, Chao; Trongnetrpunya, Amy; Samuel, Immanuel Babu Henry; Ding, Mingzhou; Kluger, Benzi M

    2016-04-06

    Prolonged continuous performance of a cognitively demanding task induces cognitive fatigue and is associated with a time-related deterioration of objective performance, the degree of which is referred to as cognitive fatigability. Although the neural underpinnings of cognitive fatigue are poorly understood, prior studies report changes in neural activity consistent with deterioration of task-related networks over time. While compensatory brain activity is reported to maintain motor task performance in the face of motor fatigue and cognitive performance in the face of other stressors (e.g., aging) and structural changes, there are no studies to date demonstrating compensatory activity for cognitive fatigue. High-density electroencephalography was recorded from human subjects during a 160 min continuous performance of a cognitive control task. While most time-varying neural activity showed a linear decline over time, we identified an evoked potential over the anterior frontal region which demonstrated an inverted U-shaped time-on-task profile. This evoked brain activity peaked between 60 and 100 min into the task and was positively associated with better behavioral performance only during this interval. Following the peak and during subsequent decline of this anterior frontal activity, the rate of performance decline also accelerated. These findings demonstrate that this anterior frontal brain activity, which is not part of the primary task-related activity at baseline, is recruited to compensate for fatigue-induced impairments in the primary task-related network, and that this compensation terminates as cognitive fatigue further progresses. These findings may be relevant to understanding individual differences in cognitive fatigability and developing interventions for clinical conditions afflicted by fatigue. Fatigue refers to changes in objective performance and subjective effort induced by continuous task performance. We examined the neural underpinnings of cognitive

  7. Which model to use for cortical spiking neurons?

    Science.gov (United States)

    Izhikevich, Eugene M

    2004-09-01

    We discuss the biological plausibility and computational efficiency of some of the most useful models of spiking and bursting neurons. We compare their applicability to large-scale simulations of cortical neural networks.

  8. The generation effect: activating broad neural circuits during memory encoding.

    Science.gov (United States)

    Rosner, Zachary A; Elman, Jeremy A; Shimamura, Arthur P

    2013-01-01

    The generation effect is a robust memory phenomenon in which actively producing material during encoding acts to improve later memory performance. In a functional magnetic resonance imaging (fMRI) analysis, we explored the neural basis of this effect. During encoding, participants generated synonyms from word-fragment cues (e.g., GARBAGE-W_ST_) or read other synonym pairs (e.g., GARBAGE-WASTE). Compared to simply reading target words, generating target words significantly improved later recognition memory performance. During encoding, this benefit was associated with a broad neural network that involved both prefrontal (inferior frontal gyrus, middle frontal gyrus) and posterior cortex (inferior temporal gyrus, lateral occipital cortex, parahippocampal gyrus, ventral posterior parietal cortex). These findings define the prefrontal-posterior cortical dynamics associated with the mnemonic benefits underlying the generation effect. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Mixed Mode Oscillations and Synchronous Activity in Noise Induced Modified Morris-Lecar Neural System

    Science.gov (United States)

    Upadhyay, Ranjit Kumar; Mondal, Argha; Teka, Wondimu W.

    The modified three-dimensional (3D) Morris-Lecar (M-L) model is very useful for understanding the spiking activities of neurons. The present article addresses the random dynamical behavior of a modified M-L model driven by white Gaussian noise with mean zero and unit spectral density. The applied stimulus can be expressed as a random term: such random perturbations are represented by a white Gaussian noise current added to the membrane potential equation of the excitatory principal cells. The properties of the perturbed stochastic system and noise-induced mixed-mode oscillations are analyzed, and the Lyapunov spectrum is computed to characterize the nature of the system dynamics. The noise intensity is varied while the predominant model parameters are held fixed within their ranges, and the resulting changes in the dynamical behavior of the system are observed. Dynamical synchronization is studied in coupled M-L systems interconnected by excitatory and inhibitory neurons with noisy electrical coupling and verified with similarity functions. The results point to the potential role of noise and noise-induced oscillations, which have been observed in real neurons, and show how noise affects the dynamics of the single neural model as well as the coupled systems. The analysis shows that the modified M-L system, which has limit cycle behavior, can exhibit a type of phase-locking behavior that follows either period-adding (i.e. 1:1, 2:1, 3:1, 4:1) sequences or Farey sequences. For the coupled neural systems, complete synchronization is shown for sufficiently strong noisy coupling.
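
    For orientation, the sketch below integrates the classic two-variable Morris-Lecar equations with an additive white Gaussian noise current using the Euler-Maruyama scheme; it is not the modified 3D model analyzed in the paper, and the parameter set and noise amplitude are assumed for illustration (Python).

        # Euler-Maruyama sketch of a Morris-Lecar neuron driven by white Gaussian
        # noise current (the classic 2-variable model, not the paper's modified 3D one).
        import numpy as np

        def morris_lecar_noisy(T=2000.0, dt=0.05, I0=90.0, sigma=5.0, seed=1):
            # Standard Hopf-regime parameter set (assumed for illustration)
            C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0
            VCa, VK, VL = 120.0, -84.0, -60.0
            V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

            rng = np.random.default_rng(seed)
            n = int(T / dt)
            V, w = -60.0, 0.0
            V_trace = np.empty(n)
            for k in range(n):
                m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
                w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
                tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
                dV = (I0 - gCa * m_inf * (V - VCa) - gK * w * (V - VK) - gL * (V - VL)) / C
                dw = phi * (w_inf - w) / tau_w
                # Additive white Gaussian noise enters the voltage equation
                V += dt * dV + (sigma / C) * np.sqrt(dt) * rng.normal()
                w += dt * dw
                V_trace[k] = V
            return V_trace

        V = morris_lecar_noisy()
        print("spikes (upward crossings of 0 mV):", np.sum((V[:-1] < 0) & (V[1:] >= 0)))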

  10. Non-Euclidean properties of spike train metric spaces.

    Science.gov (United States)

    Aronov, Dmitriy; Victor, Jonathan D

    2004-06-01

    Quantifying the dissimilarity (or distance) between two sequences is essential to the study of action potential (spike) trains in neuroscience and genetic sequences in molecular biology. In neuroscience, traditional methods for sequence comparisons rely on techniques appropriate for multivariate data, which typically assume that the space of sequences is intrinsically Euclidean. More recently, metrics that do not make this assumption have been introduced for comparison of neural activity patterns. These metrics have a formal resemblance to those used in the comparison of genetic sequences. Yet the relationship between such metrics and the traditional Euclidean distances has remained unclear. We show, both analytically and computationally, that the geometries associated with metric spaces of event sequences are intrinsically non-Euclidean. Our results demonstrate that metric spaces enrich the study of neural activity patterns, since accounting for perceptual spaces requires a non-Euclidean geometry.
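
    One widely used non-Euclidean spike-train metric of the kind discussed here is the Victor-Purpura cost-based edit distance, sketched below; the implementation and the example spike times are purely illustrative (Python).

        # Sketch of the Victor-Purpura cost-based edit distance between two spike
        # trains, one standard metric of the kind discussed here (illustrative).
        import numpy as np

        def victor_purpura(t1, t2, q):
            """Edit distance: inserting/deleting a spike costs 1, shifting costs q*|dt|."""
            n, m = len(t1), len(t2)
            D = np.zeros((n + 1, m + 1))
            D[:, 0] = np.arange(n + 1)   # delete all spikes of t1
            D[0, :] = np.arange(m + 1)   # insert all spikes of t2
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    D[i, j] = min(D[i - 1, j] + 1,
                                  D[i, j - 1] + 1,
                                  D[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
            return D[n, m]

        # q sets the temporal precision: q = 0 compares only spike counts,
        # large q treats spikes at different times as unrelated events.
        a = [0.010, 0.250, 0.400]            # spike times in seconds
        b = [0.015, 0.260, 0.700]
        print(victor_purpura(a, b, q=10.0))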

  11. The effects of gratitude expression on neural activity.

    Science.gov (United States)

    Kini, Prathik; Wong, Joel; McInnis, Sydney; Gabana, Nicole; Brown, Joshua W

    2016-03-01

    Gratitude is a common aspect of social interaction, yet relatively little is known about the neural bases of gratitude expression, nor how gratitude expression may lead to longer-term effects on brain activity. To address these twin issues, we recruited subjects who coincidentally were entering psychotherapy for depression and/or anxiety. One group participated in a gratitude writing intervention, which required them to write letters expressing gratitude. The therapy-as-usual control group did not perform a writing intervention. After three months, subjects performed a "Pay It Forward" task in the fMRI scanner. In the task, subjects were repeatedly endowed with a monetary gift and then asked to pass it on to a charitable cause to the extent they felt grateful for the gift. Operationalizing gratitude as monetary gifts allowed us to engage the subjects and quantify the gratitude expression for subsequent analyses. We measured brain activity and found regions where activity correlated with self-reported gratitude experience during the task, even including related constructs such as guilt motivation and desire to help as statistical controls. These were mostly distinct from brain regions activated by empathy or theory of mind. Also, our between groups cross-sectional study found that a simple gratitude writing intervention was associated with significantly greater and lasting neural sensitivity to gratitude - subjects who participated in gratitude letter writing showed both behavioral increases in gratitude and significantly greater neural modulation by gratitude in the medial prefrontal cortex three months later. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Multineuronal Spike Sequences Repeat with Millisecond Precision

    Directory of Open Access Journals (Sweden)

    Koki eMatsumoto

    2013-06-01

    Full Text Available Cortical microcircuits are nonrandomly wired by neurons. As a natural consequence, spikes emitted by microcircuits are also nonrandomly patterned in time and space. One of the prominent spike organizations is a repetition of fixed patterns of spike series across multiple neurons. However, several questions remain unsolved, including how precisely spike sequences repeat, how the sequences are spatially organized, how many neurons participate in sequences, and how different sequences are functionally linked. To address these questions, we monitored spontaneous spikes of hippocampal CA3 neurons ex vivo using a high-speed functional multineuron calcium imaging technique that allowed us to monitor spikes with millisecond resolution and to record the location of spiking and nonspiking neurons. Multineuronal spike sequences were overrepresented in spontaneous activity compared to the statistical chance level. Approximately 75% of neurons participated in at least one sequence during our observation period. The participants were sparsely dispersed and did not show specific spatial organization. The number of sequences relative to the chance level decreased when larger time frames were used to detect sequences. Thus, sequences were precise at the millisecond level. Sequences often shared common spikes with other sequences; parts of sequences were subsequently relayed by following sequences, generating complex chains of multiple sequences.

  13. Long-range correlations in rabbit brain neural activity.

    Science.gov (United States)

    de la Fuente, I M; Perez-Samartin, A L; Martínez, L; Garcia, M A; Vera-Lopez, A

    2006-02-01

    We have analyzed the presence of persistence properties in rabbit brain electrical signals by means of non-equilibrium statistical physics tools. To measure long-memory properties of these experimental signals, we have first determined whether the data are fractional Gaussian noise (fGn) or fractional Brownian motion (fBm) by calculating the slope of the power spectral density plot of the series. The results show that the series correspond to fBm. Then, the data were studied by means of the bridge detrended scaled windowed variance analysis, detecting long-term correlation. Three different types of experimental signals have been studied: neural basal activity without stimulation, the response induced by a single flash light stimulus and the average of the activity evoked by 200 flash light stimulations. Analysis of the series revealed the existence of persistent behavior in all cases. Moreover, the results also exhibited an increasing correlation in the level of long-term memory from recordings without stimulation, to one sweep recording or 200 sweeps averaged recordings. Thus, brain neural electrical activity is affected not only by its most recent states, but also by previous states much more distant in the past.
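
    A simplified version of the two analysis steps described above (classifying a series as fGn or fBm from its spectral slope, then estimating long-range correlation with a bridge-detrended scaled windowed variance) might look like the following sketch; the window sizes, Welch settings, and function names are assumptions, not the authors' exact procedure (Python).

        # Sketch of fGn/fBm classification by spectral slope and a bridge-detrended
        # scaled windowed variance (SWV) estimate of the Hurst exponent (assumed,
        # simplified version of the analyses described above).
        import numpy as np
        from scipy.signal import welch

        def spectral_exponent(x, fs):
            """Slope beta of the log-log PSD; roughly, |beta| < 1 -> fGn, 1 < beta < 3 -> fBm."""
            f, pxx = welch(x, fs=fs, nperseg=min(len(x), 1024))
            keep = f > 0
            slope, _ = np.polyfit(np.log10(f[keep]), np.log10(pxx[keep]), 1)
            return -slope  # beta, since PSD ~ 1/f^beta

        def swv_hurst(x, window_sizes=(16, 32, 64, 128, 256)):
            """Bridge-detrended scaled windowed variance estimate of H for an fBm series."""
            sds = []
            for n in window_sizes:
                w = x[: (len(x) // n) * n].reshape(-1, n)
                # Bridge detrend: subtract the line joining each window's endpoints
                ramp = np.linspace(0.0, 1.0, n)
                w = w - (w[:, [0]] + (w[:, [-1]] - w[:, [0]]) * ramp)
                sds.append(np.mean(np.std(w, axis=1)))
            H, _ = np.polyfit(np.log(window_sizes), np.log(sds), 1)
            return H

        # Example: a random walk (an fBm with H close to 0.5) built from white noise
        x = np.cumsum(np.random.default_rng(2).normal(size=8192))
        print(spectral_exponent(x, fs=1.0), swv_hurst(x))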

  14. Time Multiplexed Active Neural Probe with 1356 Parallel Recording Sites

    Directory of Open Access Journals (Sweden)

    Bogdan C. Raducanu

    2017-10-01

    Full Text Available We present a high-electrode-density and high-channel-count CMOS (complementary metal-oxide-semiconductor) active neural probe containing 1344 neuron-sized recording pixels (20 µm × 20 µm) and 12 reference pixels (20 µm × 80 µm), densely packed on a 50 µm thick, 100 µm wide, and 8 mm long shank. The active electrodes or pixels consist of dedicated in-situ circuits for signal source amplification, which are located directly under each electrode. The probe supports the simultaneous recording of all 1356 electrodes with sufficient signal-to-noise ratio for typical neuroscience applications. For enhanced performance, further noise reduction can be achieved while using half of the electrodes (678). Both of these numbers considerably surpass state-of-the-art active neural probes in both electrode count and number of recording channels. The measured input-referred noise in the action potential band is 12.4 µVrms while using 678 electrodes, with just 3 µW power dissipation per pixel and 45 µW per read-out channel (including data transmission).

  15. How adaptation shapes spike rate oscillations in recurrent neuronal networks

    Directory of Open Access Journals (Sweden)

    Moritz eAugustin

    2013-02-01

    Full Text Available Neural mass signals from in-vivo recordings often show oscillations with frequencies ranging from <1 Hz to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network-based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance-based synapses with heterogeneous strengths and delays, we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation and oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network-based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.
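
    The adaptation mechanism at the heart of this analysis can be illustrated with a single integrate-and-fire neuron carrying a spike-triggered adaptation current, as in the sketch below; the paper itself studies full recurrent networks with a mean-field approach, and all parameter values here are illustrative (Python).

        # Single-neuron sketch of a spike-triggered adaptation current in an
        # integrate-and-fire model (the paper analyzes recurrent networks via a
        # mean-field approach; this only illustrates the adaptation mechanism).
        import numpy as np

        def adaptive_lif(I, dt=0.1, tau_m=20.0, tau_a=200.0, b=0.5,
                         v_rest=-70.0, v_thresh=-50.0, v_reset=-60.0):
            v, a = v_rest, 0.0
            spike_times = []
            for n, i_ext in enumerate(I):
                # Membrane equation with a subtractive adaptation current a
                v += dt * (-(v - v_rest) + i_ext - a) / tau_m
                a += dt * (-a / tau_a)          # adaptation decays with slow tau_a
                if v >= v_thresh:
                    v = v_reset
                    a += b                      # spike-triggered increment
                    spike_times.append(n * dt)
            return np.array(spike_times)

        # Constant drive: the inter-spike intervals lengthen as adaptation builds up
        st = adaptive_lif(np.full(20000, 30.0))
        print(np.diff(st)[:5].round(1), "...", np.diff(st)[-3:].round(1))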

  16. Sociocultural patterning of neural activity during self-reflection

    DEFF Research Database (Denmark)

    Ma, Yina; Bang, Dan; Wang, Chenbo

    2014-01-01

    Western cultures encourage self-construals independent of social contexts whereas East Asian cultures foster interdependent self-construals that rely on how others perceive the self. How are culturally specific self-construals mediated by the human brain? Using functional MRI, we monitored neural...... that judgments of self vs. a public figure elicited greater activation in the medial prefrontal cortex (mPFC) in Danish than in Chinese participants regardless of attribute dimensions for judgments. However, self-judgments of social attributes induced greater activity in the temporoparietal junction (TPJ......) in Chinese than in Danish participants. Moreover, the group difference in TPJ activity was mediated by a measure of a cultural value (i.e., interdependence of self-construal). Our findings suggest that individuals in different sociocultural contexts may learn and/or adopt distinct strategies for self...

  17. Evoking prescribed spike times in stochastic neurons

    Science.gov (United States)

    Doose, Jens; Lindner, Benjamin

    2017-09-01

    Single cell stimulation in vivo is a powerful tool to investigate the properties of single neurons and their functionality in neural networks. We present a method to determine a cell-specific stimulus that reliably evokes a prescribed spike train with high temporal precision of action potentials. We test the performance of this stimulus in simulations for two different stochastic neuron models. For a broad range of parameters and a neuron firing with intermediate firing rates (20-40 Hz) the reliability in evoking the prescribed spike train is close to its theoretical maximum that is mainly determined by the level of intrinsic noise.

  18. Towards statistical summaries of spike train data.

    Science.gov (United States)

    Wu, Wei; Srivastava, Anuj

    2011-01-30

    Statistical inference has an important role in analysis of neural spike trains. While current approaches are mostly model-based, and designed for capturing the temporal evolution of the underlying stochastic processes, we focus on a data-driven approach where statistics are defined and computed in function spaces where individual spike trains are viewed as points. The first contribution of this paper is to endow spike train space with a parameterized family of metrics that takes into account different time warpings and generalizes several currently used metrics. These metrics are essentially penalized L^p norms, involving appropriate functions of spike trains, with penalties associated with time-warpings. The second contribution of this paper is to derive a notion of a mean spike train in the case when p = 2. We present an efficient recursive algorithm, termed the Matching-Minimization algorithm, to compute the sample mean of a set of spike trains. The proposed metrics as well as the mean computations are demonstrated using an experimental recording from the motor cortex. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. Sensory-related neural activity regulates the structure of vascular networks in the cerebral cortex

    Science.gov (United States)

    Lacoste, Baptiste; Comin, Cesar H.; Ben-Zvi, Ayal; Kaeser, Pascal S.; Xu, Xiaoyin; Costa, Luciano da F.; Gu, Chenghua

    2014-01-01

    SUMMARY Neurovascular interactions are essential for proper brain function. While the effect of neural activity on cerebral blood flow has been extensively studied, whether neural activity influences vascular patterning remains elusive. Here, we demonstrate that neural activity promotes the formation of vascular networks in the early postnatal mouse barrel cortex. Using a combination of genetics, imaging, and computational tools to allow simultaneous analysis of neuronal and vascular components, we found that vascular density and branching were decreased in the barrel cortex when sensory input was reduced by either a complete deafferentation, a genetic impairment of neurotransmitter release at thalamocortical synapses, or a selective reduction of sensory-related neural activity by whisker plucking. In contrast, enhancement of neural activity by whisker stimulation led to an increase in vascular density and branching. The finding that neural activity is necessary and sufficient to trigger alterations of vascular networks reveals a novel feature of neurovascular interactions. PMID:25155955

  20. Sensory-related neural activity regulates the structure of vascular networks in the cerebral cortex.

    Science.gov (United States)

    Lacoste, Baptiste; Comin, Cesar H; Ben-Zvi, Ayal; Kaeser, Pascal S; Xu, Xiaoyin; Costa, Luciano da F; Gu, Chenghua

    2014-09-03

    Neurovascular interactions are essential for proper brain function. While the effect of neural activity on cerebral blood flow has been extensively studied, whether or not neural activity influences vascular patterning remains elusive. Here, we demonstrate that neural activity promotes the formation of vascular networks in the early postnatal mouse barrel cortex. Using a combination of genetics, imaging, and computational tools to allow simultaneous analysis of neuronal and vascular components, we found that vascular density and branching were decreased in the barrel cortex when sensory input was reduced by either a complete deafferentation, a genetic impairment of neurotransmitter release at thalamocortical synapses, or a selective reduction of sensory-related neural activity by whisker plucking. In contrast, enhancement of neural activity by whisker stimulation led to an increase in vascular density and branching. The finding that neural activity is necessary and sufficient to trigger alterations of vascular networks reveals an important feature of neurovascular interactions. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Neural activity in the hippocampus during conflict resolution.

    Science.gov (United States)

    Sakimoto, Yuya; Okada, Kana; Hattori, Minoru; Takeda, Kozue; Sakata, Shogo

    2013-01-15

    This study examined configural association theory and conflict resolution models in relation to hippocampal neural activity during positive patterning tasks. According to configural association theory, the hippocampus is important for responses to compound stimuli in positive patterning tasks. In contrast, according to the conflict resolution model, the hippocampus is important for responses to single stimuli in positive patterning tasks. We hypothesized that if configural association theory is applicable, and not the conflict resolution model, the hippocampal theta power should be increased when compound stimuli are presented. If, on the other hand, the conflict resolution model is applicable, but not configural association theory, then the hippocampal theta power should be increased when single stimuli are presented. If both models are valid and applicable in the positive patterning task, we predict that the hippocampal theta power should be increased by presentation of both compound and single stimuli during the positive patterning task. To examine our hypotheses, we measured hippocampal theta power in rats during a positive patterning task. The results showed that hippocampal theta power increased during the presentation of a single stimulus, but did not increase during the presentation of a compound stimulus. This finding suggests that the conflict resolution model is more applicable than the configural association theory for describing neural activity during positive patterning tasks. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. The versatility of RhoA activities in neural differentiation.

    Science.gov (United States)

    Horowitz, Arie; Yang, Junning; Cai, Jingli; Iacovitti, Lorraine

    2017-01-26

    In this commentary we discuss a paper we published recently on the activities of the GTPase RhoA during neural differentiation of murine embryonic stem cells, and relate our findings to previous studies. We narrate how we found that RhoA impedes neural differentiation by inhibiting the production as well as the secretion of noggin, a soluble factor that antagonizes bone morphogenetic protein. We discuss how the questions we tried to address shaped the study, and how embryonic stem cells isolated from a genetically modified mouse model devoid of Syx, a RhoA-specific guanine exchange factor, were used to address them. We detail several signaling pathways downstream of RhoA that are hindered by the absence of Syx, and obstructed by retinoic acid, resulting in an increase of noggin production; we explain how the lower RhoA activity and, consequently, the sparser peri-junctional stress fibers in Syx -/- cells facilitated noggin secretion; and we report unpublished results showing that pharmacological inhibition of RhoA accelerates the neuronal differentiation of human embryonic stem cells. Finally, we identify signaling mechanisms in our recent study that warrant further study, and speculate on the possibility of manipulating RhoA signaling in combination with other pathways to drive the differentiation of neuronal subtypes.

  3. Persistence and storage of activity patterns in spiking recurrent cortical networks:Modulation of sigmoid signals by after-hyperpolarization currents and acetylcholine

    Directory of Open Access Journals (Sweden)

    Jesse ePalma

    2012-06-01

    Full Text Available Many cortical networks contain recurrent architectures that transform input patterns before storing them in short-term memory (STM). Theorems in the 1970s showed how feedback signal functions in rate-based recurrent on-center off-surround networks control this process. A sigmoid signal function induces a quenching threshold below which inputs are suppressed as noise and above which they are contrast-enhanced before pattern storage. This article describes how changes in feedback signaling, neuromodulation, and recurrent connectivity may alter pattern processing in recurrent on-center off-surround networks of spiking neurons. In spiking neurons, fast, medium, and slow after-hyperpolarization (AHP) currents control sigmoid signal threshold and slope. Modulation of AHP currents by acetylcholine (ACh) can change sigmoid shape and, with it, network dynamics. For example, decreasing signal function threshold and increasing slope can lengthen the persistence of a partially contrast-enhanced pattern, increase the number of active cells stored in STM, or, if connectivity is distance-dependent, cause cell activities to cluster. These results clarify how cholinergic modulation by the basal forebrain may alter the vigilance of category learning circuits, and thus their sensitivity to predictive mismatches, thereby controlling whether learned categories code concrete or abstract features, as predicted by Adaptive Resonance Theory. The analysis includes global, distance-dependent, and interneuron-mediated circuits. With an appropriate degree of recurrent excitation and inhibition, spiking networks maintain a partially contrast-enhanced pattern for 800 milliseconds or longer after stimuli offset, then resolve to no stored pattern, or to winner-take-all stored patterns with one or multiple winners. Strengthening inhibition prolongs a partially contrast-enhanced pattern by slowing the transition to stability, while strengthening excitation causes more winners

  4. Synaptically activated Ca2+ waves and NMDA spikes locally suppress voltage-dependent Ca2+ signalling in rat pyramidal cell dendrites.

    Science.gov (United States)

    Manita, Satoshi; Miyazaki, Kenichi; Ross, William N

    2011-10-15

    Postsynaptic [Ca(2+)](i) changes contribute to several kinds of plasticity in pyramidal neurons. We examined the effects of synaptically activated Ca(2+) waves and NMDA spikes on subsequent Ca(2+) signalling in CA1 pyramidal cell dendrites in hippocampal slices. Tetanic synaptic stimulation evoked a localized Ca(2+) wave in the primary apical dendrites. The [Ca(2+)](i) increase from a backpropagating action potential (bAP) or subthreshold depolarization was reduced if it was generated immediately after the wave. The suppression had a recovery time of 30-60 s. The suppression only occurred where the wave was generated and was not due to a change in bAP amplitude or shape. The suppression also could be generated by Ca(2+) waves evoked by uncaging IP(3), showing that other signalling pathways activated by the synaptic tetanus were not required. The suppression was proportional to the amplitude of the [Ca(2+)](i) change of the Ca(2+) wave and was not blocked by a spectrum of kinase or phosphatase inhibitors, consistent with suppression due to Ca(2+)-dependent inactivation of Ca(2+) channels. The waves also reduced the frequency and amplitude of spontaneous, localized Ca(2+) release events in the dendrites by a different mechanism, probably by depleting the stores at the site of wave generation. The same synaptic tetanus often evoked NMDA spike-mediated [Ca(2+)](i) increases in the oblique dendrites where Ca(2+) waves do not propagate. These NMDA spikes suppressed the [Ca(2+)](i) increase caused by bAPs in those regions. [Ca(2+)](i) increases by Ca(2+) entry through voltage-gated Ca(2+) channels also suppressed the [Ca(2+)](i) increases from subsequent bAPs in regions where the voltage-gated [Ca(2+)](i) increases were largest, showing that all ways of raising [Ca(2+)](i) could cause suppression.

  5. Stimulus-dependent spiking relationships with the EEG

    Science.gov (United States)

    Snyder, Adam C.

    2015-01-01

    The development and refinement of noninvasive techniques for imaging neural activity is of paramount importance for human neuroscience. Currently, the most accessible and popular technique is electroencephalography (EEG). However, nearly all of what we know about the neural events that underlie EEG signals is based on inference, because of the dearth of studies that have simultaneously paired EEG recordings with direct recordings of single neurons. From the perspective of electrophysiologists there is growing interest in understanding how spiking activity coordinates with large-scale cortical networks. Evidence from recordings at both scales highlights that sensory neurons operate in very distinct states during spontaneous and visually evoked activity, which appear to form extremes in a continuum of coordination in neural networks. We hypothesized that individual neurons have idiosyncratic relationships to large-scale network activity indexed by EEG signals, owing to the neurons' distinct computational roles within the local circuitry. We tested this by recording neuronal populations in visual area V4 of rhesus macaques while we simultaneously recorded EEG. We found substantial heterogeneity in the timing and strength of spike-EEG relationships and that these relationships became more diverse during visual stimulation compared with the spontaneous state. The visual stimulus apparently shifts V4 neurons from a state in which they are relatively uniformly embedded in large-scale network activity to a state in which their distinct roles within the local population are more prominent, suggesting that the specific way in which individual neurons relate to EEG signals may hold clues regarding their computational roles. PMID:26108954

  6. Capturing Spike Variability in Noisy Izhikevich Neurons Using Point Process Generalized Linear Models.

    Science.gov (United States)

    Østergaard, Jacob; Kramer, Mark A; Eden, Uri T

    2018-01-01

    To understand neural activity, two broad categories of models exist: statistical and dynamical. While statistical models possess rigorous methods for parameter estimation and goodness-of-fit assessment, dynamical models provide mechanistic insight. In general, these two categories of models are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
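
    A minimal sketch of this kind of statistical model, a Poisson GLM with spike-history covariates fitted to a binned spike train, is given below; the bin construction, history length, and toy data are illustrative assumptions rather than the letter's exact setup (Python).

        # Sketch of fitting a point-process (Poisson) GLM with spike-history covariates
        # to a binned spike train, in the spirit of the statistical model used above
        # (bin size, history length, and toy data are illustrative assumptions).
        import numpy as np
        import statsmodels.api as sm

        def fit_history_glm(spikes, n_lags=10):
            """spikes: 0/1 array of spike counts per time bin."""
            # Design matrix: columns are the spike counts in the previous n_lags bins
            X = np.column_stack([np.roll(spikes, lag) for lag in range(1, n_lags + 1)])
            X[:n_lags, :] = 0                      # discard wrapped-around history
            X = sm.add_constant(X)
            model = sm.GLM(spikes, X, family=sm.families.Poisson())
            return model.fit()

        # Toy data: spiking probability is reduced for a few bins after each spike
        rng = np.random.default_rng(3)
        spikes, last = np.zeros(5000), -100
        for t in range(5000):
            p = 0.005 if t - last <= 3 else 0.05
            if rng.random() < p:
                spikes[t], last = 1.0, t
        res = fit_history_glm(spikes)
        print(res.params[:4])  # constant plus lags 1-3; negative lags reflect refractoriness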

  7. Neural oscillations: beta band activity across motor networks.

    Science.gov (United States)

    Khanna, Preeya; Carmena, Jose M

    2015-06-01

    Local field potential (LFP) activity in motor cortical and basal ganglia regions exhibits prominent beta (15-40Hz) oscillations during reaching and grasping, muscular contraction, and attention tasks. While in vitro and computational work has revealed specific mechanisms that may give rise to the frequency and duration of this oscillation, there is still controversy about what behavioral processes ultimately drive it. Here, simultaneous behavioral and large-scale neural recording experiments from non-human primate and human subjects are reviewed in the context of specific hypotheses about how beta band activity is generated. Finally, a new experimental paradigm utilizing operant conditioning combined with motor tasks is proposed as a way to further investigate this oscillation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Social power and approach-related neural activity

    Science.gov (United States)

    Smolders, Ruud; Cremer, David De

    2012-01-01

    It has been argued that power activates a general tendency to approach whereas powerlessness activates a tendency to inhibit. The assumption is that elevated power involves reward-rich environments, freedom and, as a consequence, triggers an approach-related motivational orientation and attention to rewards. In contrast, reduced power is associated with increased threat, punishment and social constraint and thereby activates inhibition-related motivation. Moreover, approach motivation has been found to be associated with increased relative left-sided frontal brain activity, while withdrawal motivation has been associated with increased right sided activations. We measured EEG activity while subjects engaged in a task priming either high or low social power. Results show that high social power is indeed associated with greater left-frontal brain activity compared to low social power, providing the first neural evidence for the theory that high power is associated with approach-related motivation. We propose a framework accounting for differences in both approach motivation and goal-directed behaviour associated with different levels of power. PMID:19304842

  9. Modulation of Neural Activity during Guided Viewing of Visual Art

    Directory of Open Access Journals (Sweden)

    Guillermo Herrera-Arcos

    2017-11-01

    Full Text Available Mobile Brain-Body Imaging (MoBI) technology was deployed to record multi-modal data from 209 participants to examine the brain’s response to artistic stimuli at the Museo de Arte Contemporáneo (MARCO) in Monterrey, México. EEG signals were recorded as the subjects walked through the exhibit in guided groups of 6–8 people. Moreover, guided groups were either provided with an explanation of each art piece (Guided-E), or given no explanation (Guided-NE). The study was performed using portable Muse (InteraXon, Inc, Toronto, ON, Canada) headbands with four dry electrodes located at AF7, AF8, TP9, and TP10. Each participant performed a baseline (BL) control condition devoid of artistic stimuli and selected his/her favorite piece of art (FP) during the guided tour. In this study, we report data related to participants’ demographic information and aesthetic preference as well as effects of art viewing on neural activity (EEG) in a select subgroup of 18–30 year-old subjects (Nc = 25) that generated high-quality EEG signals, on both BL and FP conditions. Dependencies on gender, sensor placement, and presence or absence of art explanation were also analyzed. After denoising, clustering of spectral EEG models was used to identify neural patterns associated with BL and FP conditions. Results indicate statistically significant suppression of beta band frequencies (15–25 Hz) in the prefrontal electrodes (AF7 and AF8) during appreciation of subjects’ favorite painting, compared to the BL condition, which was significantly different from EEG responses to non-favorite paintings (NFP). No significant differences in brain activity in relation to the presence or absence of explanation during exhibit tours were found. Moreover, a frontal to posterior asymmetry in neural activity was observed, for both BL and FP conditions. These findings provide new information about frequency-related effects of preferred art viewing in brain activity, and support the view that art

  10. Modulation of Neural Activity during Guided Viewing of Visual Art

    Science.gov (United States)

    Herrera-Arcos, Guillermo; Tamez-Duque, Jesús; Acosta-De-Anda, Elsa Y.; Kwan-Loo, Kevin; de-Alba, Mayra; Tamez-Duque, Ulises; Contreras-Vidal, Jose L.; Soto, Rogelio

    2017-01-01

    Mobile Brain-Body Imaging (MoBI) technology was deployed to record multi-modal data from 209 participants to examine the brain’s response to artistic stimuli at the Museo de Arte Contemporáneo (MARCO) in Monterrey, México. EEG signals were recorded as the subjects walked through the exhibit in guided groups of 6–8 people. Moreover, guided groups were either provided with an explanation of each art piece (Guided-E), or given no explanation (Guided-NE). The study was performed using portable Muse (InteraXon, Inc, Toronto, ON, Canada) headbands with four dry electrodes located at AF7, AF8, TP9, and TP10. Each participant performed a baseline (BL) control condition devoid of artistic stimuli and selected his/her favorite piece of art (FP) during the guided tour. In this study, we report data related to participants’ demographic information and aesthetic preference as well as effects of art viewing on neural activity (EEG) in a select subgroup of 18–30 year-old subjects (Nc = 25) that generated high-quality EEG signals, on both BL and FP conditions. Dependencies on gender, sensor placement, and presence or absence of art explanation were also analyzed. After denoising, clustering of spectral EEG models was used to identify neural patterns associated with BL and FP conditions. Results indicate statistically significant suppression of beta band frequencies (15–25 Hz) in the prefrontal electrodes (AF7 and AF8) during appreciation of subjects’ favorite painting, compared to the BL condition, which was significantly different from EEG responses to non-favorite paintings (NFP). No significant differences in brain activity in relation to the presence or absence of explanation during exhibit tours were found. Moreover, a frontal to posterior asymmetry in neural activity was observed, for both BL and FP conditions. These findings provide new information about frequency-related effects of preferred art viewing in brain activity, and support the view that art

  11. Modulation of Neural Activity during Guided Viewing of Visual Art.

    Science.gov (United States)

    Herrera-Arcos, Guillermo; Tamez-Duque, Jesús; Acosta-De-Anda, Elsa Y; Kwan-Loo, Kevin; de-Alba, Mayra; Tamez-Duque, Ulises; Contreras-Vidal, Jose L; Soto, Rogelio

    2017-01-01

    Mobile Brain-Body Imaging (MoBI) technology was deployed to record multi-modal data from 209 participants to examine the brain's response to artistic stimuli at the Museo de Arte Contemporáneo (MARCO) in Monterrey, México. EEG signals were recorded as the subjects walked through the exhibit in guided groups of 6-8 people. Moreover, guided groups were either provided with an explanation of each art piece (Guided-E), or given no explanation (Guided-NE). The study was performed using portable Muse (InteraXon, Inc, Toronto, ON, Canada) headbands with four dry electrodes located at AF7, AF8, TP9, and TP10. Each participant performed a baseline (BL) control condition devoid of artistic stimuli and selected his/her favorite piece of art (FP) during the guided tour. In this study, we report data related to participants' demographic information and aesthetic preference as well as effects of art viewing on neural activity (EEG) in a select subgroup of 18-30 year-old subjects (Nc = 25) that generated high-quality EEG signals, on both BL and FP conditions. Dependencies on gender, sensor placement, and presence or absence of art explanation were also analyzed. After denoising, clustering of spectral EEG models was used to identify neural patterns associated with BL and FP conditions. Results indicate statistically significant suppression of beta band frequencies (15-25 Hz) in the prefrontal electrodes (AF7 and AF8) during appreciation of subjects' favorite painting, compared to the BL condition, which was significantly different from EEG responses to non-favorite paintings (NFP). No significant differences in brain activity in relation to the presence or absence of explanation during exhibit tours were found. Moreover, a frontal to posterior asymmetry in neural activity was observed, for both BL and FP conditions. These findings provide new information about frequency-related effects of preferred art viewing in brain activity, and support the view that art appreciation is

  12. Relationships between spike-free local field potentials and spike timing in human temporal cortex.

    Science.gov (United States)

    Zanos, Stavros; Zanos, Theodoros P; Marmarelis, Vasilis Z; Ojemann, George A; Fetz, Eberhard E

    2012-04-01

    Intracortical recordings comprise both fast events, action potentials (APs), and slower events, known as local field potentials (LFPs). Although it is believed that LFPs mostly reflect local synaptic activity, it is unclear which of their signal components are most closely related to synaptic potentials and would therefore be causally related to the occurrence of individual APs. This issue is complicated by the significant contribution from AP waveforms, especially at higher LFP frequencies. In recordings of single-cell activity and LFPs from the human temporal cortex, we computed quantitative, nonlinear, causal dynamic models for the prediction of AP timing from LFPs, at millisecond resolution, before and after removing AP contributions to the LFP. In many cases, the timing of a significant number of single APs could be predicted from spike-free LFPs at different frequencies. Not surprisingly, model performance was superior when spikes were not removed. Cells whose activity was predicted by the spike-free LFP models generally fell into one of two groups: in the first group, neuronal spike activity was associated with specific phases of low LFP frequencies, lower spike activity at high LFP frequencies, and a stronger linear component in the spike-LFP model; in the second group, neuronal spike activity was associated with larger amplitude of high LFP frequencies, less frequent phase locking, and a stronger nonlinear model component. Spike timing in the first group was better predicted by the sign and level of the LFP preceding the spike, whereas spike timing in the second group was better predicted by LFP power during a certain time window before the spike.

  13. Maturation of spiking activity in trout retinal ganglion cells coincides with upregulation of Kv3.1- and BK-related potassium channels.

    Science.gov (United States)

    Henne, Jutta; Jeserich, Gunnar

    2004-01-01

    Developmental changes in membrane excitability and the potassium channel profile were monitored in acutely isolated trout retinal ganglion cells by patch-clamp recording in combination with single-cell RT-PCR. During embryonic development in the egg, a sustained above-threshold stimulation of ganglion cells elicited in most cases only a single spike response. After hatching, the proportion of multiply spiking cells increased strongly and the ability of spike frequency coding was acquired. This was accompanied by the occurrence of a highly tetraethylammonium (TEA)- and quinine-sensitive delayed rectifier current, which gradually masked a rapidly inactivating A-type potassium current that was predominant at earlier stages. Pharmacology of the delayed rectifier current closely matched those of recombinant Traw1, a Kv3.1-related potassium channel in trout. The appearance of this current correlated closely with initial expression of Traw1 and Traw2 channel transcripts, as revealed by multiplex single-cell RT-PCR, whereas mRNA, encoding Shaker-related channel genes in trout (termed Tsha1-Tsha4), were already detectable at early embryonic stages. Iberiotoxin-sensitive, calcium-activated potassium currents (BK) were extremely low before hatching, but increased significantly thereafter. These developmental changes in potassium channel expression occurred after the arrival of retinal fibers in the optic tectum and the initiation of synapse formation in the visual center. It is suggested that early expressed Shaker-related potassium channels could act to influence neuronal differentiation, whereas proper neuronal signaling requires expression of Kv3.1- and BK-related potassium channels. Copyright 2003 Wiley-Liss, Inc.

  14. Chitinase-resistant hydrophilic symbiotic factors secreted by Frankia activate both Ca(2+) spiking and NIN gene expression in the actinorhizal plant Casuarina glauca.

    Science.gov (United States)

    Chabaud, Mireille; Gherbi, Hassen; Pirolles, Elodie; Vaissayre, Virginie; Fournier, Joëlle; Moukouanga, Daniel; Franche, Claudine; Bogusz, Didier; Tisa, Louis S; Barker, David G; Svistoonoff, Sergio

    2016-01-01

    Although it is now well-established that decorated lipo-chitooligosaccharide Nod factors are the key rhizobial signals which initiate infection/nodulation in host legume species, the identity of the equivalent microbial signaling molecules in the Frankia/actinorhizal association remains elusive. With the objective of identifying Frankia symbiotic factors we present a novel approach based on both molecular and cellular pre-infection reporters expressed in the model actinorhizal species Casuarina glauca. By introducing the nuclear-localized cameleon Nup-YC2.1 into Casuarina glauca we show that cell-free culture supernatants of the compatible Frankia CcI3 strain are able to elicit sustained high frequency Ca(2+) spiking in host root hairs. Furthermore, an excellent correlation exists between the triggering of nuclear Ca(2+) spiking and the transcriptional activation of the ProCgNIN:GFP reporter as a function of the Frankia strain tested. These two pre-infection symbiotic responses have been used in combination to show that the signal molecules present in the Frankia CcI3 supernatant are hydrophilic, of low molecular weight and resistant to chitinase degradation. In conclusion, the biologically active symbiotic signals secreted by Frankia appear to be chemically distinct from the currently known chitin-based rhizobial/arbuscular mycorrhizal signaling molecules. Convenient bioassays in Casuarina glauca are now available for their full characterization. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  15. Noise-robust unsupervised spike sorting based on discriminative subspace learning with outlier handling

    Science.gov (United States)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2017-06-01

    Objective. Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, the clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. Approach. The proposed algorithm uses discriminative subspace learning to extract low-dimensional, highly discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Main results. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. Significance. By providing more accurate information about the activity of a larger number of individual neurons with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain-machine interface studies.
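
    A rough sketch of the iterative discriminative-subspace idea (refitting an LDA projection against the current cluster labels and re-clustering with a Gaussian mixture) is given below; it omits the proposed outlier handling and the statistical test for the number of clusters, and all names and parameters are illustrative (Python).

        # Rough sketch of iterative "discriminative subspace + clustering" spike sorting
        # (LDA projection refit against current labels, then GMM clustering); the
        # paper's outlier handling and automatic cluster-number test are omitted.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.mixture import GaussianMixture

        def lda_gmm_sort(waveforms, n_clusters=3, n_iter=10, seed=0):
            """waveforms: (n_spikes, n_samples) array of aligned spike snippets."""
            labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=seed).fit_predict(waveforms)
            for _ in range(n_iter):
                # Learn a discriminative low-dimensional subspace from current labels
                n_comp = min(len(np.unique(labels)) - 1, 3)
                lda = LinearDiscriminantAnalysis(n_components=n_comp)
                features = lda.fit_transform(waveforms, labels)
                # Re-cluster in the discriminative subspace
                gmm = GaussianMixture(n_components=n_clusters, random_state=seed)
                new_labels = gmm.fit_predict(features)
                if np.array_equal(new_labels, labels):
                    break
                labels = new_labels
            return labels

        # Synthetic example: three Gaussian "units" in waveform space
        rng = np.random.default_rng(1)
        centers = rng.normal(size=(3, 32))
        X = np.vstack([c + 0.3 * rng.normal(size=(200, 32)) for c in centers])
        print(np.bincount(lda_gmm_sort(X)))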

  16. Spike-timing dependent plasticity and the cognitive map.

    Science.gov (United States)

    Bush, Daniel; Philippides, Andrew; Husbands, Phil; O'Shea, Michael

    2010-01-01

    Since the discovery of place cells - single pyramidal neurons that encode spatial location - it has been hypothesized that the hippocampus may act as a cognitive map of known environments. This putative function has been extensively modeled using auto-associative networks, which utilize rate-coded synaptic plasticity rules in order to generate strong bi-directional connections between concurrently active place cells that encode for neighboring place fields. However, empirical studies using hippocampal cultures have demonstrated that the magnitude and direction of changes in synaptic strength can also be dictated by the relative timing of pre- and post-synaptic firing according to a spike-timing dependent plasticity (STDP) rule. Furthermore, electrophysiology studies have identified persistent "theta-coded" temporal correlations in place cell activity in vivo, characterized by phase precession of firing as the corresponding place field is traversed. It is not yet clear if STDP and theta-coded neural dynamics are compatible with cognitive map theory and previous rate-coded models of spatial learning in the hippocampus. Here, we demonstrate that an STDP rule based on empirical data obtained from the hippocampus can mediate rate-coded Hebbian learning when pre- and post-synaptic activity is stochastic and has no persistent sequence bias. We subsequently demonstrate that a spiking recurrent neural network that utilizes this STDP rule, alongside theta-coded neural activity, allows the rapid development of a cognitive map during directed or random exploration of an environment of overlapping place fields. Hence, we establish that STDP and phase precession are compatible with rate-coded models of cognitive map development.
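
    The kind of pairwise STDP rule discussed here is easy to write down and to average over stochastic activity. The sketch below uses generic exponential STDP parameters (not the hippocampal fit used in the study) and checks numerically that, for independent Poisson pre- and post-synaptic trains with no sequence bias, the mean weight drift scales with the product of the firing rates, i.e. behaves like a rate-coded Hebbian rule.

    ```python
    # Pairwise exponential STDP averaged over uncorrelated Poisson spike trains.
    # Amplitudes and time constants are illustrative; potentiation slightly
    # outweighs depression so the net drift is Hebbian (grows with rate product).
    import numpy as np

    A_PLUS, A_MINUS = 0.011, 0.010      # potentiation / depression amplitudes
    TAU_PLUS = TAU_MINUS = 20.0         # STDP time constants (ms)

    def mean_drift(rate_pre, rate_post, duration=10.0, dt=0.001, trials=100):
        """Average weight change for independent Poisson pre/post trains (Hz)."""
        rng = np.random.default_rng(0)
        drifts = []
        for _ in range(trials):
            pre = np.flatnonzero(rng.random(int(duration / dt)) < rate_pre * dt) * dt * 1e3
            post = np.flatnonzero(rng.random(int(duration / dt)) < rate_post * dt) * dt * 1e3
            dts = post[None, :] - pre[:, None]          # all pre->post spike-time lags (ms)
            near = np.clip(dts, -100.0, 100.0)          # only near-coincident pairs matter
            dw = np.where(near > 0,
                          A_PLUS * np.exp(-near / TAU_PLUS),     # pre before post
                          -A_MINUS * np.exp(near / TAU_MINUS))   # post before pre
            drifts.append(dw[np.abs(dts) < 100.0].sum())
        return float(np.mean(drifts))

    # Drift grows roughly with the product of the rates -> rate-coded Hebbian learning.
    print(mean_drift(5, 5), mean_drift(20, 20))
    ```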

  17. Spike-timing dependent plasticity and the cognitive map

    Directory of Open Access Journals (Sweden)

    Daniel eBush

    2010-10-01

    Full Text Available Since the discovery of place cells – single pyramidal neurons that encode spatial location – it has been hypothesised that the hippocampus may act as a cognitive map of known environments. This putative function has been extensively modelled using auto-associative networks, which utilise rate-coded synaptic plasticity rules in order to generate strong bi-directional connections between concurrently active place cells that encode for neighbouring place fields. However, empirical studies using hippocampal cultures have demonstrated that the magnitude and direction of changes in synaptic strength can also be dictated by the relative timing of pre- and post-synaptic firing according to a spike-timing dependent plasticity (STDP) rule. Furthermore, electrophysiology studies have identified persistent ‘theta-coded’ temporal correlations in place cell activity in vivo, characterised by phase precession of firing as the corresponding place field is traversed. It is not yet clear if STDP and theta-coded neural dynamics are compatible with cognitive map theory and previous rate-coded models of spatial learning in the hippocampus. Here, we demonstrate that an STDP rule based on empirical data obtained from the hippocampus can mediate rate-coded Hebbian learning when pre- and post-synaptic activity is stochastic and has no persistent sequence bias. We subsequently demonstrate that a spiking recurrent neural network that utilises this STDP rule, alongside theta-coded neural activity, allows the rapid development of a cognitive map during directed or random exploration of an environment of overlapping place fields. Hence, we establish that STDP and phase precession are compatible with rate-coded models of cognitive map development.

  18. Adaptive robotic control driven by a versatile spiking cerebellar network.

    Science.gov (United States)

    Casellato, Claudia; Antonietti, Alberto; Garrido, Jesus A; Carrillo, Richard R; Luque, Niceto R; Ros, Eduardo; Pedrocchi, Alessandra; D'Angelo, Egidio

    2014-01-01

    The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (Eye blinking classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed-loop. The SNN processed mossy fiber inputs in real time as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fibers-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimulus and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.

  19. Adaptive robotic control driven by a versatile spiking cerebellar network.

    Directory of Open Access Journals (Sweden)

    Claudia Casellato

    Full Text Available The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (Eye blinking classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed-loop. The SNN processed mossy fiber inputs in real time as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fibers-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimulus and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.

  20. Solving constraint satisfaction problems with networks of spiking neurons

    Directory of Open Access Journals (Sweden)

    Zeno eJonke

    2016-03-01

    Full Text Available Networks of neurons in the brain apply – unlike processors in our current generation of computer hardware – an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turned out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes (rather than rates of spikes). We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a quite transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes (rather than just spike rates) plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
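
    The energy-function idea can be illustrated on a toy constraint problem. The sketch below runs the Boltzmann-machine-style stochastic search that the paper uses as a comparison point (Gibbs-like flips of binary units), not the authors' spiking-network construction; the ring graph, temperature schedule and step count are arbitrary choices.

    ```python
    # Energy-based stochastic search for a toy constraint satisfaction problem:
    # 2-coloring a 4-cycle so that adjacent nodes differ. Each violated constraint
    # contributes one unit of energy; noisy unit flips perform the search.
    import numpy as np

    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]        # a 4-cycle is 2-colorable
    n = 4

    def energy(s):
        return sum(int(s[i] == s[j]) for i, j in edges)   # number of violations

    rng = np.random.default_rng(1)
    state = rng.integers(0, 2, size=n)
    temperature = 1.0
    for step in range(2000):
        i = rng.integers(n)
        proposal = state.copy()
        proposal[i] ^= 1
        dE = energy(proposal) - energy(state)
        # Stochastic unit: accept with a sigmoidal probability in -dE/T, so noise
        # acts as a computational resource that lets the search escape local minima.
        if rng.random() < 1.0 / (1.0 + np.exp(dE / temperature)):
            state = proposal
        temperature = max(0.05, temperature * 0.999)      # slow annealing
    print("coloring:", state, "violations:", energy(state))
    ```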

  1. Somatosensory evoked spikes and epileptiform activity in "normal" children

    Directory of Open Access Journals (Sweden)

    Lineu C. Fonseca

    2003-09-01

    Full Text Available Little is known about somatosensory evoked spikes (SES) in the EEG of normal children. We studied the occurrence of SES and spontaneous epileptiform activity (SEA) in 173 normal children aged 7 to 11 years. During the EEG, ten taps were applied to both hands and feet. The occurrence of high-voltage potentials evoked by each stimulation of one or both heels or hands (SES) and the occurrence of SEA were evaluated. SEA was observed in four children (2.3%): central/parietal spikes in two cases, generalized spike-and-wave complexes in one, and parietal/midtemporal spikes in one case. A ten-year-old girl (0.58%) had SES over the median parietal region evoked by tapping the left foot; this EEG was otherwise normal. Our findings of SEA are similar to those obtained in other normal populations. SES can be observed in normal children; these SES suggest that we are dealing with an age-related functional phenomenon.

  2. Nonlinear modeling of neural population dynamics for hippocampal prostheses.

    Science.gov (United States)

    Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; Hampson, Robert E; Deadwyler, Sam A; Berger, Theodore W

    2009-11-01

    Developing a neural prosthesis for the damaged hippocampus requires restoring the transformation of population neural activities performed by the hippocampal circuitry. To bypass a damaged region, output spike trains need to be predicted from the input spike trains and then reinstated through stimulation. We formulate a multiple-input, multiple-output (MIMO) nonlinear dynamic model for the input-output transformation of spike trains. In this approach, a MIMO model comprises a series of physiologically-plausible multiple-input, single-output (MISO) neuron models that consist of five components each: (1) feedforward Volterra kernels transforming the input spike trains into the synaptic potential, (2) a feedback kernel transforming the output spikes into the spike-triggered after-potential, (3) a noise term capturing the system uncertainty, (4) an adder generating the pre-threshold potential, and (5) a threshold function generating output spikes. It is shown that this model is equivalent to a generalized linear model with a probit link function. To reduce model complexity and avoid overfitting, statistical model selection and cross-validation methods are employed to choose the significant inputs and interactions between inputs. The model is applied successfully to the hippocampal CA3-CA1 population dynamics. Such a model can serve as a computational basis for the development of hippocampal prostheses.
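
    The stated equivalence to a generalized linear model with a probit link suggests a compact way to fit one such MISO component offline. The sketch below is a reduced illustration on synthetic data: lagged input spike counts and the output spike history stand in for the feedforward Volterra and feedback kernels, and statsmodels fits the probit GLM. The bin width, lag length and toy generative process are assumptions, not the hippocampal CA3-CA1 data of the paper.

    ```python
    # Fit a probit GLM that predicts a 0/1 output-spike indicator per time bin
    # from lagged input spikes (feedforward term) and output history (feedback term).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    T, n_inputs, n_lags = 5000, 3, 5                 # bins, input neurons, history length

    x = (rng.random((T, n_inputs)) < 0.05).astype(float)    # input spike indicators
    y = np.zeros(T)
    for t in range(n_lags, T):                               # toy ground-truth process
        drive = 0.8 * x[t - n_lags:t, 0].sum() - 0.5 * y[t - 1]
        y[t] = float(rng.random() < 0.02 + 0.1 * max(drive, 0.0))

    rows = []
    for t in range(n_lags, T):
        feedforward = x[t - n_lags:t].ravel()                # lagged input spike counts
        feedback = y[t - n_lags:t]                           # output spike history
        rows.append(np.concatenate([feedforward, feedback]))
    X = sm.add_constant(np.asarray(rows))
    target = y[n_lags:]

    probit_glm = sm.GLM(target, X,
                        family=sm.families.Binomial(link=sm.families.links.Probit()))
    result = probit_glm.fit()
    print(result.params[:6])                                 # a few fitted kernel weights
    ```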

  3. Natural lecithin promotes neural network complexity and activity.

    Science.gov (United States)

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-05-27

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called "essential" fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications.

  4. Neural activity reveals perceptual grouping in working memory.

    Science.gov (United States)

    Rabbitt, Laura R; Roberts, Daniel M; McDonald, Craig G; Peterson, Matthew S

    2017-03-01

    There is extensive evidence that the contralateral delay activity (CDA), a scalp recorded event-related brain potential, provides a reliable index of the number of objects held in visual working memory. Here we present evidence that the CDA not only indexes visual object working memory, but also the number of locations held in spatial working memory. In addition, we demonstrate that the CDA can be predictably modulated by the type of encoding strategy employed. When individual locations were held in working memory, the pattern of CDA modulation mimicked previous findings for visual object working memory. Specifically, CDA amplitude increased monotonically until working memory capacity was reached. However, when participants were instructed to group individual locations to form a constellation, the CDA was prolonged and reached an asymptote at two locations. This result provides neural evidence for the formation of a unitary representation of multiple spatial locations. Published by Elsevier B.V.

  5. Nonsmooth dynamics in spiking neuron models

    Science.gov (United States)

    Coombes, S.; Thul, R.; Wedgwood, K. C. A.

    2012-11-01

    Large scale studies of spiking neural networks are a key part of modern approaches to understanding the dynamics of biological neural tissue. One approach in computational neuroscience has been to consider the detailed electrophysiological properties of neurons and build vast computational compartmental models. An alternative has been to develop minimal models of spiking neurons with a reduction in the dimensionality of both parameter and variable space that facilitates more effective simulation studies. In this latter case the single neuron model of choice is often a variant of the classic integrate-and-fire model, which is described by a nonsmooth dynamical system. In this paper we review some of the more popular spiking models of this class and describe the types of spiking pattern that they can generate (ranging from tonic to burst firing). We show that a number of techniques originally developed for the study of impact oscillators are directly relevant to their analysis, particularly those for treating grazing bifurcations. Importantly we highlight one particular single neuron model, capable of generating realistic spike trains, that is both computationally cheap and analytically tractable. This is a planar nonlinear integrate-and-fire model with a piecewise linear vector field and a state dependent reset upon spiking. We call this the PWL-IF model and analyse it at both the single neuron and network level. The techniques and terminology of nonsmooth dynamical systems are used to flesh out the bifurcation structure of the single neuron model, as well as to develop the notion of Lyapunov exponents. We also show how to construct the phase response curve for this system, emphasising that techniques in mathematical neuroscience may also translate back to the field of nonsmooth dynamical systems. The stability of periodic spiking orbits is assessed using a linear stability analysis of spiking times. At the network level we consider linear coupling between voltage
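
    A planar piecewise-linear integrate-and-fire neuron of the general kind highlighted above takes only a few lines to simulate. The piecewise nullcline, parameter values and reset rule below are illustrative choices rather than the paper's fitted PWL-IF model.

    ```python
    # Planar PWL integrate-and-fire sketch: piecewise-linear voltage nullcline,
    # a slow recovery variable, and a state-dependent reset upon spiking.
    import numpy as np

    def pwl(v):
        # Piecewise-linear caricature of a cubic-like voltage nullcline.
        return np.where(v < 0.25, -v, v - 0.5)

    def simulate(I=0.5, T=200.0, dt=0.01, v_th=1.0, v_reset=0.0, w_jump=0.1,
                 a=0.5, tau_w=5.0):
        v, w, spikes = 0.0, 0.0, []
        for k in range(int(T / dt)):
            dv = pwl(v) - w + I                 # fast voltage dynamics
            dw = (a * v - w) / tau_w            # slow recovery dynamics
            v += dt * dv
            w += dt * dw
            if v >= v_th:                       # threshold crossing: emit a spike
                spikes.append(k * dt)
                v = v_reset                     # voltage reset
                w += w_jump                     # state-dependent reset of recovery
        return np.array(spikes)

    spike_times = simulate()
    print(len(spike_times), "spikes; mean ISI:",
          round(float(np.diff(spike_times).mean()), 2) if len(spike_times) > 1 else None)
    ```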

  6. A high performing brain-machine interface driven by low-frequency local field potentials alone and together with spikes

    Science.gov (United States)

    Stavisky, Sergey D.; Kao, Jonathan C.; Nuyujukian, Paul; Ryu, Stephen I.; Shenoy, Krishna V.

    2015-06-01

    Objective. Brain-machine interfaces (BMIs) seek to enable people with movement disabilities to directly control prosthetic systems with their neural activity. Current high performance BMIs are driven by action potentials (spikes), but access to this signal often diminishes as sensors degrade over time. Decoding local field potentials (LFPs) as an alternative or complementary BMI control signal may improve performance when there is a paucity of spike signals. To date only a small handful of LFP decoding methods have been tested online; there remains a need to test different LFP decoding approaches and improve LFP-driven performance. There has also not been a reported demonstration of a hybrid BMI that decodes kinematics from both LFP and spikes. Here we first evaluate a BMI driven by the local motor potential (LMP), a low-pass filtered time-domain LFP amplitude feature. We then combine decoding of both LMP and spikes to implement a hybrid BMI. Approach. Spikes and LFP were recorded from two macaques implanted with multielectrode arrays in primary and premotor cortex while they performed a reaching task. We then evaluated closed-loop BMI control using biomimetic decoders driven by LMP, spikes, or both signals together. Main results. LMP decoding enabled quick and accurate cursor control which surpassed previously reported LFP BMI performance. Hybrid decoding of both spikes and LMP improved performance when spikes signal quality was mediocre to poor. Significance. These findings show that LMP is an effective BMI control signal which requires minimal power to extract and can substitute for or augment impoverished spikes signals. Use of this signal may lengthen the useful lifespan of BMIs and is therefore an important step towards clinically viable BMIs.
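
    The LMP feature itself is straightforward to compute offline. The sketch below low-pass filters and bins a synthetic multichannel LFP, appends binned spike counts to form a hybrid feature vector, and fits a ridge readout of cursor velocity; the 4 Hz cutoff, 50 ms bins, synthetic signals and the plain linear readout are stand-ins for the closed-loop biomimetic decoders used in the study.

    ```python
    # Hybrid LMP + spike-count features with a simple linear velocity readout.
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.linear_model import Ridge

    fs, n_ch, T, bin_ms = 1000, 8, 20_000, 50        # Hz, channels, samples, bin width
    step = fs * bin_ms // 1000                       # samples per bin
    rng = np.random.default_rng(0)
    velocity = np.cumsum(rng.normal(size=(T, 2)), axis=0) * 0.01     # fake 2D kinematics
    lfp = velocity @ rng.normal(size=(2, n_ch)) + rng.normal(size=(T, n_ch))
    spikes = (rng.random((T, n_ch)) < 0.01).astype(float)            # fake spike trains

    def binned(x, reduce=np.sum):
        x = x[: len(x) // step * step]
        return reduce(x.reshape(-1, step, x.shape[1]), axis=1)

    def lmp(lfp, cutoff=4.0):
        b, a = butter(2, cutoff / (fs / 2), btype="low")
        return binned(filtfilt(b, a, lfp, axis=0), reduce=np.mean)   # low-pass, binned LFP

    X = np.hstack([lmp(lfp), binned(spikes)])        # hybrid LMP + spike-count features
    y = binned(velocity, reduce=np.mean)             # target: mean velocity per bin
    decoder = Ridge(alpha=1.0).fit(X[:300], y[:300])
    print("held-out R^2:", decoder.score(X[300:], y[300:]))
    ```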

  7. A high performing brain-machine interface driven by low-frequency local field potentials alone and together with spikes.

    Science.gov (United States)

    Stavisky, Sergey D; Kao, Jonathan C; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V

    2015-06-01

    Brain-machine interfaces (BMIs) seek to enable people with movement disabilities to directly control prosthetic systems with their neural activity. Current high performance BMIs are driven by action potentials (spikes), but access to this signal often diminishes as sensors degrade over time. Decoding local field potentials (LFPs) as an alternative or complementary BMI control signal may improve performance when there is a paucity of spike signals. To date only a small handful of LFP decoding methods have been tested online; there remains a need to test different LFP decoding approaches and improve LFP-driven performance. There has also not been a reported demonstration of a hybrid BMI that decodes kinematics from both LFP and spikes. Here we first evaluate a BMI driven by the local motor potential (LMP), a low-pass filtered time-domain LFP amplitude feature. We then combine decoding of both LMP and spikes to implement a hybrid BMI. Spikes and LFP were recorded from two macaques implanted with multielectrode arrays in primary and premotor cortex while they performed a reaching task. We then evaluated closed-loop BMI control using biomimetic decoders driven by LMP, spikes, or both signals together. LMP decoding enabled quick and accurate cursor control which surpassed previously reported LFP BMI performance. Hybrid decoding of both spikes and LMP improved performance when spikes signal quality was mediocre to poor. These findings show that LMP is an effective BMI control signal which requires minimal power to extract and can substitute for or augment impoverished spikes signals. Use of this signal may lengthen the useful lifespan of BMIs and is therefore an important step towards clinically viable BMIs.

  8. To compare the effect of Active Neural Mobilization during Intermittent Lumbar Traction and Intermittent Lumbar Traction followed by Active Neural Mobilization in cases of Lumbar Radiculopathy

    OpenAIRE

    Jaywant Nagulkar; Kalyani Nagulkar

    2016-01-01

    To compare the effectiveness of active neural mobilization (ANM) during intermittent lumbar traction (ILT) and intermittent lumbar traction followed by active neural mobilization treatment in patients with low back pain (LBP) and radiculopathy. To study the effect of ANM during ILT and of ILT followed by ANM in patients with LBP and radiculopathy on the VAS scale, P1 angle of SLR, P2 angle of SLR and the Oswestry disability index (ODI). To compare the effect of ANM during ILT and ILT followed ...

  9. Low and high gamma oscillations in rat ventral striatum have distinct relationships to behavior, reward, and spiking activity on a learned spatial decision task

    Directory of Open Access Journals (Sweden)

    Matthijs A A Van Der Meer

    2009-06-01

    Full Text Available Local field potential (LFP) oscillations in the brain reflect organization thought to be important for perception, attention, movement, and memory. In the basal ganglia, including dorsal striatum, dysfunctional LFP states are associated with Parkinson’s disease, while in healthy subjects, dorsal striatal LFPs have been linked to decision-making processes. However, LFPs in ventral striatum have been less studied. We report that in rats running a spatial decision task, prominent gamma-50 (45-55 Hz) and gamma-80 (70-85 Hz) oscillations in ventral striatum had distinct relationships to behavior, task events, and spiking activity. Gamma-50 power increased sharply following reward delivery and before movement initiation, while in contrast, gamma-80 power ramped up gradually to reward locations. Gamma-50 power was low and contained little structure during early learning, but rapidly developed a stable pattern, while gamma-80 power was initially high before returning to a stable level within a similar timeframe. Putative fast-spiking interneurons (FSIs) showed phase, firing rate, and coherence relationships with gamma-50 and gamma-80, indicating that the observed LFP patterns are locally relevant. Furthermore, in a number of FSIs such relationships were specific to gamma-50 or gamma-80, suggesting that partially distinct FSI populations mediate the effects of gamma-50 and gamma-80.
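
    In outline, the gamma-50 and gamma-80 measures come from band-pass filtering the LFP, taking an instantaneous power envelope, and averaging around task events. In the sketch below only the two frequency bands follow the abstract; the synthetic LFP, the hypothetical reward times, the filter order and the analysis window are assumptions.

    ```python
    # Band-limited power envelopes (gamma-50 and gamma-80) and an event-triggered
    # average of power around task events, on a synthetic LFP trace.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000
    t = np.arange(0, 60, 1 / fs)                       # 60 s of fake LFP
    rng = np.random.default_rng(0)
    lfp = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 50 * t)

    def band_power(x, lo, hi):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return np.abs(hilbert(filtfilt(b, a, x))) ** 2  # instantaneous power envelope

    gamma50 = band_power(lfp, 45, 55)
    gamma80 = band_power(lfp, 70, 85)

    def event_triggered(power, event_times, win=(-1.0, 2.0)):
        idx = (np.asarray(event_times) * fs).astype(int)
        lo, hi = int(win[0] * fs), int(win[1] * fs)
        snippets = [power[i + lo:i + hi] for i in idx if i + lo >= 0 and i + hi < len(power)]
        return np.mean(snippets, axis=0)

    reward_times = np.arange(5, 55, 5.0)               # hypothetical reward deliveries
    print(event_triggered(gamma50, reward_times).mean(),
          event_triggered(gamma80, reward_times).mean())
    ```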

  10. Identification of non-linear models of neural activity in bold fmri

    DEFF Research Database (Denmark)

    Jacobsen, Daniel Jakup; Madsen, Kristoffer Hougaard; Hansen, Lars Kai

    2006-01-01

    Non-linear hemodynamic models express the BOLD signal as a nonlinear, parametric functional of the temporal sequence of local neural activity. Several models have been proposed for this neural activity. We identify one such parametric model by estimating the distribution of its parameters. These ...

  11. Evidence-Based Systematic Review: Effects of Neuromuscular Electrical Stimulation on Swallowing and Neural Activation

    Science.gov (United States)

    Clark, Heather; Lazarus, Cathy; Arvedson, Joan; Schooling, Tracy; Frymark, Tobi

    2009-01-01

    Purpose: To systematically review the literature examining the effects of neuromuscular electrical stimulation (NMES) on swallowing and neural activation. The review was conducted as part of a series examining the effects of oral motor exercises (OMEs) on speech, swallowing, and neural activation. Method: A systematic search was conducted to…

  12. Neural Network Hydrological Modelling: Linear Output Activation Functions?

    Science.gov (United States)

    Abrahart, R. J.; Dawson, C. W.

    2005-12-01

    The power to represent non-linear hydrological processes is of paramount importance in neural network hydrological modelling operations. The accepted wisdom requires non-polynomial activation functions to be incorporated in the hidden units such that a single tier of hidden units can thereafter be used to provide a 'universal approximation' to whatever particular hydrological mechanism or function is of interest to the modeller. The user can select from a set of default activation functions or, in certain software packages, is able to define their own function - the most popular options being logistic, sigmoid and hyperbolic tangent. If a unit does not transform its inputs it is said to possess a 'linear activation function' and a combination of linear activation functions will produce a linear solution; whereas the use of non-linear activation functions will produce non-linear solutions in which the principle of superposition does not hold. For hidden units, speed of learning and network complexities are important issues. For the output units, it is desirable to select an activation function that is suited to the distribution of the target values: e.g. binary targets (logistic); categorical targets (softmax); continuous-valued targets with a bounded range (logistic / tanh); positive target values with no known upper bound (exponential; but beware of overflow); continuous-valued targets with no known bounds (linear). It is also standard practice in most hydrological applications to use the default software settings and to insert a set of identical non-linear activation functions in the hidden layer and output layer processing units. Mixed combinations have nevertheless been reported in several hydrological modelling papers and the full ramifications of such activities require further investigation and assessment, i.e. non-linear activation functions in the hidden units connected to linear or clipped-linear activation functions in the output unit. There are two
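
    The combinations discussed above are easy to make concrete, as in the sketch below: a tanh hidden layer feeding either a linear output unit (continuous, unbounded targets such as streamflow) or a logistic output unit (targets bounded to the unit interval). Weights are random placeholders, not a trained hydrological model.

    ```python
    # Non-linear (tanh) hidden units combined with two different output activations.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)     # 4 inputs -> 8 hidden units
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def hidden(x):
        return np.tanh(x @ W1 + b1)                   # non-linear hidden layer

    def forward_linear_output(x):
        return hidden(x) @ W2 + b2                    # identity (linear) output unit

    def forward_logistic_output(x):
        z = hidden(x) @ W2 + b2
        return 1.0 / (1.0 + np.exp(-z))               # output bounded to (0, 1)

    x = rng.normal(size=(3, 4))                       # e.g. rainfall/level predictors
    print(forward_linear_output(x).ravel())           # unbounded predictions
    print(forward_logistic_output(x).ravel())         # predictions squashed to (0, 1)
    ```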

  13. Spiking Neuron Network Helmholtz Machine

    Directory of Open Access Journals (Sweden)

    Pavel eSountsov

    2015-04-01

    Full Text Available An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.
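
    The wake-sleep algorithm and its local delta rule are compact enough to sketch directly. The code below is the abstract, rate-based Helmholtz machine update (one hidden layer, binary units), not the spiking-neuron implementation described in the paper; layer sizes, learning rate and the toy training data are arbitrary.

    ```python
    # Minimal Helmholtz machine trained with the wake-sleep algorithm: the wake
    # phase trains the generative weights, the sleep phase trains recognition.
    import numpy as np

    rng = np.random.default_rng(0)
    n_vis, n_hid, lr = 6, 3, 0.05
    R = np.zeros((n_vis, n_hid))      # recognition weights (visible -> hidden)
    G = np.zeros((n_hid, n_vis))      # generative weights  (hidden -> visible)
    bG = np.zeros(n_hid)              # generative bias over hidden causes

    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    sample = lambda p: (rng.random(p.shape) < p).astype(float)

    data = (rng.random((500, n_vis)) < 0.5).astype(float)   # toy training patterns

    for d in data:
        # Wake phase: recognise the datum, then apply the local delta rule so the
        # generative model learns to reproduce it from the inferred hidden cause.
        h = sample(sig(d @ R))
        G += lr * np.outer(h, d - sig(h @ G))
        bG += lr * (h - sig(bG))
        # Sleep phase: dream a fantasy from the generative model, then train the
        # recognition model to recover the hidden cause of its own dream.
        h_dream = sample(sig(bG))
        v_dream = sample(sig(h_dream @ G))
        R += lr * np.outer(v_dream, h_dream - sig(v_dream @ R))

    print("learned generative weights:\n", np.round(G, 2))
    ```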

  14. Cross-scale effects of neural interactions during human neocortical seizure activity

    NARCIS (Netherlands)

    Eissa, Tahra L.; Dijkstra, Koen; Brune, Christoph; Emerson, Ronald G.; Van Putten, Michel J. A. M.; Goodman, Robert R.; McKhann Jr., Guy M.; Schevon, Catherine A.; van Drongelen, Wim; van Gils, Stephan A.

    2017-01-01

    Small-scale neuronal networks may impose widespread effects on large network dynamics. To unravel this relationship, we analyzed eight multiscale recordings of spontaneous seizures from four patients with epilepsy. During seizures, multiunit spike activity organizes into a submillimeter-sized

  15. Effects of Onion (Allium cepa L.) Extract Administration on Intestinal α-Glucosidases Activities and Spikes in Postprandial Blood Glucose Levels in SD Rats Model

    Science.gov (United States)

    Kim, Sun-Ho; Jo, Sung-Hoon; Kwon, Young-In; Hwang, Jae-Kwan

    2011-01-01

    Diets high in calories and sweetened foods with disaccharides frequently lead to exaggerated postprandial spikes in blood glucose. This state induces immediate oxidant stress and free radicals which trigger oxidative stress-linked diabetic complications. One of the therapeutic approaches for decreasing postprandial hyperglycemia is to retard absorption of glucose by the inhibition of carbohydrate hydrolyzing enzymes, α-amylase and α-glucosidases, in the digestive organs. Therefore, the inhibitory activity of Korean onion (Allium cepa L.) extract against rat intestinal α-glucosidases, such as sucrase, maltase, and porcine pancreatic α-amylase were investigated in vitro and in vivo. The content of quercetin in ethyl alcohol extract of onion skin (EOS) was 6.04 g/100 g dried weight of onion skin. The in vitro half-maximal inhibitory concentrations (IC50) of EOS and quercetin, a major phenolic in onion, on rat intestinal sucrase were 0.40 and 0.11 mg/mL, respectively. The postprandial blood glucose lowering effects of EOS and quercetin were compared to a known type 2 diabetes drug (Acarbose), a strong α-glucosidase inhibitor in the Sprague-Dawley (SD) rat model. In rats fed on sucrose, EOS significantly reduced the blood glucose spike after sucrose loading. The area under the blood glucose-time curve (AUClast) in EOS-treated SD rats (0.5 g-EOS/kg) was significantly lower than in untreated SD rats (259.6 ± 5.1 vs. 283.1 ± 19.2 h·mg/dL). The AUClast in quercetin-treated SD rats (0.5 g-quercetin/kg) was similar to that in the EOS-treated group (256.1 ± 3.2 vs. 259.6 ± 5.1 h·mg/dL). Results from this study indicate that although quercetin does have blood glucose lowering potential via α-glucosidase inhibition, there are other bioactive compounds present in onion skin. Furthermore, the effects of two weeks administration of EOS in a high carbohydrate-dietary mixture (Pico 5053) on sucrase and maltase activities in intestine were evaluated in SD rat model. Compared to

  16. Effects of Onion (Allium cepa L.) Extract Administration on Intestinal α-Glucosidases Activities and Spikes in Postprandial Blood Glucose Levels in SD Rats Model

    Directory of Open Access Journals (Sweden)

    Sun-Ho Kim

    2011-06-01

    Full Text Available Diets high in calories and sweetened foods with disaccharides frequently lead to exaggerated postprandial spikes in blood glucose. This state induces immediate oxidant stress and free radicals which trigger oxidative stress-linked diabetic complications. One of the therapeutic approaches for decreasing postprandial hyperglycemia is to retard absorption of glucose by the inhibition of carbohydrate hydrolyzing enzymes, α-amylase and α-glucosidases, in the digestive organs. Therefore, the inhibitory activity of Korean onion (Allium cepa L.) extract against rat intestinal α-glucosidases, such as sucrase, maltase, and porcine pancreatic α-amylase were investigated in vitro and in vivo. The content of quercetin in ethyl alcohol extract of onion skin (EOS) was 6.04 g/100 g dried weight of onion skin. The in vitro half-maximal inhibitory concentrations (IC50) of EOS and quercetin, a major phenolic in onion, on rat intestinal sucrase were 0.40 and 0.11 mg/mL, respectively. The postprandial blood glucose lowering effects of EOS and quercetin were compared to a known type 2 diabetes drug (Acarbose), a strong α-glucosidase inhibitor in the Sprague-Dawley (SD) rat model. In rats fed on sucrose, EOS significantly reduced the blood glucose spike after sucrose loading. The area under the blood glucose-time curve (AUClast) in EOS-treated SD rats (0.5 g-EOS/kg) was significantly lower than in untreated SD rats (259.6 ± 5.1 vs. 283.1 ± 19.2 h·mg/dL). The AUClast in quercetin-treated SD rats (0.5 g-quercetin/kg) was similar to that in the EOS-treated group (256.1 ± 3.2 vs. 259.6 ± 5.1 h·mg/dL). Results from this study indicate that although quercetin does have blood glucose lowering potential via α-glucosidase inhibition, there are other bioactive compounds present in onion skin. Furthermore, the effects of two weeks administration of EOS in a high carbohydrate-dietary mixture (Pico 5053) on sucrase and maltase activities in intestine were evaluated in SD rat model

  17. Effects of onion (Allium cepa L.) extract administration on intestinal α-glucosidases activities and spikes in postprandial blood glucose levels in SD rats model.

    Science.gov (United States)

    Kim, Sun-Ho; Jo, Sung-Hoon; Kwon, Young-In; Hwang, Jae-Kwan

    2011-01-01

    Diets high in calories and sweetened foods with disaccharides frequently lead to exaggerated postprandial spikes in blood glucose. This state induces immediate oxidant stress and free radicals which trigger oxidative stress-linked diabetic complications. One of the therapeutic approaches for decreasing postprandial hyperglycemia is to retard absorption of glucose by the inhibition of carbohydrate hydrolyzing enzymes, α-amylase and α-glucosidases, in the digestive organs. Therefore, the inhibitory activity of Korean onion (Allium cepa L.) extract against rat intestinal α-glucosidases, such as sucrase, maltase, and porcine pancreatic α-amylase were investigated in vitro and in vivo. The content of quercetin in ethyl alcohol extract of onion skin (EOS) was 6.04 g/100 g dried weight of onion skin. The in vitro half-maximal inhibitory concentrations (IC(50)) of EOS and quercetin, a major phenolic in onion, on rat intestinal sucrase were 0.40 and 0.11 mg/mL, respectively. The postprandial blood glucose lowering effects of EOS and quercetin were compared to a known type 2 diabetes drug (Acarbose), a strong α-glucosidase inhibitor in the Sprague-Dawley (SD) rat model. In rats fed on sucrose, EOS significantly reduced the blood glucose spike after sucrose loading. The area under the blood glucose-time curve (AUC(last)) in EOS-treated SD rats (0.5 g-EOS/kg) was significantly lower than in untreated SD rats (259.6 ± 5.1 vs. 283.1 ± 19.2 h·mg/dL). The AUC(last) in quercetin-treated SD rats (0.5 g-quercetin/kg) was similar to that in the EOS-treated group (256.1 ± 3.2 vs. 259.6 ± 5.1 h·mg/dL). Results from this study indicate that although quercetin does have blood glucose lowering potential via α-glucosidase inhibition, there are other bioactive compounds present in onion skin. Furthermore, the effects of two weeks administration of EOS in a high carbohydrate-dietary mixture (Pico 5053) on sucrase and maltase activities in intestine were evaluated in SD rat model. Compared
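
    The AUClast values quoted in these records are areas under the blood glucose-time curve, which amount to a trapezoidal integration of the sampled readings. The sampling times and glucose values below are hypothetical, not the study's data.

    ```python
    # Trapezoidal area under a blood glucose-time curve (AUClast), in h*mg/dL.
    import numpy as np

    t = np.array([0.0, 0.5, 1.0, 2.0])          # hours after sucrose loading (hypothetical)
    glucose = np.array([110, 160, 140, 120])    # blood glucose in mg/dL (hypothetical)
    auc_last = np.trapz(glucose, t)
    print(auc_last)                             # 272.5 h*mg/dL for these numbers
    ```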

  18. Activity-dependent modulation of neural circuit synaptic connectivity

    Directory of Open Access Journals (Sweden)

    Charles R Tessier

    2009-07-01

    Full Text Available In many nervous systems, the establishment of neural circuits is known to proceed via a two-stage process: (1) early, activity-independent wiring to produce a rough map characterized by excessive synaptic connections, and (2) subsequent, use-dependent pruning to eliminate inappropriate connections and reinforce maintained synapses. In invertebrates, however, evidence of the activity-dependent phase of synaptic refinement has been elusive, and the dogma has long been that invertebrate circuits are “hard-wired” in a purely activity-independent manner. This conclusion has been challenged recently through the use of new transgenic tools employed in the powerful Drosophila system, which have allowed unprecedented temporal control and single neuron imaging resolution. These recent studies reveal that activity-dependent mechanisms are indeed required to refine circuit maps in Drosophila during precise, restricted windows of late-phase development. Such mechanisms of circuit refinement may be key to understanding a number of human neurological diseases, including developmental disorders such as Fragile X syndrome (FXS) and autism, which are hypothesized to result from defects in synaptic connectivity and activity-dependent circuit function. This review focuses on our current understanding of activity-dependent synaptic connectivity in Drosophila, primarily through analyzing the role of the fragile X mental retardation protein (FMRP) in the Drosophila FXS disease model. The particular emphasis of this review is on the expanding array of new genetically-encoded tools that are allowing cellular events and molecular players to be dissected with ever greater precision and detail.

  19. EEG-fMRI Bayesian framework for neural activity estimation: a simulation study

    Science.gov (United States)

    Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Del Gratta, Cosimo

    2016-12-01

    Objective. Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, the joint data analysis can afford a better understanding of the underlying neural activity estimation. In this simulation study we want to show the benefit of the joint EEG-fMRI neural activity estimation in a Bayesian framework. Approach. We built a dynamic Bayesian framework in order to perform joint EEG-fMRI neural activity time course estimation. The neural activity is originated by a given brain area and detected by means of both measurement techniques. We have chosen a resting state neural activity situation to address the worst case in terms of the signal-to-noise ratio. To infer information by EEG and fMRI concurrently we used a tool belonging to the sequential Monte Carlo (SMC) methods: the particle filter (PF). Main results. First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. Significance. The proposed simulation shows the improvements in neural activity reconstruction with EEG-fMRI simultaneous data. The application of such an approach to real data allows a better comprehension of the neural dynamics.
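
    A minimal bootstrap particle filter conveys the flavor of the SMC approach. The sketch below tracks a latent AR(1) "activity" from two synthetic observation streams, a fast noisy one standing in for EEG and a convolved, sluggish one standing in for the BOLD response; the forward models, noise levels and particle count are placeholders far simpler than the generative models used in the paper.

    ```python
    # Toy bootstrap particle filter: propagate particles with the latent dynamics,
    # weight them by the likelihood of both observation streams, then resample.
    import numpy as np

    rng = np.random.default_rng(0)
    T, n_particles = 300, 500
    true = np.zeros(T)
    for t in range(1, T):                                    # latent AR(1) activity
        true[t] = 0.95 * true[t - 1] + rng.normal(scale=0.3)
    eeg = true + rng.normal(scale=0.5, size=T)               # fast, noisy observation
    hrf = np.exp(-np.arange(10) / 3.0)
    hrf /= hrf.sum()
    bold = np.convolve(true, hrf)[:T] + rng.normal(scale=0.2, size=T)   # sluggish observation

    particles = rng.normal(size=n_particles)
    estimate = np.zeros(T)
    for t in range(T):
        particles = 0.95 * particles + rng.normal(scale=0.3, size=n_particles)  # propagate
        w = np.exp(-0.5 * ((eeg[t] - particles) / 0.5) ** 2)        # EEG likelihood
        w *= np.exp(-0.5 * ((bold[t] - particles) / 0.6) ** 2)      # crude BOLD likelihood
        w = w + 1e-12
        w /= w.sum()
        estimate[t] = np.sum(w * particles)                          # posterior mean
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample

    print("RMSE of the joint EEG+fMRI estimate:", np.sqrt(np.mean((estimate - true) ** 2)))
    ```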

  20. Cellular and circuit mechanisms maintain low spike co-variability and enhance population coding in somatosensory cortex

    Directory of Open Access Journals (Sweden)

    Cheng eLy

    2012-03-01

    Full Text Available The responses of cortical neurons are highly variable across repeated presentations of a stimulus. Understanding this variability is critical for theories of both sensory and motor processing, since response variance affects the accuracy of neural codes. Despite this influence, the cellular and circuit mechanisms that shape the trial-to-trial variability of population responses remain poorly understood. We used a combination of experimental and computational techniques to uncover the mechanisms underlying response variability of populations of pyramidal (E) cells in layer 2/3 of rat whisker barrel cortex. Spike trains recorded from pairs of E-cells during either spontaneous activity or responses to whisker deflection show similarly low levels of spiking co-variability, despite large differences in network activation between the two states. We developed network models that show how spike threshold nonlinearities dilute E-cell spiking co-variability during spontaneous activity and low velocity whisker deflections. In contrast, during high velocity whisker deflections, cancelation mechanisms mediated by feedforward inhibition maintain low E-cell pairwise co-variability. Thus, the combination of these two mechanisms ensures low E-cell population variability over a wide range of whisker deflection velocities. Finally, we show how this active decorrelation of population variability leads to a drastic increase in the population information about whisker velocity. The canonical cellular and circuit components of our study suggest that low network variability over a broad range of neural states may generalize across the nervous system.

  1. Common features of neural activity during singing and sleep periods in a basal ganglia nucleus critical for vocal learning in a juvenile songbird.

    Directory of Open Access Journals (Sweden)

    Shin Yanagihara

    Full Text Available Reactivations of waking experiences during sleep have been considered fundamental neural processes for memory consolidation. In songbirds, evidence suggests the importance of sleep-related neuronal activity in song system motor pathway nuclei for both juvenile vocal learning and maintenance of adult song. Like those in singing motor nuclei, neurons in the basal ganglia nucleus Area X, part of the basal ganglia-thalamocortical circuit essential for vocal plasticity, exhibit singing-related activity. It is unclear, however, whether Area X neurons show any distinctive spiking activity during sleep similar to that during singing. Here we demonstrate that, during sleep, Area X pallidal neurons exhibit phasic spiking activity, which shares some firing properties with activity during singing. Shorter interspike intervals that almost exclusively occurred during singing in awake periods were also observed during sleep. The level of firing variability was consistently higher during singing and sleep than during awake non-singing states. Moreover, deceleration of firing rate, which is considered to be an important firing property for transmitting signals from Area X to the thalamic nucleus DLM, was observed mainly during sleep as well as during singing. These results suggest that songbird basal ganglia circuitry may be involved in the off-line processing potentially critical for vocal learning during the sensorimotor learning phase.

  2. Coherency and connectivity in oscillating neural networks: linear partialization analysis

    NARCIS (Netherlands)

    Kalitzin, S.; van Dijk, B. W.; Spekreijse, H.; van Leeuwen, W. A.

    1997-01-01

    This paper studies the relation between the functional synaptic connections between two artificial neural networks and the correlation of their spiking activities. The model neurons had realistic non-oscillatory dynamic properties and the networks showed oscillatory behavior as a result of their

  3. Population spikes in cortical networks during different functional states.

    Directory of Open Access Journals (Sweden)

    Shirley eMark

    2012-07-01

    Full Text Available Brain computational challenges vary between behavioral states. Engaged animals react according to incoming sensory information, while in relaxed and sleeping states consolidation of the learned information is believed to take place. Different states are characterized by different forms of cortical activity. We study a possible neuronal mechanism for generating these diverse dynamics and suggest their possible functional significance. Previous studies demonstrated that brief synchronized increases in neural firing (Population Spikes) can be generated in homogeneous recurrent neural networks with short-term synaptic depression. Here we consider more realistic networks with clustered architecture. We show that the level of synchronization in neural activity can be controlled smoothly by network parameters. The network shifts from asynchronous activity to a regime in which clusters synchronize separately; the synchronization between the clusters then increases gradually to a fully synchronized state. We examine the effects of different synchrony levels on the transmission of information by the network. We find that the regime of intermediate synchronization is preferential for the flow of information between sparsely connected areas. Based on these results, we suggest that the regime of intermediate synchronization corresponds to the engaged behavioral state of the animal, while global synchronization is exhibited during relaxed and sleeping states.
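
    Short-term synaptic depression, the ingredient behind population spikes in these models, is usually written in Tsodyks-Markram form: each presynaptic spike consumes a fraction U of an available resource x, which then recovers with time constant tau_rec. The sketch below tracks a single depressing synapse driven by a burst; U, tau_rec and the spike times are illustrative values, not parameters from the paper.

    ```python
    # Tsodyks-Markram style short-term depression for one synapse: spikes in a
    # fast burst transmit with decreasing efficacy, then the resource recovers.
    import numpy as np

    def depressing_synapse(spike_times, U=0.5, tau_rec=0.8, T=5.0, dt=0.001):
        spikes = set(np.round(np.asarray(spike_times) / dt).astype(int))
        x, efficacy = 1.0, []                      # x: available synaptic resource
        for k in range(int(T / dt)):
            x += dt * (1.0 - x) / tau_rec          # recovery toward the full resource
            if k in spikes:
                efficacy.append((k * dt, U * x))   # transmitted amplitude of this spike
                x -= U * x                         # resource consumed by the spike
        return efficacy

    burst = list(np.arange(0.1, 0.5, 0.02)) + [2.0]   # a 50 Hz burst, then one late spike
    eff = depressing_synapse(burst)
    for t, amplitude in eff[:5] + [eff[-1]]:
        print(f"spike at {t:.2f} s -> efficacy {amplitude:.3f}")
    ```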

  4. SPR imaging combined with cyclic voltammetry for the detection of neural activity

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-03-01

    Full Text Available Surface plasmon resonance (SPR) detects changes in refractive index at a metal-dielectric interface. In this study, SPR imaging (SPRi) combined with cyclic voltammetry (CV) was applied to detect neural activity in isolated bullfrog sciatic nerves. The neural activities induced by chemical and electrical stimulation led to an SPR response, and the activities were recorded in real time. The activities of different parts of the sciatic nerve were recorded and compared. The results demonstrated that SPR imaging combined with CV is a powerful tool for the investigation of neural activity.

  5. Motor control by precisely timed spike patterns

    DEFF Research Database (Denmark)

    Srivastava, Kyle H; Holmes, Caroline M; Vellema, Michiel

    2017-01-01

    A fundamental problem in neuroscience is understanding how sequences of action potentials ("spikes") encode information about sensory signals and motor outputs. Although traditional theories assume that this information is conveyed by the total number of spikes fired within a specified time window, it has remained unclear whether the information in spike timing actually plays a role in brain function. By examining the activity of individual motor units (the muscle fibers innervated by a single motor neuron) and manipulating patterns of activation of these neurons, we provide both correlative and causal evidence that the nervous system uses millisecond-scale variations in the timing of spikes within multispike patterns to control a vertebrate behavior, namely respiration in the Bengalese finch, a songbird. These findings suggest that a fundamental assumption of current theories of motor coding requires revision.

  6. Different residues in the SARS-CoV spike protein determine cleavage and activation by the host cell protease TMPRSS2.

    Science.gov (United States)

    Reinke, Lennart Michel; Spiegel, Martin; Plegge, Teresa; Hartleib, Anika; Nehlmeier, Inga; Gierer, Stefanie; Hoffmann, Markus; Hofmann-Winkler, Heike; Winkler, Michael; Pöhlmann, Stefan

    2017-01-01

    The spike (S) protein of severe acute respiratory syndrome coronavirus (SARS-CoV) mediates viral entry into target cells. Cleavage and activation of SARS S by a host cell protease is essential for infectious viral entry and the responsible enzymes are potential targets for antiviral intervention. The type II transmembrane serine protease TMPRSS2 cleaves and activates SARS S in cell culture and potentially also in the infected host. Here, we investigated which determinants in SARS S control cleavage and activation by TMPRSS2. We found that SARS S residue R667, a previously identified trypsin cleavage site, is also required for S protein cleavage by TMPRSS2. The cleavage fragments produced by trypsin and TMPRSS2 differed in their decoration with N-glycans, suggesting that these proteases cleave different SARS S glycoforms. Although R667 was required for SARS S cleavage by TMPRSS2, this residue was dispensable for TMPRSS2-mediated S protein activation. Conversely, residue R797, previously reported to be required for SARS S activation by trypsin, was dispensable for S protein cleavage but required for S protein activation by TMPRSS2. Collectively, these results show that different residues in SARS S control cleavage and activation by TMPRSS2, suggesting that these processes are more complex than initially appreciated.

  7. Different host cell proteases activate the SARS-coronavirus spike-protein for cell-cell and virus-cell fusion

    Science.gov (United States)

    Simmons, Graham; Bertram, Stephanie; Glowacka, Ilona; Steffen, Imke; Chaipan, Chawaree; Agudelo, Juliet; Lu, Kai; Rennekamp, Andrew J.; Hofmann, Heike; Bates, Paul; Pöhlmann, Stefan

    2011-01-01

    Severe acute respiratory syndrome coronavirus (SARS-CoV) poses a considerable threat to human health. Activation of the viral spike (S)-protein by host cell proteases is essential for viral infectivity. However, the cleavage sites in SARS-S and the protease(s) activating SARS-S are incompletely defined. We found that R667 was dispensable for SARS-S-driven virus-cell fusion and for SARS-S-activation by trypsin and cathepsin L in a virus-virus fusion assay. Mutation T760R, which optimizes the minimal furin consensus motif 758-RXXR-762, and furin overexpression augmented SARS-S-activity, but did not result in detectable SARS-S cleavage. Finally, SARS-S-driven cell-cell fusion was independent of cathepsin L, a protease essential for virus-cell fusion. Instead, a so far unknown leupeptin-sensitive host cell protease activated cellular SARS-S for fusion with target cells expressing high levels of ACE2. Thus, different host cell proteases activate SARS-S for virus-cell and cell-cell fusion and SARS-S cleavage at R667 and 758-RXXR-762 can be dispensable for SARS-S activation. PMID:21435673

  8. Different residues in the SARS-CoV spike protein determine cleavage and activation by the host cell protease TMPRSS2

    Science.gov (United States)

    Reinke, Lennart Michel; Hartleib, Anika; Nehlmeier, Inga; Gierer, Stefanie; Hoffmann, Markus; Hofmann-Winkler, Heike; Winkler, Michael

    2017-01-01

    The spike (S) protein of severe acute respiratory syndrome coronavirus (SARS-CoV) mediates viral entry into target cells. Cleavage and activation of SARS S by a host cell protease is essential for infectious viral entry and the responsible enzymes are potential targets for antiviral intervention. The type II transmembrane serine protease TMPRSS2 cleaves and activates SARS S in cell culture and potentially also in the infected host. Here, we investigated which determinants in SARS S control cleavage and activation by TMPRSS2. We found that SARS S residue R667, a previously identified trypsin cleavage site, is also required for S protein cleavage by TMPRSS2. The cleavage fragments produced by trypsin and TMPRSS2 differed in their decoration with N-glycans, suggesting that these proteases cleave different SARS S glycoforms. Although R667 was required for SARS S cleavage by TMPRSS2, this residue was dispensable for TMPRSS2-mediated S protein activation. Conversely, residue R797, previously reported to be required for SARS S activation by trypsin, was dispensable for S protein cleavage but required for S protein activation by TMPRSS2. Collectively, these results show that different residues in SARS S control cleavage and activation by TMPRSS2, suggesting that these processes are more complex than initially appreciated. PMID:28636671

  9. Cortical Neural Activity Predicts Sensory Acuity Under Optogenetic Manipulation.

    Science.gov (United States)

    Briguglio, John J; Aizenberg, Mark; Balasubramanian, Vijay; Geffen, Maria N

    2018-02-21

    Excitatory and inhibitory neurons in the mammalian sensory cortex form interconnected circuits that control cortical stimulus selectivity and sensory acuity. Theoretical studies have predicted that suppression of inhibition in such excitatory-inhibitory networks can lead to either an increase or, paradoxically, a decrease in excitatory neuronal firing, with consequent effects on stimulus selectivity. We tested whether modulation of inhibition or excitation in the auditory cortex of male mice could evoke such a variety of effects in tone-evoked responses and in behavioral frequency discrimination acuity. We found that, indeed, the effects of optogenetic manipulation on stimulus selectivity and behavior varied in both magnitude and sign across subjects, possibly reflecting differences in circuitry or expression of optogenetic factors. Changes in neural population responses consistently predicted behavioral changes for individuals separately, including improvement and impairment in acuity. This correlation between cortical and behavioral change demonstrates that, despite the complex and varied effects that these manipulations can have on neuronal dynamics, the resulting changes in cortical activity account for accompanying changes in behavioral acuity. SIGNIFICANCE STATEMENT Excitatory and inhibitory interactions determine stimulus specificity and tuning in sensory cortex, thereby controlling perceptual discrimination acuity. Modeling has predicted that suppressing the activity of inhibitory neurons can lead to increased or, paradoxically, decreased excitatory activity depending on the architecture of the network. Here, we capitalized on differences between subjects to test whether suppressing/activating inhibition and excitation can in fact exhibit such paradoxical effects for both stimulus sensitivity and behavioral discriminability. Indeed, the same optogenetic manipulation in the auditory cortex of different mice could improve or impair frequency discrimination

  10. An application of LSC method for the measurement of gross alpha and beta activities in spiked water and drinking water samples

    Directory of Open Access Journals (Sweden)

    Çakal Gaye Özgür

    2015-09-01

    Full Text Available In this study, after the pulse shape calibration of a liquid scintillation counting (LSC) spectrometer (Quantulus 1220), the efficiency was determined depending on sample quenching parameters. Then, gross alpha and beta activities in two spiked water samples obtained from the International Atomic Energy Agency (IAEA) were used for the validation of the ASTM D7283-06 method, which is a standard test method for alpha and beta activity in water by LSC. Later, the drinking water samples (35 tap water and 9 bottled water) obtained from different districts of Ankara, Turkey, were measured. The maximum gross alpha activities are measured to be 0.08 Bq/L for tap waters and 0.13 Bq/L for bottled waters, whereas the maximum gross beta activities are found to be 0.18 Bq/L for tap waters and 0.16 Bq/L for bottled waters. These results indicate that these drinking water samples are below the required limits, which are 0.1 Bq/L for alpha emitting radionuclides and 1 Bq/L for beta emitting radionuclides. As a result, gross alpha and beta activities in drinking water of Ankara were determined accurately by this validated LSC method. It is also worth noting that LSC is a rapid and accurate method for the determination of gross alpha and beta activities without requiring a tedious sample preparation.

  11. Stochastic optimal control of single neuron spike trains

    DEFF Research Database (Denmark)

    Iolov, Alexandre; Ditlevsen, Susanne; Longtin, André

    2014-01-01

    Objective. External control of spike times in single neurons can reveal important information about a neuron's sub-threshold dynamics that lead to spiking, and has the potential to improve brain–machine interfaces and neural prostheses. The goal of this paper is the design of optimal electrical...... stimulation of a neuron to achieve a target spike train under the physiological constraint to not damage tissue. Approach. We pose a stochastic optimal control problem to precisely specify the spike times in a leaky integrate-and-fire (LIF) model of a neuron with noise assumed to be of intrinsic or synaptic...... of control degrades with increasing intensity of the noise. Simulations show that our algorithms produce the desired results for the LIF model, but also for the case where the neuron dynamics are given by more complex models than the LIF model. This is illustrated explicitly using the Morris–Lecar spiking...
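
    As a point of reference for the control problem described above, the sketch below simulates only the underlying plant: a leaky integrate-and-fire neuron driven by an input current plus Gaussian white noise, integrated with an Euler-Maruyama step. The constant control_current is a placeholder, not the paper's optimal stimulation, and all parameter values are assumed.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, T = 1e-4, 1.0                  # time step and duration (s)
    tau, v_th, v_reset = 0.02, 1.0, 0.0
    sigma = 0.5                        # noise intensity (assumed)
    control_current = 60.0             # placeholder for the optimised stimulus

    v, spikes = 0.0, []
    for step in range(int(T / dt)):
        dv = (-v / tau + control_current) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        v += dv
        if v >= v_th:                  # threshold crossing: spike and reset
            spikes.append(step * dt)
            v = v_reset

    print(len(spikes), "spikes; first few times:", [round(t_s, 4) for t_s in spikes[:5]])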

  12. On the relation between encoding and decoding of neuronal spikes.

    Science.gov (United States)

    Koyama, Shinsuke

    2012-06-01

    Neural coding is a field of study that concerns how sensory information is represented in the brain by networks of neurons. The link between external stimulus and neural response can be studied from two parallel points of view. The first, neural encoding, refers to the mapping from stimulus to response. It focuses primarily on understanding how neurons respond to a wide variety of stimuli and constructing models that accurately describe the stimulus-response relationship. Neural decoding refers to the reverse mapping, from response to stimulus, where the challenge is to reconstruct a stimulus from the spikes it evokes. Since neuronal response is stochastic, a one-to-one mapping of stimuli into neural responses does not exist, causing a mismatch between the two viewpoints of neural coding. Here we use these two perspectives to investigate the question of what rate coding is, in the simple setting of a single stationary stimulus parameter and a single stationary spike train represented by a renewal process. We show that when rate codes are defined in terms of encoding, that is, the stimulus parameter is mapped onto the mean firing rate, the rate decoder given by spike counts or the sample mean does not always efficiently decode the rate codes, but it can improve efficiency in reading certain rate codes when correlations within a spike train are taken into account.
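
    The rate-decoding setting above can be made concrete with a small simulation: the sketch below draws a stationary gamma renewal spike train with a known mean rate and compares the plain spike-count decoder with an ISI-based estimate. It illustrates the setting only, not the paper's efficiency analysis; the rate, gamma order, and window length are assumed.

    import numpy as np

    rng = np.random.default_rng(1)
    rate, shape, T = 20.0, 4.0, 10.0          # target rate (Hz), gamma order, window (s)
    scale = 1.0 / (rate * shape)              # mean ISI = shape * scale = 1 / rate

    isis, t = [], 0.0
    while t < T:                              # draw ISIs until the window is filled
        isi = rng.gamma(shape, scale)
        t += isi
        isis.append(isi)
    spike_times = np.cumsum(isis)
    spike_times = spike_times[spike_times < T]

    count_estimate = spike_times.size / T                  # spike-count decoder
    isi_estimate = 1.0 / np.mean(np.diff(spike_times))     # ISI-based decoder
    print(f"true rate {rate} Hz, count decoder {count_estimate:.2f} Hz, "
          f"ISI decoder {isi_estimate:.2f} Hz")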

  13. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization Cortico-Striatal Activation

    Science.gov (United States)

    Sameiro-Barbosa, Catia M.; Geiser, Eveline

    2016-01-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system. PMID:27559306

  14. Natural Firing Patterns Imply Low Sensitivity of Synaptic Plasticity to Spike Timing Compared with Firing Rate.

    Science.gov (United States)

    Graupner, Michael; Wallisch, Pascal; Ostojic, Srdjan

    2016-11-02

    Synaptic plasticity is sensitive to the rate and the timing of presynaptic and postsynaptic action potentials. In experimental protocols inducing plasticity, the imposed spike trains are typically regular and the relative timing between every presynaptic and postsynaptic spike is fixed. This is at odds with firing patterns observed in the cortex of intact animals, where cells fire irregularly and the timing between presynaptic and postsynaptic spikes varies. To investigate synaptic changes elicited by in vivo-like firing, we used numerical simulations and mathematical analysis of synaptic plasticity models. We found that the influence of spike timing on plasticity is weaker than expected from regular stimulation protocols. Moreover, when neurons fire irregularly, synaptic changes induced by precise spike timing can be equivalently induced by a modest firing rate variation. Our findings bridge the gap between existing results on synaptic plasticity and plasticity occurring in vivo, and challenge the dominant role of spike timing in plasticity. Synaptic plasticity, the change in efficacy of connections between neurons, is thought to underlie learning and memory. The dominant paradigm posits that the precise timing of neural action potentials (APs) is central for plasticity induction. This concept is based on experiments using highly regular and stereotyped patterns of APs, in stark contrast with natural neuronal activity. Using synaptic plasticity models, we investigated how irregular, in vivo-like activity shapes synaptic plasticity. We found that synaptic changes induced by precise timing of APs are much weaker than suggested by regular stimulation protocols, and can be equivalently induced by modest variations of the AP rate alone. Our results call into question the dominant role of precise AP timing for plasticity in natural conditions. Copyright © 2016 Graupner et al.
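
    For intuition about the regular-versus-irregular comparison described above, the toy sketch below sums a standard pair-based STDP rule over all spike pairs, once for a regular pairing protocol and once for independent Poisson trains with the same mean rate. The study itself analyses more detailed (calcium-based) plasticity models; the rule and the parameter values here are assumed for illustration only.

    import numpy as np

    A_plus, A_minus, tau_stdp = 0.01, 0.012, 0.020   # assumed STDP parameters (s)

    def stdp_weight_change(pre, post):
        """Sum pair-based STDP contributions over all pre/post spike pairs."""
        dw = 0.0
        for t_post in post:
            for t_pre in pre:
                lag = t_post - t_pre
                if lag > 0:
                    dw += A_plus * np.exp(-lag / tau_stdp)    # pre before post: LTP
                elif lag < 0:
                    dw -= A_minus * np.exp(lag / tau_stdp)    # post before pre: LTD
        return dw

    rng = np.random.default_rng(2)
    rate, T = 10.0, 10.0
    regular = np.arange(0.0, T, 1.0 / rate)

    # Regular pairing protocol: every post spike lags its pre spike by 5 ms.
    print("regular pairing:", stdp_weight_change(regular, regular + 0.005))
    # Irregular, in vivo-like firing: independent Poisson pre and post trains.
    pre = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
    post = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
    print("poisson firing :", stdp_weight_change(pre, post))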

  15. Development of modularity in the neural activity of children's brains.

    Science.gov (United States)

    Chen, Man; Deem, Michael W

    2015-01-26

    We study how modularity of the human brain changes as children develop into adults. Theory suggests that modularity can enhance the response function of a networked system subject to changing external stimuli. Thus, greater cognitive performance might be achieved for more modular neural activity, and modularity might likely increase as children develop. The value of modularity calculated from functional magnetic resonance imaging (fMRI) data is observed to increase during childhood development and peak in young adulthood. Head motion is deconvolved from the fMRI data, and it is shown that the dependence of modularity on age is independent of the magnitude of head motion. A model is presented to illustrate how modularity can provide greater cognitive performance at short times, i.e. task switching. A fitness function is extracted from the model. Quasispecies theory is used to predict how the average modularity evolves with age, illustrating the increase of modularity during development from children to adults that arises from selection for rapid cognitive function in young adults. Experiments exploring the effect of modularity on cognitive performance are suggested. Modularity may be a potential biomarker for injury, rehabilitation, or disease.
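
    As a minimal illustration of the modularity measure discussed above, the sketch below builds a graph from a thresholded correlation matrix of synthetic signals and computes Newman modularity with networkx. The two-block synthetic structure and the correlation threshold are assumptions; the study's actual fMRI preprocessing and head-motion correction are not reproduced.

    import numpy as np
    import networkx as nx
    from networkx.algorithms import community

    rng = np.random.default_rng(3)
    n_regions = 30
    signals = rng.standard_normal((200, n_regions))
    signals[:, :15] += rng.standard_normal((200, 1))      # shared signal, block 1
    signals[:, 15:] += rng.standard_normal((200, 1))      # shared signal, block 2
    corr = np.corrcoef(signals, rowvar=False)

    G = nx.Graph()
    G.add_nodes_from(range(n_regions))
    for i in range(n_regions):
        for j in range(i + 1, n_regions):
            if corr[i, j] > 0.3:                          # assumed edge threshold
                G.add_edge(i, j)

    parts = community.greedy_modularity_communities(G)
    Q = community.modularity(G, parts)
    print(f"{len(parts)} modules, modularity Q = {Q:.3f}")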

  16. Neural activity associated with metaphor comprehension: spatial analysis.

    Science.gov (United States)

    Sotillo, María; Carretié, Luis; Hinojosa, José A; Tapia, Manuel; Mercado, Francisco; López-Martín, Sara; Albert, Jacobo

    2005-01-03

    Though neuropsychological data indicate that the right hemisphere (RH) plays a major role in metaphor processing, other studies suggest that, at least during some phases of this processing, a RH advantage may not exist. The present study explores, through a temporally agile neural signal--the event-related potentials (ERPs)--, and through source-localization algorithms applied to ERP recordings, whether the crucial phase of metaphor comprehension presents or not a RH advantage. Participants (n=24) were submitted to a S1-S2 experimental paradigm. S1 consisted of visually presented metaphoric sentences (e.g., "Green lung of the city"), followed by S2, which consisted of words that could (i.e., "Park") or could not (i.e., "Semaphore") be defined by S1. ERPs elicited by S2 were analyzed using temporal principal component analysis (tPCA) and source-localization algorithms. These analyses revealed that metaphorically related S2 words showed significantly higher N400 amplitudes than non-related S2 words. Source-localization algorithms showed differential activity between the two S2 conditions in the right middle/superior temporal areas. These results support the existence of an important RH contribution to (at least) one phase of metaphor processing and, furthermore, implicate the temporal cortex with respect to that contribution.

  17. Channelrhodopsins: visual regeneration and neural activation by a light switch

    Science.gov (United States)

    Natasha, G; Tan, Aaron; Farhatnia, Yasmin; Rajadas, Jayakumar; Hamblin, Michael R.; Khaw, Peng T.; Seifalian, Alexander M.

    2013-01-01

    The advent of optogenetics provides a new direction for the field of neuroscience and biotechnology, serving both as a refined investigative tool and as a potential cure for many medical conditions via genetic manipulation. Although still in its infancy, recent advances in optogenetics have made it possible to remotely manipulate in vivo cellular functions using light. Coined Nature Methods’ ‘Method of the Year’ in 2010, the optogenetic toolbox has the potential to control cell, tissue and even animal behaviour. This optogenetic toolbox consists of light-sensitive proteins that are able to modulate membrane potential in response to light. Channelrhodopsins (ChR) are light-gated microbial ion channels, which were first described in green algae. ChR2 (a subset of ChR) is a seven-transmembrane α-helix protein, which evokes membrane depolarization and mediates an action potential upon photostimulation with blue (470 nm) light. In contrast to other seven-transmembrane proteins that require second messengers to open ion channels, ChR2 forms an ion channel itself, allowing ultrafast depolarization (within 50 milliseconds of illumination). It has been shown that integration of ChR2 into various tissues of mice can activate neural circuits, control heart muscle contractions, and even restore breathing after spinal cord injury. More compellingly, a plethora of evidence has indicated that artificial expression of ChR2 in retinal ganglion cells can reinstate visual perception in mice with retinal degeneration. PMID:23664865

  18. Macroscopic Description for Networks of Spiking Neurons

    Science.gov (United States)

    Montbrió, Ernest; Pazó, Diego; Roxin, Alex

    2015-04-01

    A major goal of neuroscience, statistical physics, and nonlinear dynamics is to understand how brain function arises from the collective dynamics of networks of spiking neurons. This challenge has been chiefly addressed through large-scale numerical simulations. Alternatively, researchers have formulated mean-field theories to gain insight into macroscopic states of large neuronal networks in terms of the collective firing activity of the neurons, or the firing rate. However, these theories have not succeeded in establishing an exact correspondence between the firing rate of the network and the underlying microscopic state of the spiking neurons. This has largely constrained the range of applicability of such macroscopic descriptions, particularly when trying to describe neuronal synchronization. Here, we provide the derivation of a set of exact macroscopic equations for a network of spiking neurons. Our results reveal that the spike generation mechanism of individual neurons introduces an effective coupling between two biophysically relevant macroscopic quantities, the firing rate and the mean membrane potential, which together govern the evolution of the neuronal network. The resulting equations exactly describe all possible macroscopic dynamical states of the network, including states of synchronous spiking activity. Finally, we show that the firing-rate description is related, via a conformal map, to a low-dimensional description in terms of the Kuramoto order parameter, called Ott-Antonsen theory. We anticipate that our results will be an important tool in investigating how large networks of spiking neurons self-organize in time to process and encode information in the brain.
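
    The macroscopic equations derived in this work couple the population firing rate r and the mean membrane potential v. The sketch below integrates what are commonly referred to as the resulting firing-rate equations for quadratic integrate-and-fire neurons with Lorentzian-distributed inputs, using a forward Euler step; the parameter values and the transient input step are illustrative choices, not taken from any particular figure of the paper.

    import numpy as np

    tau, delta, eta_bar, J = 1.0, 1.0, -5.0, 15.0   # time constant, input spread,
                                                     # mean drive, recurrent coupling
    dt, T = 1e-3, 40.0
    r, v = 0.1, -2.0                                 # firing rate, mean potential

    rates = []
    for step in range(int(T / dt)):
        I_ext = 3.0 if 10.0 <= step * dt < 30.0 else 0.0   # transient step input
        dr = (delta / (np.pi * tau) + 2.0 * r * v) / tau
        dv = (v**2 + eta_bar + J * tau * r + I_ext - (np.pi * tau * r)**2) / tau
        r, v = r + dt * dr, v + dt * dv
        rates.append(r)

    print(f"rate before / during / after the input step: "
          f"{rates[9999]:.3f} / {max(rates):.3f} / {rates[-1]:.3f}")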

  19. Millisecond solar radio spikes observed at 1420 MHz

    Science.gov (United States)

    Dabrowski, B. P.; Kus, A. J.

    We present results from observations of narrowband solar millisecond radio spikes at 1420 MHz. Observing data were collected between February 2000 and December 2001 with the 15-m radio telescope at the Centre for Astronomy of Nicolaus Copernicus University in Torun, Poland, equipped with a radio spectrograph covering the 1352-1490 MHz frequency band with 3 MHz frequency resolution and 80 microsecond time resolution. We analyzed the duration, bandwidth, and frequency drift rate of individual radio spikes. Some of the observed spikes showed well-defined fine structure. On dynamic radio spectrograms of the investigated events we noticed complex structures formed by numerous individual spikes, known as chains of spikes, as well as distinctly different column structures. The positions of the active regions associated with the spike emission were also investigated; most of them are located near the center of the solar disk, suggesting strong beaming of the spike emission.

  20. Memories as bifurcations: realization by collective dynamics of spiking neurons under stochastic inputs.

    Science.gov (United States)

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2015-02-01

    How the neural system proceeds from sensory stimuli to generate appropriate behaviors is a basic question that has not yet been fully answered. In contrast to the conventional viewpoint, in which the external stimulus dominantly drives the response behavior, recent studies have revealed that not only external stimuli, but also intrinsic neural dynamics, contribute to the generation of response behavior. In particular, spontaneous activity, which is neural activity without extensive external stimuli, has been found to exhibit similar patterns to those evoked by external inputs, from time to time. In order to further understand the role of this spontaneous activity on the response, we propose a viewpoint, memories-as-bifurcations, that differs from the traditional memories-as-attractors viewpoint. According to this viewpoint, memory is recalled when spontaneous neural activity is changed to an appropriate output activity upon the application of an input. After reviewing the previous rate-coding model embodying this viewpoint, we employ a model of a spiking neuron network that can embed input/output associations, and study the dynamics of collective neural activity. The organized neural activity, which matched the target pattern, is shown to be generated even under application of stochastic input, while the spontaneous activity, which apparently shows noisy dynamics, is found to exhibit selectively higher similarity with evoked activities corresponding to embedded target patterns. These results suggest that such an intrinsic structure in the spontaneous activity might play a role in generating the higher response. The relevance of these results to biological neural processing is also discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Improved measures of phase-coupling between spikes and the Local Field Potential

    NARCIS (Netherlands)

    Vinck, M.; Battaglia, F.P.; Womelsdorf, T.; Pennartz, C.

    2012-01-01

    An important tool to study rhythmic neuronal synchronization is provided by relating spiking activity to the Local Field Potential (LFP). Two types of interdependent spike-LFP measures exist. The first approach is to directly quantify the consistency of single spike-LFP phases across spikes,

  2. Spiking irregularity and frequency modulate the behavioral report of single-neuron stimulation

    NARCIS (Netherlands)

    Doron, G.; Heimendahl, M. von; Schlattmann, P.; Houweling, A.R.; Brecht, M.

    2014-01-01

    The action potential activity of single cortical neurons can evoke measurable sensory effects, but it is not known how spiking parameters and neuronal subtypes affect the evoked sensations. Here, we examined the effects of spike train irregularity, spike frequency, and spike number on the

  3. SNW1 is a critical regulator of spatial BMP activity, neural plate border formation, and neural crest specification in vertebrate embryos.

    Directory of Open Access Journals (Sweden)

    Mary Y Wu

    2011-02-01

    Full Text Available Bone morphogenetic protein (BMP) gradients provide positional information to direct cell fate specification, such as patterning of the vertebrate ectoderm into neural, neural crest, and epidermal tissues, with precise borders segregating these domains. However, little is known about how BMP activity is regulated spatially and temporally during vertebrate development to contribute to embryonic patterning, and more specifically to neural crest formation. Through a large-scale in vivo functional screen in Xenopus for neural crest fate, we identified an essential regulator of BMP activity, SNW1. SNW1 is a nuclear protein known to regulate gene expression. Using antisense morpholinos to deplete SNW1 protein in both Xenopus and zebrafish embryos, we demonstrate that dorsally expressed SNW1 is required for neural crest specification, and this is independent of mesoderm formation and gastrulation morphogenetic movements. By exploiting a combination of immunostaining for phosphorylated Smad1 in Xenopus embryos and a BMP-dependent reporter transgenic zebrafish line, we show that SNW1 regulates a specific domain of BMP activity in the dorsal ectoderm at the neural plate border at post-gastrula stages. We use double in situ hybridizations and immunofluorescence to show how this domain of BMP activity is spatially positioned relative to the neural crest domain and that of SNW1 expression. Further in vivo and in vitro assays using cell culture and tissue explants allow us to conclude that SNW1 acts upstream of the BMP receptors. Finally, we show that the requirement of SNW1 for neural crest specification is through its ability to regulate BMP activity, as we demonstrate that targeted overexpression of BMP to the neural plate border is sufficient to restore neural crest formation in Xenopus SNW1 morphants. We conclude that through its ability to regulate a specific domain of BMP activity in the vertebrate embryo, SNW1 is a critical regulator of neural plate border formation and neural crest specification.

  4. Activity in part of the neural correlates of consciousness reflects integration.

    Science.gov (United States)

    Eriksson, Johan

    2017-10-01

    Integration is commonly viewed as a key process for generating conscious experiences. Accordingly, there should be increased activity within the neural correlates of consciousness when demands on integration increase. We used fMRI and "informational masking" to isolate the neural correlates of consciousness and measured how the associated brain activity changed as a function of required integration. Integration was manipulated by comparing the experience of hearing simple reoccurring tones to hearing harmonic tone triplets. The neural correlates of auditory consciousness included superior temporal gyrus, lateral and medial frontal regions, cerebellum, and also parietal cortex. Critically, only activity in left parietal cortex increased significantly as a function of increasing demands on integration. We conclude that integration can explain part of the neural activity associated with the generation of conscious experiences, but that much of the associated brain activity apparently reflects other processes. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Physical methods for generating and decoding neural activity in Hirudo verbana

    Science.gov (United States)

    Migliori, Benjamin John

    The interface between living nervous systems and hardware is an excellent proving ground for precision experimental methods and information classification systems. Nervous systems are complex (10^4 -- 10^15(!) connections), fragile, and highly active in intricate, constantly evolving patterns. However, despite the conveniently electrical nature of neural transmission, the interface between nervous systems and hardware poses significant experimental difficulties. As the desire for direct interfaces with neural signals continues to expand, the need for methods of generating and measuring neural activity with high spatiotemporal precision has become increasingly critical. In this thesis, I describe advances I have made in the ability to modify, generate, measure, and understand neural signals both in- and ex-vivo. I focus on methods developed for transmitting and extracting signals in the intact nervous system of Hirudo verbana (the medicinal leech), an animal with a minimally complex nervous system (10000 neurons distributed in packets along a nerve cord) that exhibits a diverse array of behaviors. To introduce artificial activity patterns, I developed a photothermal activation system in which a highly focused laser is used to irradiate carbon microparticles in contact with target neurons. The resulting local temperature increase generates an electrical current that forces the target neuron to fire neural signals, thereby providing a unique neural input mechanism. These neural signals can potentially be used to alter behavioral choice or generate specific behavioral output, and can be used endogenously in many animal models. I also describe new tools developed to expand the application of this method. In complement to this input system, I describe a new method of analyzing neural output signals involved in long-range coordination of behaviors. Leech behavioral signals are propagated between neural packets as electrical pulses in the nerve connective, a bundle of

  6. Optogenetics in Silicon: A Neural Processor for Predicting Optically Active Neural Networks.

    Science.gov (United States)

    Junwen Luo; Nikolic, Konstantin; Evans, Benjamin D; Na Dong; Xiaohan Sun; Andras, Peter; Yakovlev, Alex; Degenaar, Patrick

    2017-02-01

    We present a reconfigurable neural processor for real-time simulation and prediction of opto-neural behaviour. We combined a detailed Hodgkin-Huxley CA3 neuron integrated with a four-state Channelrhodopsin-2 (ChR2) model into reconfigurable silicon hardware. Our architecture consists of a Field Programmable Gate Array (FPGA) with a custom-built computing data-path, a separate data management system and a memory-based router. Advancements over previous work include the incorporation of short and long-term calcium and light-dependent ion channels in reconfigurable hardware. Also, the developed processor is computationally efficient, requiring only 0.03 ms processing time per sub-frame for a single neuron and 9.7 ms for a fully connected network of 500 neurons with a given FPGA frequency of 56.7 MHz. It can therefore be utilized for exploration of closed loop processing and tuning of biologically realistic optogenetic circuitry.

  7. Restoring Behavior via Inverse Neurocontroller in a Lesioned Cortical Spiking Model Driving a Virtual Arm.

    Science.gov (United States)

    Dura-Bernal, Salvador; Li, Kan; Neymotin, Samuel A; Francis, Joseph T; Principe, Jose C; Lytton, William W

    2016-01-01

    Neural stimulation can be used as a tool to elicit natural sensations or behaviors by modulating neural activity. This can be potentially used to mitigate the damage of brain lesions or neural disorders. However, in order to obtain the optimal stimulation sequences, it is necessary to develop neural control methods, for example by constructing an inverse model of the target system. For real brains, this can be very challenging, and often unfeasible, as it requires repeatedly stimulating the neural system to obtain enough probing data, and depends on an unwarranted assumption of stationarity. By contrast, detailed brain simulations may provide an alternative testbed for understanding the interactions between ongoing neural activity and external stimulation. Unlike real brains, the artificial system can be probed extensively and precisely, and detailed output information is readily available. Here we employed a spiking network model of sensorimotor cortex trained to drive a realistic virtual musculoskeletal arm to reach a target. The network was then perturbed, in order to simulate a lesion, by either silencing neurons or removing synaptic connections. All lesions led to significant behavioral impairments during the reaching task. The remaining cells were then systematically probed with a set of single and multiple-cell stimulations, and results were used to build an inverse model of the neural system. The inverse model was constructed using a kernel adaptive filtering method, and was used to predict the neural stimulation pattern required to recover the pre-lesion neural activity. Applying the derived neurostimulation to the lesioned network improved the reaching behavior performance. This work proposes a novel neurocontrol method, and provides theoretical groundwork on the use of biomimetic brain models to develop and evaluate neurocontrollers that restore the function of damaged brain regions and the corresponding motor behaviors.

  8. Neural activity during health messaging predicts reductions in smoking above and beyond self-report.

    Science.gov (United States)

    Falk, Emily B; Berkman, Elliot T; Whalen, Danielle; Lieberman, Matthew D

    2011-03-01

    The current study tested whether neural activity in response to messages designed to help smokers quit could predict smoking reduction, above and beyond self-report. Using neural activity in an a priori region of interest (a subregion of medial prefrontal cortex [MPFC]), in response to ads designed to help smokers quit smoking, we prospectively predicted reductions in smoking in a community sample of smokers (N = 28) who were attempting to quit smoking. Smoking was assessed via expired carbon monoxide (CO; a biological measure of recent smoking) at baseline and 1 month following exposure to professionally developed quitting ads. A positive relationship was observed between activity in the MPFC region of interest and successful quitting (increased activity in MPFC was associated with a greater decrease in expired CO). The addition of neural activity to a model predicting changes in CO from self-reported intentions, self-efficacy, and ability to relate to the messages significantly improved model fit, doubling the variance explained (R²self-report = .15, R²self-report + neural activity = .35, R²change = .20). Neural activity is a useful complement to existing self-report measures. In this investigation, we extend prior work predicting behavior change based on neural activity in response to persuasive media to an important health domain and discuss potential psychological interpretations of the brain-behavior link. Our results support a novel use of neuroimaging technology for understanding the psychology of behavior change and facilitating health promotion. (c) 2011 APA, all rights reserved
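
    The key statistic in this record is the gain in explained variance when the neural predictor is added to the self-report model. The sketch below reproduces that type of hierarchical comparison on synthetic data with scikit-learn; the data, effect sizes, and predictor names are hypothetical, not the study's sample.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(11)
    n = 28                                            # same order as the study's N
    self_report = rng.standard_normal((n, 3))         # intentions, self-efficacy, relatedness
    neural = rng.standard_normal(n)
    # Hypothetical outcome: weakly related to self-report, more strongly to the
    # neural predictor.
    delta_co = 0.4 * self_report[:, 0] + 0.8 * neural + rng.standard_normal(n)

    r2_self = LinearRegression().fit(self_report, delta_co).score(self_report, delta_co)
    both = np.column_stack([self_report, neural])
    r2_both = LinearRegression().fit(both, delta_co).score(both, delta_co)
    print(f"R2 self-report = {r2_self:.2f}, R2 with neural = {r2_both:.2f}, "
          f"change = {r2_both - r2_self:.2f}")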

  9. The Effects of Guanfacine and Phenylephrine on a Spiking Neuron Model of Working Memory.

    Science.gov (United States)

    Duggins, Peter; Stewart, Terrence C; Choo, Xuan; Eliasmith, Chris

    2017-01-01

    We use a spiking neural network model of working memory (WM) capable of performing the spatial delayed response task (DRT) to investigate two drugs that affect WM: guanfacine (GFC) and phenylephrine (PHE). In this model, the loss of information over time results from changes in the spiking neural activity through recurrent connections. We reproduce the standard forgetting curve and then show that this curve changes in the presence of GFC and PHE, whose application is simulated by manipulating functional, neural, and biophysical properties of the model. In particular, applying GFC causes increased activity in neurons that are sensitive to the information currently being remembered, while applying PHE leads to decreased activity in these same neurons. Interestingly, these differential effects emerge from network-level interactions because GFC and PHE affect all neurons equally. We compare our model to both electrophysiological data from neurons in monkey dorsolateral prefrontal cortex and to behavioral evidence from monkeys performing the DRT. Copyright © 2016 Cognitive Science Society, Inc.

  10. Neural Activity Patterns in the Human Brain Reflect Tactile Stickiness Perception

    Science.gov (United States)

    Kim, Junsuk; Yeon, Jiwon; Ryu, Jaekyun; Park, Jang-Yeon; Chung, Soon-Cheol; Kim, Sung-Phil

    2017-01-01

    Our previous human fMRI study found brain activations correlated with tactile stickiness perception using the uni-variate general linear model (GLM) (Yeon et al., 2017). Here, we conducted an in-depth investigation on neural correlates of sticky sensations by employing a multivoxel pattern analysis (MVPA) on the same dataset. In particular, we statistically compared multi-variate neural activities in response to the three groups of sticky stimuli: A supra-threshold group including a set of sticky stimuli that evoked vivid sticky perception; an infra-threshold group including another set of sticky stimuli that barely evoked sticky perception; and a sham group including acrylic stimuli with no physically sticky property. Searchlight MVPAs were performed to search for local activity patterns carrying neural information of stickiness perception. Similar to the uni-variate GLM results, significant multi-variate neural activity patterns were identified in postcentral gyrus, subcortical (basal ganglia and thalamus), and insula areas (insula and adjacent areas). Moreover, MVPAs revealed that activity patterns in posterior parietal cortex discriminated the perceptual intensities of stickiness, which was not present in the uni-variate analysis. Next, we applied a principal component analysis (PCA) to the voxel response patterns within identified clusters so as to find low-dimensional neural representations of stickiness intensities. Follow-up clustering analyses clearly showed separate neural grouping configurations between the Supra- and Infra-threshold groups. Interestingly, this neural categorization was in line with the perceptual grouping pattern obtained from the psychophysical data. Our findings thus suggest that different stickiness intensities would elicit distinct neural activity patterns in the human brain and may provide a neural basis for the perception and categorization of tactile stickiness. PMID:28936171

  11. Anisotropy of ongoing neural activity in the primate visual cortex

    Directory of Open Access Journals (Sweden)

    Maier A

    2014-09-01

    Full Text Available Alexander Maier,1 Michele A Cox,1 Kacie Dougherty,1 Brandon Moore,1 David A Leopold2 1Department of Psychology, College of Arts and Science, Vanderbilt University, Nashville, TN, USA; 2Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, National Institute of Health, Bethesda, MD, USA Abstract: The mammalian neocortex features distinct anatomical variation in its tangential and radial extents. This review consolidates previously published findings from our group in order to compare and contrast the spatial profile of neural activity coherence across these distinct cortical dimensions. We focus on studies of ongoing local field potential (LFP) data obtained simultaneously from multiple sites in the primary visual cortex in two types of experiments in which electrode contacts were spaced either along the cortical surface or at different laminar positions. These studies demonstrate that across both dimensions the coherence of ongoing LFP fluctuations diminishes as a function of interelectrode distance, although the nature and spatial scale of this falloff is very different. Along the cortical surface, the overall LFP coherence declines gradually and continuously away from a given position. In contrast, across the cortical layers, LFP coherence is discontinuous and compartmentalized as a function of depth. Specifically, regions of high LFP coherence fall into discrete superficial and deep laminar zones, with an abrupt discontinuity between the granular and infragranular layers. This spatial pattern of ongoing LFP coherence is similar when animals are at rest and when they are engaged in a behavioral task. These results point to the existence of partially segregated laminar zones of cortical processing that extend tangentially within the laminar compartments and are thus oriented orthogonal to the cortical columns. We interpret these electrophysiological observations in light of the known anatomical organization of

  12. Kv1 channels control spike threshold dynamics and spike timing in cortical pyramidal neurones.

    Science.gov (United States)

    Higgs, Matthew H; Spain, William J

    2011-11-01

    Previous studies showed that cortical pyramidal neurones (PNs) have a dynamic spike threshold that functions as a high-pass filter, enhancing spike timing in response to high-frequency input. While it is commonly assumed that Na(+) channel inactivation is the primary mechanism of threshold accommodation, the possible role of K(+) channel activation in fast threshold changes has not been well characterized. The present study tested the hypothesis that low-voltage activated Kv1 channels affect threshold dynamics in layer 2-3 PNs, using α-dendrotoxin (DTX) or 4-aminopyridine (4-AP) to block these conductances. We found that Kv1 blockade reduced the dynamic changes of spike threshold in response to a variety of stimuli, including stimulus-evoked synaptic input, current steps and ramps of varied duration, and noise. Analysis of the responses to noise showed that Kv1 channels increased the coherence of spike output with high-frequency components of the stimulus. A simple model demonstrates that a dynamic spike threshold can account for this effect. Our results show that the Kv1 conductance is a major mechanism that contributes to the dynamic spike threshold and precise spike timing of cortical PNs.
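
    A simple way to see how a dynamic threshold acts as a high-pass filter, in the spirit of the "simple model" mentioned above, is to let the spike threshold relax toward a value that increases with depolarisation: slow ramps give the threshold time to rise, while fast ramps reach the resting threshold first. The sketch below is a phenomenological illustration with an imposed voltage ramp and assumed parameter values, not the study's conductance-based Kv1 model.

    # Phenomenological sketch (assumed parameters): the spike threshold relaxes
    # toward a target that rises with depolarisation, so slower voltage ramps
    # produce spikes at more depolarised thresholds.
    dt = 1e-5
    theta0, tau_th, alpha = -50.0, 0.015, 0.6   # resting threshold (mV), time constant (s), coupling
    v_rest = -70.0

    def threshold_at_spike(ramp_mV_per_s):
        v, theta = v_rest, theta0
        while v < theta:
            v += ramp_mV_per_s * dt                               # imposed voltage ramp
            target = theta0 + alpha * max(v - v_rest - 10.0, 0.0)  # Kv1-like threshold shift
            theta += dt * (target - theta) / tau_th
        return v

    for slope in (200.0, 1000.0, 5000.0):                          # mV/s
        print(f"ramp {slope:6.0f} mV/s -> spike threshold {threshold_at_spike(slope):.1f} mV")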

  13. Characterization of Early Cortical Neural Network Development in Multiwell Microelectrode Array Plates

    Science.gov (United States)

    We examined the development of neural network activity using microelectrode array (MEA) recordings made in multi-well MEA plates (mwMEAs) over the first 12 days in vitro (DIV). In primary cortical cultures made from postnatal rats, action potential spiking activity was essentiall...

  14. Response Features Determining Spike Times

    Directory of Open Access Journals (Sweden)

    Barry J. Richmond

    1999-01-01

    redundant with that carried by the coarse structure. Thus, the existence of precisely timed spike patterns carrying stimulus-related information does not imply control of spike timing at precise time scales.

  15. Parasympathetic neural activity accounts for the lowering of exercise heart rate at high altitude

    DEFF Research Database (Denmark)

    Boushel, Robert Christopher; Calbet, J A; Rådegran, G

    2001-01-01

    In chronic hypoxia, both heart rate (HR) and cardiac output (Q) are reduced during exercise. The role of parasympathetic neural activity in lowering HR is unresolved, and its influence on Q and oxygen transport at high altitude has never been studied.

  16. Attenuation of β-Amyloid Deposition and Neurotoxicity by Chemogenetic Modulation of Neural Activity.

    Science.gov (United States)

    Yuan, Peng; Grutzendler, Jaime

    2016-01-13

    Aberrant neural hyperactivity has been observed in early stages of Alzheimer's disease (AD) and may be a driving force in the progression of amyloid pathology. Evidence for this includes the findings that neural activity may modulate β-amyloid (Aβ) peptide secretion and experimental stimulation of neural activity can increase amyloid deposition. However, whether long-term attenuation of neural activity prevents the buildup of amyloid plaques and associated neural pathologies remains unknown. Using viral-mediated delivery of designer receptors exclusively activated by designer drugs (DREADDs), we show in two AD-like mouse models that chronic intermittent increases or reductions of activity have opposite effects on Aβ deposition. Neural activity reduction markedly decreases Aβ aggregation in regions containing axons or dendrites of DREADD-expressing neurons, suggesting the involvement of synaptic and nonsynaptic Aβ release mechanisms. Importantly, activity attenuation is associated with a reduction in axonal dystrophy and synaptic loss around amyloid plaques. Thus, modulation of neural activity could constitute a potential therapeutic strategy for ameliorating amyloid-induced pathology in AD. A novel chemogenetic approach to upregulate and downregulate neuronal activity in Alzheimer's disease (AD) mice was implemented. This led to the first demonstration that chronic intermittent attenuation of neuronal activity in vivo significantly reduces amyloid deposition. The study also demonstrates that modulation of β-amyloid (Aβ) release can occur at both axonal and dendritic fields, suggesting the involvement of synaptic and nonsynaptic Aβ release mechanisms. Activity reductions also led to attenuation of the synaptic pathology associated with amyloid plaques. Therefore, chronic attenuation of neuronal activity could constitute a novel therapeutic approach for AD. Copyright © 2016 the authors 0270-6474/16/360632-10$15.00/0.

  17. The race to learn: spike timing and STDP can coordinate learning and recall in CA3.

    Science.gov (United States)

    Nolan, Christopher R; Wyeth, Gordon; Milford, Michael; Wiles, Janet

    2011-06-01

    The CA3 region of the hippocampus has long been proposed as an autoassociative network performing pattern completion on known inputs. The dentate gyrus (DG) region is often proposed as a network performing the complementary function of pattern separation. Neural models of pattern completion and separation generally designate explicit learning phases to encode new information and assume an ideal fixed threshold at which to stop learning new patterns and begin recalling known patterns. Memory systems are significantly more complex in practice, with the degree of memory recall depending on context-specific goals. Here, we present our spike-timing separation and completion (STSC) model of the entorhinal cortex (EC), DG, and CA3 network, ascribing to each region a role similar to that in existing models but adding a temporal dimension by using a spiking neural network. Simulation results demonstrate that (a) spike-timing dependent plasticity in the EC-CA3 synapses provides a pattern completion ability without recurrent CA3 connections, (b) the race between activation of CA3 cells via EC-CA3 synapses and activation of the same cells via DG-CA3 synapses distinguishes novel from known inputs, and (c) modulation of the EC-CA3 synapses adjusts the learned versus test input similarity required to evoke a direct CA3 response prior to any DG activity, thereby adjusting the pattern completion threshold. These mechanisms suggest that spike timing can arbitrate between learning and recall based on the novelty of each individual input, ensuring control of the learn-recall decision resides in the same subsystem as the learned memories themselves. The proposed modulatory signal does not override this decision but biases the system toward either learning or recall. The model provides an explanation for empirical observations that a reduction in novelty produces a corresponding reduction in the latency of responses in CA3 and CA1. Copyright © 2010 Wiley-Liss, Inc.

  18. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
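
    The computational target in this work, sampling from posteriors in graphical models with "explaining away", can be illustrated without any neuron model at all. The sketch below runs plain Gibbs sampling in a tiny burglary/earthquake/alarm network and shows the explaining-away effect; the conditional probability table is an arbitrary choice, and the spiking implementation described in the paper is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(5)
    pB, pE = 0.1, 0.1                                   # prior probabilities of the causes
    pA = {(0, 0): 0.001, (1, 0): 0.9, (0, 1): 0.3, (1, 1): 0.95}   # P(A=1 | B, E)

    def sample_given(other, prior, first_slot):
        """Sample one cause given the other cause and the evidence A = 1."""
        key1 = (1, other) if first_slot else (other, 1)
        key0 = (0, other) if first_slot else (other, 0)
        p1 = prior * pA[key1]
        p0 = (1 - prior) * pA[key0]
        return int(rng.random() < p1 / (p1 + p0))

    B, E, samples = 0, 0, []
    for step in range(20000):
        B = sample_given(E, pB, first_slot=True)
        E = sample_given(B, pE, first_slot=False)
        if step >= 2000:                                # discard burn-in
            samples.append((B, E))

    samples = np.array(samples)
    pB_post = samples[:, 0].mean()
    pB_given_E1 = samples[samples[:, 1] == 1, 0].mean()
    print(f"P(B=1 | A=1) ~ {pB_post:.2f};  P(B=1 | A=1, E=1) ~ {pB_given_E1:.2f}")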

  19. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Dejan Pecevski

    2011-12-01

    Full Text Available An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  20. Dynamic balance of excitation and inhibition rapidly modulates spike probability and precision in feed-forward hippocampal circuits.

    Science.gov (United States)

    Wahlstrom-Helgren, Sarah; Klyachko, Vitaly A

    2016-12-01

    Feed-forward inhibitory (FFI) circuits are important for many information-processing functions. FFI circuit operations critically depend on the balance and timing between the excitatory and inhibitory components, which undergo rapid dynamic changes during neural activity due to short-term plasticity (STP) of both components. How dynamic changes in excitation/inhibition (E/I) balance during spike trains influence FFI circuit operations remains poorly understood. In the current study we examined the role of STP in the FFI circuit functions in the mouse hippocampus. Using a coincidence detection paradigm with simultaneous activation of two Schaffer collateral inputs, we found that the spiking probability in the target CA1 neuron was increased while spike precision concomitantly decreased during high-frequency bursts compared with a single spike. Blocking inhibitory synaptic transmission revealed that dynamics of inhibition predominately modulates the spike precision but not the changes in spiking probability, whereas the latter is modulated by the dynamics of excitation. Further analyses combining whole cell recordings and simulations of the FFI circuit suggested that dynamics of the inhibitory circuit component may influence spiking behavior during bursts by broadening the width of excitatory postsynaptic responses and that the strength of this modulation depends on the basal E/I ratio. We verified these predictions using a mouse model of fragile X syndrome, which has an elevated E/I ratio, and found a strongly reduced modulation of postsynaptic response width during bursts. Our results suggest that changes in the dynamics of excitatory and inhibitory circuit components due to STP play important yet distinct roles in modulating the properties of FFI circuits. Copyright © 2016 the American Physiological Society.
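
    The division of labour suggested above, with inhibition shaping spike precision and excitation shaping spike probability, can be caricatured with a simple feed-forward inhibition toy model: an EPSP followed by a delayed IPSP, with trial-to-trial variability in the excitatory amplitude. The kernels, delay, and amplitudes below are assumptions chosen only to illustrate how intact inhibition narrows the integration window (fewer but more precisely timed spikes); this is not the recorded hippocampal circuit or its short-term plasticity.

    import numpy as np

    rng = np.random.default_rng(6)
    dt = 1e-4
    t = np.arange(0.0, 0.05, dt)                 # 50 ms of simulated time

    def kernel(onset, tau_rise, tau_decay):
        """Difference-of-exponentials PSP shape, normalised to unit peak."""
        s = np.clip(t - onset, 0.0, None)
        k = np.exp(-s / tau_decay) - np.exp(-s / tau_rise)
        k[t < onset] = 0.0
        return k / k.max()

    epsp = kernel(0.010, 0.002, 0.010)           # excitation arrives at 10 ms
    ipsp = kernel(0.012, 0.001, 0.008)           # disynaptic inhibition, 2 ms later

    def spike_stats(g_inh, n_trials=5000, threshold=1.0):
        times = []
        for _ in range(n_trials):
            g_exc = 1.15 + 0.15 * rng.standard_normal()   # trial-to-trial variability
            v = g_exc * epsp - g_inh * ipsp
            idx = np.nonzero(v >= threshold)[0]
            if idx.size:
                times.append(t[idx[0]])
        p = len(times) / n_trials
        jitter_ms = 1e3 * np.std(times) if times else float("nan")
        return p, jitter_ms

    for label, g_inh in (("inhibition blocked", 0.0), ("inhibition intact ", 0.9)):
        p, jit = spike_stats(g_inh)
        print(f"{label}: spike probability {p:.2f}, first-spike jitter {jit:.2f} ms")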

  1. Dynamics of a modified Hindmarsh-Rose neural model with random perturbations: Moment analysis and firing activities

    Science.gov (United States)

    Mondal, Argha; Upadhyay, Ranjit Kumar

    2017-11-01

    In this paper, an attempt has been made to understand the activity of the mean membrane voltage and subsidiary system variables with moment equations (i.e., mean, variance, and covariances) under a noisy environment. We consider a biophysically plausible modified Hindmarsh-Rose (H-R) neural system injected by an applied current and exhibiting the spiking-bursting phenomenon. The effects of the predominant parameters on the dynamical behavior of the modified H-R system are investigated. Numerically, it exhibits period-doubling and period-halving bifurcations and chaos. Further, the nonlinear system has been analyzed for the first- and second-order moments under additive stochastic perturbations. The deterministic system has been solved using a fourth-order Runge-Kutta method and the noisy systems by an Euler scheme. It has been demonstrated that the firing properties of neurons that evoke an action potential in a certain parameter space of the large exact system can be estimated using an approximated model. Strong stimulation can cause an increase or decrease of the firing patterns. For a fixed set of parameter values, the firing behavior and the dynamical differences of the collective variables of the large exact and approximated systems are investigated.
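
    For readers unfamiliar with the underlying model, the sketch below integrates the classic (unmodified) Hindmarsh-Rose equations with additive noise on the membrane variable using an Euler-Maruyama step and counts spikes with a simple threshold crossing. The modified model and the moment equations analysed in the paper are not reproduced, and the noise intensity is an assumed value; the other parameters are the standard bursting set.

    import numpy as np

    rng = np.random.default_rng(7)
    a, b, c, d = 1.0, 3.0, 1.0, 5.0
    r, s, x_rest, I = 0.006, 4.0, -1.6, 3.25       # classic bursting parameters
    sigma = 0.05                                    # assumed noise intensity

    dt, T = 0.01, 2000.0
    x, y, z = -1.5, -10.0, 2.0
    spike_count, prev_above = 0, False
    for _ in range(int(T / dt)):
        dx = y + b * x**2 - a * x**3 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_rest) - z)
        x += dt * dx + sigma * np.sqrt(dt) * rng.standard_normal()
        y += dt * dy
        z += dt * dz
        above = x > 1.0                             # crude spike detector
        if above and not prev_above:
            spike_count += 1
        prev_above = above

    print("spikes detected:", spike_count)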

  2. To compare the effect of Active Neural Mobilization during Intermittent Lumbar Traction and Intermittent Lumbar Traction followed by Active Neural Mobilization in cases of Lumbar Radiculopathy

    Directory of Open Access Journals (Sweden)

    Jaywant Nagulkar

    2016-08-01

    Full Text Available To compare the effectiveness of active neural mobilization (ANM) during intermittent lumbar traction (ILT) and intermittent lumbar traction followed by active neural mobilization in patients with low back pain (LBP) with radiculopathy; to study the effect of ANM during ILT and of ILT followed by ANM in these patients on the visual analog scale (VAS), the P1 angle of SLR, the P2 angle of SLR, and the Oswestry disability index (ODI); and to compare the two treatments on these outcome measures. In this study, 107 patients with LBP with radiculopathy were randomly assigned to two groups. Group A, containing 54 patients, received active neural mobilization during intermittent lumbar traction, and group B received intermittent lumbar traction followed by active neural mobilization. Data on all the outcome measures were recorded on day 0 (pre-treatment) and on day 10 (post-treatment), and were analyzed using the statistical software STATA version 9.0. Patients in both groups showed improvement in all four outcome measures compared with baseline values, and patients treated in group A showed more improvement than those in group B. This study concluded that ANM during ILT gives more relief and yields better responses in patients with LBP with radiculopathy, and may help patients resume their daily activities.

  3. Stochastic inference with spiking neurons in the high-conductance state

    Science.gov (United States)

    Petrovici, Mihai A.; Bill, Johannes; Bytschok, Ilja; Schemmel, Johannes; Meier, Karlheinz

    2016-10-01

    The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro. Based on a propagation of the membrane autocorrelation across spike bursts, we provide an analytical derivation of the neural activation function that holds for a large parameter space, including the high-conductance state. On this basis, we show how an ensemble of leaky integrate-and-fire neurons with conductance-based synapses embedded in a spiking environment can attain the correct firing statistics for sampling from a well-defined target distribution. For recurrent networks, we examine convergence toward stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This points to a new computational role of high-conductance states and establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level.

  4. Removal of microcystin-LR from spiked water using either activated carbon or anthracite as filter material.

    Science.gov (United States)

    Drogui, Patrick; Daghrir, Rimeh; Simard, Marie-Christine; Sauvageau, Christine; Blais, Jean François

    2012-01-01

    The occurrence of cyanobacterial toxins (blue-green algae) in drinking water sources is a big concern for human health. Removal of microcystin-LR (MC-LR) from drinking water was evaluated at the laboratory pilot scale using either granular activated carbon (GAC) or powdered activated carbon (PAC) and compared with the treatment using anthracite as filter material. Virgin GAC was more effective at removing MC-LR (initial concentration ranging from 9 to 47 microg L(-1)) to reach the World Health Organization recommended level (1.0 microg L(-1)). When the GAC filter was colonized by bacteria, the filter became less effective at removing MC-LR owing to competitive reactions occurring between protein adsorption (released by bacteria) and MC-LR adsorption. Using PAC, the concentration of MC-LR decreased from 22 to 3 microg L(-1) (removal of 86% of MC-LR) by the addition of 100 mg PAC L(-1).

  5. Task-dependent modulation of oscillatory neural activity during movements

    DEFF Research Database (Denmark)

    Herz, D. M.; Christensen, M. S.; Reck, C.

    2011-01-01

    -dependent modulation of frequency coupling within this network. To this end we recorded 122-multichannel EEG in 13 healthy subjects while they performed three simple motor tasks. EEG data source modeling using individual MR images was carried out with a multiple source beamformer approach. A bilateral motor network...... for inferring on architecture and coupling parameters of neural networks....

  6. Voltage Estimation in Active Distribution Grids Using Neural Networks

    DEFF Research Database (Denmark)

    Pertl, Michael; Heussen, Kai; Gehrke, Oliver

    2016-01-01

    the observability of distribution systems has to be improved. To increase the situational awareness of the power system operator data driven methods can be employed. These methods benefit from newly available data sources such as smart meters. This paper presents a voltage estimation method based on neural networks...

  7. Spike-driven synaptic plasticity: theory, simulation, VLSI implementation.

    Science.gov (United States)

    Fusi, S; Annunziato, M; Badoni, D; Salamon, A; Amit, D J

    2000-10-01

    We present a model for spike-driven dynamics of a plastic synapse, suited for aVLSI implementation. The synaptic device behaves as a capacitor on short timescales and preserves the memory of two stable states (efficacies) on long timescales. The transitions (LTP/LTD) are stochastic because both the number and the distribution of neural spikes in any finite (stimulation) interval fluctuate, even at fixed pre- and postsynaptic spike rates. The dynamics of the single synapse is studied analytically by extending the solution to a classic problem in queuing theory (Takacs process). The model of the synapse is implemented in aVLSI and consists of only 18 transistors. It is also directly simulated. The simulations indicate that LTP/LTD probabilities versus rates are robust to fluctuations of the electronic parameters in a wide range of rates. The solutions for these probabilities are in very good agreement with both the simulations and measurements. Moreover, the probabilities are readily manipulable by variations of the chip's parameters, even in ranges where they are very small. The tests of the electronic device cover the range from spontaneous activity (3-4 Hz) to stimulus-driven rates (50 Hz). Low transition probabilities can be maintained in all ranges, even though the intrinsic time constants of the device are short (approximately 100 ms). Synaptic transitions are triggered by elevated presynaptic rates: for low presynaptic rates, there are essentially no transitions. The synaptic device can preserve its memory for years in the absence of stimulation. Stochasticity of learning is a result of the variability of interspike intervals; noise is a feature of the distributed dynamics of the network. The fact that the synapse is binary on long timescales solves the stability problem of synaptic efficacies in the absence of stimulation. Yet stochastic learning theory ensures that it does not affect the collective behavior of the network, if the transition probabilities are
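
    The sketch below is a strongly simplified software caricature of such a spike-driven bistable synapse, not the aVLSI circuit or its Takacs-process analysis: an internal variable drifts toward one of two stable states and is kicked up or down at presynaptic spikes depending on whether the postsynaptic neuron fired recently, so that at fixed pre- and postsynaptic rates the LTP transition probability is set entirely by spike-train fluctuations. All parameter values are assumed.

    import numpy as np

    rng = np.random.default_rng(8)
    dt, T = 1e-3, 0.5                   # time step and stimulation length (s)
    drift, up, down = 0.5, 0.15, 0.10   # assumed refresh rate and jump sizes
    coincidence_window = 0.02           # "recent postsynaptic spike" window (s)

    def ltp_probability(pre_rate, post_rate, n_trials=500):
        transitions = 0
        for _ in range(n_trials):
            X, last_post = 0.0, -1.0    # start in the low (depressed) state
            for step in range(int(T / dt)):
                t = step * dt
                if rng.random() < post_rate * dt:
                    last_post = t
                if rng.random() < pre_rate * dt:
                    X += up if (t - last_post) < coincidence_window else -down
                X += dt * drift * (1.0 if X > 0.5 else -1.0)   # bistable refresh
                X = min(max(X, 0.0), 1.0)
            transitions += X > 0.5
        return transitions / n_trials

    for post_rate in (5.0, 20.0, 50.0):
        print(f"post rate {post_rate:4.1f} Hz -> "
              f"P(LTP) = {ltp_probability(pre_rate=30.0, post_rate=post_rate):.2f}")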

  8. Amniotic fluid paraoxonase-1 activity, thyroid hormone concentration and oxidant status in neural tube defects.

    Science.gov (United States)

    Sak, Sibel; Agacayak, Elif; Tunc, Senem Yaman; Icen, Mehmet Sait; Findik, Fatih Mehmet; Sak, Muhammet Erdal; Yalinkaya, Ahmet; Gul, Talip

    2016-09-01

    The aim of this study was to investigate the potential association between neural tube defects and paraoxonase-1 activity in amniotic fluid. We studied total oxidant status, total antioxidant capacity, paraoxonase-1 activity and thyroid hormone amniotic fluid concentration in fetuses with neural tube defects. The present study was performed at the Department of Obstetrics and Gynaecology and the Department of Clinical Biochemistry of Dicle University between September 2011 and June 2013. The study group included 37 amniotic fluid samples from pregnant women (16-20 weeks of gestation) with fetuses affected by neural tube defects. The control group consisted of 36 pregnant women who were diagnosed with a high-risk pregnancy according to first or second trimester aneuploidy screening and were later confirmed on amniocentesis to have genetically normal fetuses. Amniotic fluid paraoxonase-1 activity and total oxidant status were significantly higher (P = 0.023, P = 0.029, respectively) whereas free T4 was significantly lower (P = 0.022) in fetuses with neural tube defects compared with control subjects. In fetuses with neural tube defects, amniotic fluid paraoxonase-1 activity correlated positively with total oxidant status (r = 0.424**, P = 0.010), and amniotic fluid total antioxidant capacity correlated positively with free t4 (r = 0.381*, P = 0.022). This is the first study in the literature to show an association between paraoxonase-1 activity and thyroid hormone concentration and neural tube defects. © 2016 Japan Society of Obstetrics and Gynecology.

  9. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation.

    Science.gov (United States)

    Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B

    2016-01-01

    Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing are now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and that uses digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations.
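
    One of the encodings used for the dataset above is rate-based Poisson spike generation. The sketch below shows that conversion for a toy image, with each pixel intensity mapped to the rate of an independent Poisson spike train; the image size, maximum rate, and time step are placeholders rather than the benchmark's actual parameters.

    import numpy as np

    rng = np.random.default_rng(9)
    image = rng.integers(0, 256, size=(4, 4))          # toy image, 8-bit intensities
    max_rate, duration, dt = 100.0, 0.2, 1e-3          # Hz, s, s

    rates = image.flatten() / 255.0 * max_rate         # one rate per pixel
    n_steps = int(duration / dt)
    # spikes[t, i] is True if input neuron i fires in time bin t
    spikes = rng.random((n_steps, rates.size)) < rates * dt

    print("total spikes per neuron:", spikes.sum(axis=0))
    print("expected counts        :", np.round(rates * duration, 1))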

  10. Tracing 'driver' versus 'modulator' information flow throughout large-scale, task-related neural circuitry.

    Science.gov (United States)

    Hermer-Vazquez, Linda

    2008-04-01

    PRIMARY OBJECTIVE: To determine the relative uses of neural action potential ('spike') data versus local field potentials (LFPs) for modeling information flow through complex brain networks. HYPOTHESIS: The common use of LFP data, which are continuous and therefore more mathematically suited to spectral information-flow modeling techniques such as Granger causality analysis, can lead to spurious inferences about whether a given brain area 'drives' the spiking in a downstream area. EXPERIMENT: We recorded spikes and LFPs from the forelimb motor cortex (M1) and the magnocellular red nucleus (mRN), which receives axon collaterals from M1 projection cells onto its distal dendrites, but not onto its perisomatic regions, as rats performed a skilled reaching task. RESULTS AND IMPLICATIONS: As predicted, Granger causality analysis on the LFPs, which are mainly composed of vector-summed dendritic currents, produced results that, if conventionally interpreted, would suggest that the M1 cells drove spike firing in the mRN, whereas analyses of spiking in the two recorded regions revealed no significant correlations. These results suggest that mathematical models of information flow should treat the sampled dendritic activity as more likely to reflect intrinsic dendritic and input-related processing in neural networks, whereas spikes are more likely to provide information about the output of neural network processing.
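
    For context on the kind of analysis this record critiques, the sketch below shows how pairwise Granger causality is commonly computed on two continuous (LFP-like) signals with the generic statsmodels test. It is not the authors' pipeline; the surrogate signals, lag structure, and coefficients are invented for illustration.

      # Pairwise Granger causality on two surrogate continuous signals.
      # Column convention in statsmodels: test whether the series in the second
      # column Granger-causes the series in the first column.
      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(1)
      n = 2000
      m1_lfp = rng.standard_normal(n)        # surrogate "M1" signal
      mrn_lfp = np.zeros(n)                  # surrogate "mRN" signal
      for t in range(2, n):                  # mRN follows M1 with a two-sample lag
          mrn_lfp[t] = 0.6 * m1_lfp[t - 2] + 0.2 * rng.standard_normal()

      data = np.column_stack([mrn_lfp, m1_lfp])   # does "M1" Granger-cause "mRN"?
      results = grangercausalitytests(data, maxlag=4)
      for lag, (tests, _) in results.items():
          f_stat, p_value, _, _ = tests["ssr_ftest"]
          print(f"lag {lag}: F = {f_stat:.1f}, p = {p_value:.4g}")

    The record's point is that a significant result on such continuous signals need not imply that spiking in one area drives spiking in the other, since LFPs largely reflect summed dendritic currents rather than output spikes.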

  11. Rapid adaptive remote focusing microscope for sensing of volumetric neural activity.

    Science.gov (United States)

    Žurauskas, Mantas; Barnstedt, Oliver; Frade-Rodriguez, Maria; Waddell, Scott; Booth, Martin J

    2017-10-01

    The ability to record neural activity in the brain of a living organism at cellular resolution is of great importance for defining the neural circuit mechanisms that direct behavior. Here we present an adaptive two-photon microscope optimized for extraction of neural signals over volumes in intact Drosophila brains, even in the presence of specimen motion. High speed volume imaging was made possible through reduction of spatial resolution while maintaining the light collection efficiency of a high resolution, high numerical aperture microscope. This enabled simultaneous recording of odor-evoked calcium transients in a defined volume of mushroom body Kenyon cell bodies in a live fruit fly.

  12. Spatiotemporal imaging of glutamate-induced biophotonic activities and transmission in neural circuits.

    Directory of Open Access Journals (Sweden)

    Rendong Tang

    Full Text Available The processing of neural information in neural circuits plays key roles in neural functions. Biophotons, also called ultra-weak photon emissions (UPE), may play potential roles in neural signal transmission, contributing to the understanding of the higher functions of the nervous system such as vision, learning and memory, cognition and consciousness. However, the experimental analysis of biophotonic activities (emissions) in neural circuits has been hampered due to technical limitations. Here, by developing and optimizing an in vitro biophoton imaging method, we characterize the spatiotemporal biophotonic activities and transmission in mouse brain slices. We show that the long-lasting application of glutamate to coronal brain slices produces a gradual and significant increase of biophotonic activities and achieves the maximal effect within approximately 90 min, which then lasts for a relatively long time (>200 min). The initiation and/or maintenance of biophotonic activities by glutamate can be significantly blocked by oxygen and glucose deprivation, together with the application of a cytochrome c oxidase inhibitor (sodium azide), but only partly by an action potential inhibitor (TTX), an anesthetic (procaine), or the removal of intracellular and extracellular Ca(2+). We also show that the detected biophotonic activities in the corpus callosum and thalamus in sagittal brain slices mostly originate from axons or axonal terminals of cortical projection neurons, and that the hyperphosphorylation of microtubule-associated protein tau leads to a significant decrease of biophotonic activities in these two areas. Furthermore, the application of glutamate in the hippocampal dentate gyrus results in increased biophotonic activities in its intrahippocampal projection areas. These results suggest that the glutamate-induced biophotonic activities reflect biophotonic transmission along the axons and in neural circuits, which may be a new mechanism for the processing of neural information.

  13. A Neuron Model for FPGA Spiking Neuronal Network Implementation

    Directory of Open Access Journals (Sweden)

    BONTEANU, G.

    2011-11-01

    Full Text Available We propose a neuron model, able to reproduce the basic elements of neuronal dynamics, optimized for the digital implementation of Spiking Neural Networks. Its architecture is structured in two major blocks, a datapath and a control unit. The datapath consists of a membrane potential circuit, which emulates the neuronal dynamics at the soma level, and a synaptic circuit used to update the synaptic weight according to the spike-timing-dependent plasticity (STDP) mechanism. The proposed model is implemented on a Cyclone II Altera FPGA device. Our results indicate that the neuron model can be used to build Spiking Neural Networks of up to 1K neurons on reconfigurable logic support, in order to explore various network topologies.
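
    A plain-software analogue of the two datapath blocks described above (a membrane potential circuit and an STDP synaptic circuit) is sketched below. This is an illustrative Python leaky integrate-and-fire neuron with pair-based STDP and invented parameter values, not the paper's fixed-point FPGA datapath.

      # Software sketch of a leaky integrate-and-fire membrane update plus a
      # pair-based STDP weight update; all constants are illustrative.
      import numpy as np

      class LIFNeuronSTDP:
          def __init__(self, tau_m=20.0, v_thresh=1.0, v_reset=0.0,
                       a_plus=0.01, a_minus=0.012, tau_stdp=20.0):
              self.tau_m, self.v_thresh, self.v_reset = tau_m, v_thresh, v_reset
              self.a_plus, self.a_minus, self.tau_stdp = a_plus, a_minus, tau_stdp
              self.v = v_reset
              self.last_post_spike = -np.inf

          def step(self, t, input_current, dt=1.0):
              """Advance the membrane potential one time step; return True on a spike."""
              self.v += dt * (-self.v / self.tau_m + input_current)
              if self.v >= self.v_thresh:
                  self.v = self.v_reset
                  self.last_post_spike = t
                  return True
              return False

          def stdp_update(self, w, t_pre):
              """Pair-based STDP: potentiate if pre precedes post, otherwise depress."""
              dt_spikes = self.last_post_spike - t_pre
              if dt_spikes > 0:      # pre before post -> LTP
                  w += self.a_plus * np.exp(-dt_spikes / self.tau_stdp)
              elif dt_spikes < 0:    # post before pre -> LTD
                  w -= self.a_minus * np.exp(dt_spikes / self.tau_stdp)
              return float(np.clip(w, 0.0, 1.0))

      # Example: one presynaptic spike at t = 5 on top of a constant drive
      neuron, w = LIFNeuronSTDP(), 0.5
      for t in range(50):
          if neuron.step(t, input_current=0.08 + w * (t == 5)):
              w = neuron.stdp_update(w, t_pre=5)

    In the FPGA design these updates would be carried out by the datapath under the control unit's sequencing; the sketch only shows the arithmetic they implement.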

  14. Active Vibration Control of the Smart Plate Using Artificial Neural Network Controller

    Directory of Open Access Journals (Sweden)

    Mohit

    2015-01-01

    Full Text Available The active vibration control (AVC) of a rectangular plate with a single-input, single-output approach is investigated using an artificial neural network. A cantilever plate of finite length, breadth, and thickness, having piezoelectric patches as sensors/actuators fixed to the upper and lower surfaces of the metal plate, is considered for examination. The finite element model of the cantilever plate is utilized to formulate the whole strategy. CompactRIO hardware and MATLAB simulation software are used to obtain the results. The cantilever plate is subjected to an impulse input and a uniform white noise disturbance. The neural network is trained offline and tuned with an LQR controller. Various training algorithms for tuning the neural network are evaluated, and the most efficient algorithm is finally chosen to tune the neural network controller designed for active vibration control of the smart plate.
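
    The general workflow described above, training a neural network offline against an LQR controller, can be sketched for a toy second-order plant as follows. The plant matrices, cost weights, and network size are illustrative assumptions; this does not reproduce the paper's piezoelectric smart-plate finite element model or its CompactRIO/MATLAB toolchain.

      # Train a small neural network to reproduce an LQR state-feedback law
      # for a toy mass-spring-damper plant (illustrative values throughout).
      import numpy as np
      from scipy.linalg import solve_continuous_are
      from sklearn.neural_network import MLPRegressor

      # Plant: x = [position, velocity], dx/dt = A x + B u
      A = np.array([[0.0, 1.0], [-4.0, -0.5]])
      B = np.array([[0.0], [1.0]])
      Q = np.diag([10.0, 1.0])     # state cost
      R = np.array([[0.1]])        # control cost

      # LQR gain from the continuous-time algebraic Riccati equation: u = -K x
      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)

      # Offline training data: random states paired with the LQR control they demand
      rng = np.random.default_rng(0)
      X = rng.uniform(-1.0, 1.0, size=(5000, 2))
      U = -(X @ K.T).ravel()

      net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
      net.fit(X, U)
      print("max |ANN - LQR| over training states:", float(np.abs(net.predict(X) - U).max()))

    Once trained, such a network can stand in for the LQR law in the control loop; comparing how well different training algorithms reproduce it is the kind of evaluation the record describes.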

  15. Analysis of electromyographic activity in spastic biceps brachii muscle following neural mobilization.

    Science.gov (United States)

    Castilho, Jéssica; Ferreira, Luiz Alfredo Braun; Pereira, Wagner Menna; Neto, Hugo Pasini; Morelli, José Geraldo da Silva; Brandalize, Danielle; Kerppers, Ivo Ilvan; Oliveira, Claudia Santos

    2012-07-01

    Hypertonia is prevalent in anti-gravity muscles, such as the biceps brachii. Neural mobilization is one of the techniques currently used to reduce spasticity. The aim of the present study was to assess electromyographic (EMG) activity in spastic biceps brachii muscles before and after neural mobilization of the upper limb contralateral to the hemiplegia. Repeated pre-test and post-test EMG measurements were performed on six stroke victims with grade 1 or 2 spasticity (Modified Ashworth Scale). The Upper Limb Neurodynamic Test (ULNT1) was the mobilization technique employed. After neural mobilization contralateral to the lesion, electromyographic activity in the biceps brachii decreased by 17% and