WorldWideScience

Sample records for network embedding spike

  1. Modular Neural Tile Architecture for Compact Embedded Hardware Spiking Neural Network

    NARCIS (Netherlands)

    Pande, Sandeep; Morgan, Fearghal; Cawley, Seamus; Bruintjes, Tom; Smit, Gerardus Johannes Maria; McGinley, Brian; Carrillo, Snaider; Harkin, Jim; McDaid, Liam

    2013-01-01

    Biologically-inspired packet switched network on chip (NoC) based hardware spiking neural network (SNN) architectures have been proposed as an embedded computing platform for classification, estimation and control applications. Storage of large synaptic connectivity (SNN topology) information in

  2. The Ripple Pond: Enabling Spiking Networks to See

    Directory of Open Access Journals (Sweden)

    Saeed Afshar

    2013-11-01

    We present the biologically inspired Ripple Pond Network (RPN), a simply connected spiking neural network which performs a transformation converting two dimensional images to one dimensional temporal patterns suitable for recognition by temporal coding learning and memory networks. The RPN has been developed as a hardware solution linking previously implemented neuromorphic vision and memory structures such as frameless vision sensors and neuromorphic temporal coding spiking neural networks. Working together such systems are potentially capable of delivering end-to-end high-speed, low-power and low-resolution recognition for mobile and autonomous applications where slow, highly sophisticated and power hungry signal processing solutions are ineffective. Key aspects in the proposed approach include utilising the spatial properties of physically embedded neural networks and propagating waves of activity therein for information processing, using dimensional collapse of imagery information into amenable temporal patterns and the use of asynchronous frames for information binding.

  3. The ripple pond: enabling spiking networks to see.

    Science.gov (United States)

    Afshar, Saeed; Cohen, Gregory K; Wang, Runchun M; Van Schaik, André; Tapson, Jonathan; Lehmann, Torsten; Hamilton, Tara J

    2013-01-01

    We present the biologically inspired Ripple Pond Network (RPN), a simply connected spiking neural network which performs a transformation converting two dimensional images to one dimensional temporal patterns (TP) suitable for recognition by temporal coding learning and memory networks. The RPN has been developed as a hardware solution linking previously implemented neuromorphic vision and memory structures such as frameless vision sensors and neuromorphic temporal coding spiking neural networks. Working together such systems are potentially capable of delivering end-to-end high-speed, low-power and low-resolution recognition for mobile and autonomous applications where slow, highly sophisticated and power hungry signal processing solutions are ineffective. Key aspects in the proposed approach include utilizing the spatial properties of physically embedded neural networks and propagating waves of activity therein for information processing, using dimensional collapse of imagery information into amenable TP and the use of asynchronous frames for information binding.

  4. Deep Spiking Networks

    NARCIS (Netherlands)

    O'Connor, P.; Welling, M.

    2016-01-01

    We introduce an algorithm to do backpropagation on a spiking network. Our network is "spiking" in the sense that our neurons accumulate their activation into a potential over time, and only send out a signal (a "spike") when this potential crosses a threshold and the neuron is reset. Neurons only

  5. Spiking neural network for recognizing spatiotemporal sequences of spikes

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.

    2004-01-01

    Sensory neurons in many brain areas spike with precise timing to stimuli with temporal structures, and encode temporally complex stimuli into spatiotemporal spikes. How the downstream neurons read out such neural code is an important unsolved problem. In this paper, we describe a decoding scheme using a spiking recurrent neural network. The network consists of excitatory neurons that form a synfire chain, and two globally inhibitory interneurons of different types that provide delayed feedforward and fast feedback inhibition, respectively. The network signals recognition of a specific spatiotemporal sequence when the last excitatory neuron down the synfire chain spikes, which happens if and only if that sequence was present in the input spike stream. The recognition scheme is invariant to variations in the intervals between input spikes within some range. The computation of the network can be mapped into that of a finite state machine. Our network provides a simple way to decode spatiotemporal spikes with diverse types of neurons

  6. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Zenke, Friedemann; Ganguli, Surya

    2018-04-13

    A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
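
    As a rough illustration of the kind of surrogate-gradient, voltage-based three-factor rule described above, the sketch below trains the input weights of a single discrete-time leaky integrate-and-fire neuron toward a target spike train. The time constants, the fast-sigmoid surrogate, the soft reset and the error filtering are illustrative assumptions of this sketch; the actual SuperSpike derivation, kernels and normalization differ, and multilayer credit assignment is not shown.

      import numpy as np

      rng = np.random.default_rng(1)
      dt, steps, n_in = 1e-3, 500, 100
      tau_mem, tau_syn, theta, lr = 10e-3, 5e-3, 1.0, 2e-3

      inputs = (rng.random((steps, n_in)) < 6.0 * dt).astype(float)  # Poisson input spikes
      target = np.zeros(steps)
      target[::50] = 1.0                                             # desired output spikes

      w = rng.normal(0.0, 0.5, n_in)

      def surrogate(u):
          # fast-sigmoid style surrogate derivative of the hard spike nonlinearity
          return 1.0 / (1.0 + np.abs(u - theta)) ** 2

      for epoch in range(100):
          v, i_syn = 0.0, np.zeros(n_in)
          elig, err_f, grad = np.zeros(n_in), 0.0, np.zeros(n_in)
          n_spikes = 0
          for t in range(steps):
              i_syn += dt / tau_syn * (-i_syn) + inputs[t]   # per-synapse presynaptic trace
              v += dt / tau_mem * (-v + w @ i_syn)           # membrane potential
              spike = float(v >= theta)
              n_spikes += spike
              v -= spike * theta                             # soft reset after a spike
              # three factors: filtered (surrogate x presynaptic trace) times filtered error
              elig += dt / tau_syn * (-elig) + surrogate(v) * i_syn * dt
              err_f += dt / tau_mem * (-err_f) + (target[t] - spike)
              grad += err_f * elig * dt
          w += lr * grad
      print("output spikes in last epoch:", int(n_spikes), "target spikes:", int(target.sum()))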

  7. A Multiple-Plasticity Spiking Neural Network Embedded in a Closed-Loop Control System to Model Cerebellar Pathologies.

    Science.gov (United States)

    Geminiani, Alice; Casellato, Claudia; Antonietti, Alberto; D'Angelo, Egidio; Pedrocchi, Alessandra

    2018-06-01

    The cerebellum plays a crucial role in sensorimotor control and cerebellar disorders compromise adaptation and learning of motor responses. However, the link between alterations at network level and cerebellar dysfunction is still unclear. In principle, this understanding would benefit from the development of an artificial system embedding the salient neuronal and plastic properties of the cerebellum and operating in closed-loop. To this aim, we have exploited a realistic spiking computational model of the cerebellum to analyze the network correlates of cerebellar impairment. The model was modified to reproduce three different damages of the cerebellar cortex: (i) a loss of the main output neurons (Purkinje Cells), (ii) a lesion to the main cerebellar afferents (Mossy Fibers), and (iii) a damage to a major mechanism of synaptic plasticity (Long Term Depression). The modified network models were challenged with an Eye-Blink Classical Conditioning test, a standard learning paradigm used to evaluate cerebellar impairment, in which the outcome was compared to reference results obtained in human or animal experiments. In all cases, the model reproduced the partial and delayed conditioning typical of the pathologies, indicating that an intact cerebellar cortex functionality is required to accelerate learning by transferring acquired information to the cerebellar nuclei. Interestingly, depending on the type of lesion, the redistribution of synaptic plasticity and response timing varied greatly, generating specific adaptation patterns. Thus, the present work not only extends the generalization capabilities of the cerebellar spiking model to pathological cases, but also predicts how changes at the neuronal level are distributed across the network, making it usable to infer cerebellar circuit alterations occurring in cerebellar pathologies.

  8. A real-time spike sorting method based on the embedded GPU.

    Science.gov (United States)

    Zelan Yang; Kedi Xu; Xiang Tian; Shaomin Zhang; Xiaoxiang Zheng

    2017-07-01

    Microelectrode arrays with hundreds of channels have been widely used to acquire neuron population signals in neuroscience studies. Online spike sorting is becoming one of the most important challenges for high-throughput neural signal acquisition systems. Graphics processing units (GPU), with their high parallel computing capability, might provide an alternative solution for meeting the real-time computational demands of spike sorting. This study reports a method of real-time spike sorting through the compute unified device architecture (CUDA), implemented on an embedded GPU (NVIDIA JETSON Tegra K1, TK1). The sorting approach is based on principal component analysis (PCA) and K-means. By analyzing the parallelism of each process, the method was further optimized within the thread and memory model of the GPU. Our results showed that the GPU-based classifier on the TK1 is 37.92 times faster than the MATLAB-based classifier on a PC while achieving the same accuracy. The high-performance computing features of the embedded GPU demonstrated in our study suggest that embedded GPUs provide a promising platform for real-time neural signal processing.
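
    For readers unfamiliar with the PCA-plus-K-means pipeline mentioned above, the following plain NumPy sketch sorts synthetic spike waveforms on the CPU. It illustrates only the algorithmic steps; the study's contribution is the CUDA parallelization of these steps on the Jetson TK1, which is not reproduced here, and the toy waveform templates and cluster count are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy data: 300 detected spike waveforms (48 samples each) from 3 hypothetical units.
      t_axis = np.arange(48)
      templates = np.stack([a * np.exp(-((t_axis - c) / 4.0) ** 2)
                            for c, a in [(15, 1.0), (22, -0.8), (30, 0.6)]])
      true_unit = rng.integers(3, size=300)
      waveforms = templates[true_unit] + 0.05 * rng.standard_normal((300, 48))

      # Step 1: PCA via SVD, keeping the first 3 principal components as features.
      X = waveforms - waveforms.mean(axis=0)
      _, _, Vt = np.linalg.svd(X, full_matrices=False)
      features = X @ Vt[:3].T

      # Step 2: K-means (a few Lloyd iterations) on the PCA features.
      k = 3
      centers = features[rng.choice(len(features), k, replace=False)]
      for _ in range(20):
          dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
          assign = dists.argmin(axis=1)
          centers = np.stack([features[assign == j].mean(axis=0) if np.any(assign == j)
                              else centers[j] for j in range(k)])

      print("cluster sizes:", np.bincount(assign, minlength=k))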

  9. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    Science.gov (United States)

    Schwemmer, Michael A; Fairhall, Adrienne L; Denève, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks

  10. Supervised Learning Based on Temporal Coding in Spiking Neural Networks.

    Science.gov (United States)

    Mostafa, Hesham

    2017-08-01

    Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.
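
    The piecewise-linear input-output relation described above can be made concrete for one common modelling choice: a non-leaky integrate-and-fire neuron with exponentially decaying synaptic current kernels and unit threshold (an assumption of this sketch; the paper's exact model may differ in details). In the transformed variable z = exp(t), the first output spike time then follows from a linear relation over the causal set of inputs.

      import numpy as np

      def first_spike_time(in_times, weights, theta=1.0):
          """First output spike time of a non-leaky integrate-and-fire neuron whose
          synaptic currents decay as exp(-(t - t_j)), with threshold `theta`.
          Setting V(t_out) = sum_j w_j * (1 - exp(t_j - t_out)) = theta and writing
          z = exp(t) gives the piecewise-linear relation
              z_out = sum_C(w_j * z_j) / (sum_C(w_j) - theta),
          where C is the causal set of inputs arriving before the output spike."""
          order = np.argsort(in_times)
          t = np.asarray(in_times, dtype=float)[order]
          w = np.asarray(weights, dtype=float)[order]
          z = np.exp(t)
          for k in range(1, len(t) + 1):
              wsum = w[:k].sum()
              if wsum <= theta:
                  continue                  # these inputs alone cannot reach threshold
              z_out = (w[:k] * z[:k]).sum() / (wsum - theta)
              t_out = np.log(z_out)
              # accept only if the candidate spike falls inside this inter-input interval
              if t_out >= t[k - 1] and (k == len(t) or t_out <= t[k]):
                  return t_out
          return np.inf                      # the neuron never reaches threshold

      print(first_spike_time(in_times=[0.1, 0.3, 0.5], weights=[0.8, 0.9, -0.2]))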

  11. Solving constraint satisfaction problems with networks of spiking neurons

    Directory of Open Access Journals (Sweden)

    Zeno Jonke

    2016-03-01

    Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turned out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes (rather than rates of spikes). We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a quite transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes (rather than just spike rates) plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.

  12. Generalized activity equations for spiking neural network dynamics

    Directory of Open Access Journals (Sweden)

    Michael A Buice

    2013-11-01

    Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales: the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.

  13. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons.

    Science.gov (United States)

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
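
    The abstract above describes shaping an energy function with stochastic winner-take-all motifs so that noisy spiking activity performs a stochastic search over solutions. The toy below captures only the energy-based sampling idea on a small graph-coloring constraint problem, with one winner-take-all group per variable; the spiking implementation, the network motifs and the parameters of the actual method are not reproduced, and the graph, inverse temperature and update schedule are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      # toy constraint problem: color 5 nodes so that adjacent nodes differ
      edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
      n_nodes, n_colors = 5, 3

      # one winner-take-all group of `n_colors` stochastic units per variable;
      # the active unit in a group encodes that variable's current value
      state = rng.integers(n_colors, size=n_nodes)
      beta = 2.0              # inverse temperature: controls how much noise is used

      def local_energy(node, color, state):
          # number of violated constraints touching `node` if it took `color`
          return sum(1 for a, b in edges
                     if (a == node and state[b] == color) or (b == node and state[a] == color))

      for step in range(2000):
          node = rng.integers(n_nodes)                      # a random group updates ("spikes")
          e = np.array([local_energy(node, c, state) for c in range(n_colors)])
          p = np.exp(-beta * e)
          p /= p.sum()                                      # softmax over the group's units
          state[node] = rng.choice(n_colors, p=p)           # the winning unit fires
          if all(state[a] != state[b] for a, b in edges):
              print(f"valid coloring found at step {step}: {state}")
              break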

  14. Causal Inference and Explaining Away in a Spiking Network

    Science.gov (United States)

    Moreno-Bote, Rubén; Drugowitsch, Jan

    2015-01-01

    While the brain uses spiking neurons for communication, theoretical research on brain computations has mostly focused on non-spiking networks. The nature of spike-based algorithms that achieve complex computations, such as object probabilistic inference, is largely unknown. Here we demonstrate that a family of high-dimensional quadratic optimization problems with non-negativity constraints can be solved exactly and efficiently by a network of spiking neurons. The network naturally imposes the non-negativity of causal contributions that is fundamental to causal inference, and uses simple operations, such as linear synapses with realistic time constants, and neural spike generation and reset non-linearities. The network infers the set of most likely causes from an observation using explaining away, which is dynamically implemented by spike-based, tuned inhibition. The algorithm performs remarkably well even when the network intrinsically generates variable spike trains, the timing of spikes is scrambled by external sources of noise, or the network is mistuned. This type of network might underlie tasks such as odor identification and classification. PMID:26621426

  15. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work we adapt the core concepts of the recently proposed Signature Neural Network paradigm (i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data) to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  16. Stochastic Variational Learning in Recurrent Spiking Networks

    Directory of Open Access Journals (Sweden)

    Danilo Jimenez Rezende

    2014-04-01

    The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step towards understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.

  17. Stochastic variational learning in recurrent spiking networks.

    Science.gov (United States)

    Jimenez Rezende, Danilo; Gerstner, Wulfram

    2014-01-01

    The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step toward understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.

  18. Financial time series prediction using spiking neural networks.

    Science.gov (United States)

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, to financial time series prediction is presented. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks (a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.

  19. Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity

    Directory of Open Access Journals (Sweden)

    Benjamin Dummer

    2014-09-01

    A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells is in most cases far from being Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study a self-consistent statistics of input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme, in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedure converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, J. Comp. Neurosci. 2000) and show that in the asynchronous regime close to a state of balanced synaptic input from the network, our iterative schemes provide excellent approximations to the autocorrelation of spike trains in the recurrent network.
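
    A minimal version of the Gaussian variant of the iteration, approximation (ii), can be sketched as follows: simulate a leaky integrate-and-fire neuron, measure its spike-train power spectrum, build surrogate Gaussian noise with that spectrum, and feed it back as input for the next generation. All parameter values, the Euler discretization and the normalization of the surrogate noise are assumptions for illustration only; they are not the settings used in the study.

      import numpy as np

      rng = np.random.default_rng(2)
      dt, T = 5e-4, 10.0
      n = int(T / dt)
      tau_m, v_th, v_reset = 20e-3, 1.0, 0.0
      mu, sigma = 0.8, 0.7                      # sub-threshold mean drive, noise-driven firing

      def simulate_lif(noise):
          """Euler simulation of a leaky integrate-and-fire neuron; returns a delta-like spike train."""
          v, spikes = 0.0, np.zeros(n)
          for i in range(n):
              v += dt / tau_m * (mu - v) + sigma * np.sqrt(dt / tau_m) * noise[i]
              if v >= v_th:
                  v = v_reset
                  spikes[i] = 1.0 / dt
          return spikes

      def power_spectrum(x):
          xf = np.fft.rfft((x - x.mean()) * dt)
          return np.abs(xf) ** 2 / T

      def gaussian_surrogate(spec):
          """Unit-variance Gaussian noise whose power spectrum is proportional to `spec`."""
          phases = np.exp(2j * np.pi * rng.random(len(spec)))
          x = np.fft.irfft(np.sqrt(spec) * phases, n=n)
          return x / x.std()

      noise = rng.standard_normal(n)            # generation 0: white-noise input
      for generation in range(5):
          spikes = simulate_lif(noise)
          spec = power_spectrum(spikes)
          rate = spikes.sum() * dt / T
          print(f"generation {generation}: firing rate {rate:.1f} Hz")
          noise = gaussian_surrogate(spec)      # next generation sees the previous output spectrum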

  20. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  1. Decoding spikes in a spiking neuronal network

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, University of Sussex, Brighton BN1 9QH (United Kingdom); Ding, Mingzhou [Department of Mathematics, Florida Atlantic University, Boca Raton, FL 33431 (United States)

    2004-06-04

    We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of the maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method on how to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied and our results show that less than a spike is sufficient to accurately decode dynamical inputs.
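
    To make the maximum likelihood idea concrete without reproducing the paper's calculation for spiking neuron models, here is a toy stand-in in which interspike intervals are assumed to be exponentially distributed with a rate equal to the unknown input; the ML estimator and its Fisher information then take a simple closed form. The exponential assumption and all numbers are illustrative, not the paper's.

      import numpy as np

      rng = np.random.default_rng(3)

      s_true = 20.0                                    # true input (spikes/s)
      isis = rng.exponential(1.0 / s_true, size=200)   # observed interspike intervals

      s_ml = 1.0 / isis.mean()             # maximum likelihood estimate: n / sum(ISI)
      fisher = len(isis) / s_ml ** 2       # Fisher information I(s) = n / s^2 for exponential ISIs
      cramer_rao = 1.0 / fisher            # lower bound on the variance of unbiased estimators

      print(f"ML estimate: {s_ml:.2f} Hz, Cramer-Rao bound on std: {np.sqrt(cramer_rao):.2f} Hz")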

  2. Decoding spikes in a spiking neuronal network

    International Nuclear Information System (INIS)

    Feng Jianfeng; Ding, Mingzhou

    2004-01-01

    We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of the maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method on how to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied and our results show that less than a spike is sufficient to accurately decode dynamical inputs

  3. Different propagation speeds of recalled sequences in plastic spiking neural networks

    Science.gov (United States)

    Huang, Xuhui; Zheng, Zhigang; Hu, Gang; Wu, Si; Rasch, Malte J.

    2015-03-01

    Neural networks can generate spatiotemporal patterns of spike activity. Sequential activity learning and retrieval have been observed in many brain areas, and are crucial, for example, for coding of episodic memory in the hippocampus or for generating temporal patterns during song production in birds. In a recent study, a sequential activity pattern was directly entrained onto the neural activity of the primary visual cortex (V1) of rats and subsequently successfully recalled by a local and transient trigger. It was observed that the speed of activity propagation in coordinates of the retinotopically organized neural tissue was constant during retrieval, regardless of how the speed of the light stimulation sweeping across the visual field during training was varied. It is well known that spike-timing dependent plasticity (STDP) is a potential mechanism for embedding temporal sequences into neural network activity. How training and retrieval speeds relate to each other and how network and learning parameters influence retrieval speeds, however, are not well described. We here theoretically analyze sequential activity learning and retrieval in a recurrent neural network with realistic synaptic short-term dynamics and STDP. Testing multiple STDP rules, we confirm that sequence learning can be achieved by STDP. However, we found that a multiplicative nearest-neighbor (NN) weight update rule generated weight distributions and recall activities that best matched the experiments in V1. Using network simulations and mean-field analysis, we further investigated the learning mechanisms and the influence of network parameters on recall speeds. Our analysis suggests that a multiplicative STDP rule with dominant NN spike interaction might be implemented in V1 since recall speed was almost constant in an NMDA-dominant regime. Interestingly, in an AMPA-dominant regime, neural circuits might exhibit recall speeds that instead follow the change in stimulus speeds. This prediction could be tested in
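
    As a sketch of what a multiplicative nearest-neighbour STDP update can look like in code, the hypothetical function below pairs each spike only with its nearest partner and scales potentiation by (w_max - w) and depression by w. The pairing convention, amplitudes and time constant are generic assumptions and do not reproduce the rule or parameters fitted in the study.

      import numpy as np

      def nn_stdp_update(w, pre_times, post_times, a_plus=0.01, a_minus=0.012,
                         tau=20e-3, w_max=1.0):
          """One multiplicative, nearest-neighbour STDP pass for a single synapse."""
          pre_times, post_times = np.asarray(pre_times), np.asarray(post_times)
          for t_post in post_times:
              earlier = pre_times[pre_times <= t_post]
              if earlier.size:                          # pre before post: potentiation
                  dt = t_post - earlier.max()
                  w += a_plus * (w_max - w) * np.exp(-dt / tau)
          for t_pre in pre_times:
              earlier = post_times[post_times < t_pre]
              if earlier.size:                          # post before pre: depression
                  dt = t_pre - earlier.max()
                  w -= a_minus * w * np.exp(-dt / tau)
          return float(np.clip(w, 0.0, w_max))

      w = 0.5
      w = nn_stdp_update(w, pre_times=[0.010, 0.050], post_times=[0.015, 0.060])
      print(w)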

  4. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated...

  5. Building functional networks of spiking model neurons.

    Science.gov (United States)

    Abbott, L F; DePasquale, Brian; Memmesheimer, Raoul-Martin

    2016-03-01

    Most of the networks used by computer scientists and many of those studied by modelers in neuroscience represent unit activities as continuous variables. Neurons, however, communicate primarily through discontinuous spiking. We review methods for transferring our ability to construct interesting networks that perform relevant tasks from the artificial continuous domain to more realistic spiking network models. These methods raise a number of issues that warrant further theoretical and experimental study.

  6. Decoding spatiotemporal spike sequences via the finite state automata dynamics of spiking neural networks

    International Nuclear Information System (INIS)

    Jin, Dezhe Z

    2008-01-01

    Temporally complex stimuli are encoded into spatiotemporal spike sequences of neurons in many sensory areas. Here, we describe how downstream neurons with dendritic bistable plateau potentials can be connected to decode such spike sequences. Driven by feedforward inputs from the sensory neurons and controlled by feedforward inhibition and lateral excitation, the neurons transit between UP and DOWN states of the membrane potentials. The neurons spike only in the UP states. A decoding neuron spikes at the end of an input to signal the recognition of specific spike sequences. The transition dynamics is equivalent to that of a finite state automaton. A connection rule for the networks guarantees that any finite state automaton can be mapped into the transition dynamics, demonstrating the equivalence in computational power between the networks and finite state automata. The decoding mechanism is capable of recognizing an arbitrary number of spatiotemporal spike sequences, and is insensitive to the variations of the spike timings in the sequences
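
    The mapping from sequence recognition to finite state automaton transitions can be illustrated with ordinary code: symbols stand for the identities of the spiking sensory neurons, the state advances on the expected symbol, and the decoding neuron "spikes" when the final state is reached. The simple restart rule used here is an illustrative shortcut, not the UP/DOWN plateau-potential mechanism of the network itself.

      def make_sequence_recognizer(sequence):
          """Recognizer for one spatiotemporal spike sequence, expressed as a finite
          state automaton over the identities of the neurons that spike."""
          def step(state, symbol):
              if symbol == sequence[state]:
                  state += 1
                  if state == len(sequence):
                      return 0, True        # the decoding neuron spikes: sequence recognized
                  return state, False
              # mismatch: restart, keeping the symbol if it begins the sequence
              # (a simplified fallback, not a full prefix-function construction)
              return (1, False) if symbol == sequence[0] else (0, False)
          return step

      step = make_sequence_recognizer(sequence=("A", "C", "B"))
      state = 0
      for i, symbol in enumerate(["A", "A", "C", "B", "C", "A", "C", "B"]):
          state, recognized = step(state, symbol)
          if recognized:
              print(f"target sequence recognized at position {i}")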

  7. Self-control with spiking and non-spiking neural networks playing games.

    Science.gov (United States)

    Christodoulou, Chris; Banfield, Gaye; Cleanthous, Aristodemos

    2010-01-01

    Self-control can be defined as choosing a large delayed reward over a small immediate reward, while precommitment is the making of a choice with the specific aim of denying oneself future choices. Humans recognise that they have self-control problems and attempt to overcome them by applying precommitment. Problems in exercising self-control suggest a conflict between cognition and motivation, which has been linked to competition between higher and lower brain functions (representing the frontal lobes and the limbic system respectively). This premise of an internal process conflict led to a behavioural model being proposed, based on which we implemented a computational model for studying and explaining self-control through precommitment behaviour. Our model consists of two neural networks, initially non-spiking and then spiking ones, representing the higher and lower brain systems viewed as cooperating for the benefit of the organism. The non-spiking neural networks are of a simple feed-forward multilayer type with reinforcement learning: one with a selective bootstrap weight update rule, which is seen as myopic, representing the lower brain, and the other with the temporal difference weight update rule, which is seen as far-sighted, representing the higher brain. The spiking neural networks are implemented with leaky integrate-and-fire neurons with learning based on stochastic synaptic transmission. The differentiating element between the two brain centres in this implementation is based on the memory of past actions determined by an eligibility trace time constant. As the structure of the self-control problem can be likened to the Iterated Prisoner's Dilemma (IPD) game, in that cooperation is to defection what self-control is to impulsiveness or what compromising is to insisting, we implemented the neural networks as two players, learning simultaneously but independently, competing in the IPD game. With a technique resembling the precommitment effect, whereby the

  8. A Markovian event-based framework for stochastic spiking neural networks.

    Science.gov (United States)

    Touboul, Jonathan D; Faugeras, Olivier D

    2011-11-01

    In spiking neural networks, the information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input they receive and on the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce from a spike train the next spike time, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e. one that is based on the computation of the spike times. We show that the firing times of the neurons in the networks constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived in such classical cases of neural networks as the linear integrate-and-fire neuron models with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.

  9. Spiking Neural Networks Based on OxRAM Synapses for Real-Time Unsupervised Spike Sorting.

    Science.gov (United States)

    Werner, Thilo; Vianello, Elisa; Bichler, Olivier; Garbin, Daniele; Cattaert, Daniel; Yvert, Blaise; De Salvo, Barbara; Perniola, Luca

    2016-01-01

    In this paper, we present an alternative approach to perform spike sorting of complex brain signals based on spiking neural networks (SNN). The proposed architecture is suitable for hardware implementation by using resistive random access memory (RRAM) technology for the implementation of synapses whose low latency […] spike sorting. This offers promising advantages over conventional spike sorting techniques for brain-computer interfaces (BCI) and neural prosthesis applications. Moreover, the ultra-low power consumption of the RRAM synapses of the spiking neural network (nW range) may enable the design of autonomous implantable devices for rehabilitation purposes. We demonstrate an original methodology to use Oxide-based RRAM (OxRAM) as easy-to-program and low-energy […] Spike Timing Dependent Plasticity. Real spiking data have been recorded both intra- and extracellularly from an in-vitro preparation of the Crayfish sensory-motor system and used for validation of the proposed OxRAM-based SNN. This artificial SNN is able to identify, learn, recognize and distinguish between different spike shapes in the input signal with a recognition rate of about 90% without any supervision.

  10. Introduction to spiking neural networks: Information processing, learning and applications.

    Science.gov (United States)

    Ponulak, Filip; Kasinski, Andrzej

    2011-01-01

    The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.

  11. Effects of Spike Anticipation on the Spiking Dynamics of Neural Networks

    Directory of Open Access Journals (Sweden)

    Daniel de Santos-Sierra

    2015-11-01

    Synchronization is one of the central phenomena involved in information processing in living systems. It is known that the nervous system requires the coordinated activity of both local and distant neural populations. Such an interplay allows different information modalities to be merged into a whole processing supporting high-level mental skills such as understanding, memory, abstraction, etc. Though the biological processes underlying synchronization in the brain are not fully understood, a variety of mechanisms supporting different types of synchronization have been reported at both the theoretical and experimental level. One of the more intriguing of these phenomena is anticipating synchronization, which has been recently reported in a pair of unidirectionally coupled artificial neurons under simple conditions (Pyragiene and Pyragas, 2013), where the slave neuron is able to anticipate in time the behaviour of the master one. In this paper we explore the effect of spike anticipation on the information processing performed by a neural network at the functional and structural level. We show that the introduction of intermediary neurons in the network enhances spike anticipation and analyse how these variations in spike anticipation can significantly change the firing regime of the neural network according to its functional and structural properties. In addition we show that the interspike interval (ISI), one of the main features of the neural response associated with information coding, can be closely related to spike anticipation by each spike, and how synaptic plasticity can be modulated through that relationship. This study has been performed through numerical simulation of a coupled system of Hindmarsh-Rose neurons.

  12. Effects of Spike Anticipation on the Spiking Dynamics of Neural Networks.

    Science.gov (United States)

    de Santos-Sierra, Daniel; Sanchez-Jimenez, Abel; Garcia-Vellisca, Mariano A; Navas, Adrian; Villacorta-Atienza, Jose A

    2015-01-01

    Synchronization is one of the central phenomena involved in information processing in living systems. It is known that the nervous system requires the coordinated activity of both local and distant neural populations. Such an interplay allows different information modalities to be merged into a whole processing supporting high-level mental skills such as understanding, memory, abstraction, etc. Though the biological processes underlying synchronization in the brain are not fully understood, a variety of mechanisms supporting different types of synchronization have been reported at both the theoretical and experimental level. One of the more intriguing of these phenomena is anticipating synchronization, which has been recently reported in a pair of unidirectionally coupled artificial neurons under simple conditions (Pyragiene and Pyragas, 2013), where the slave neuron is able to anticipate in time the behavior of the master one. In this paper, we explore the effect of spike anticipation on the information processing performed by a neural network at the functional and structural level. We show that the introduction of intermediary neurons in the network enhances spike anticipation and analyse how these variations in spike anticipation can significantly change the firing regime of the neural network according to its functional and structural properties. In addition we show that the interspike interval (ISI), one of the main features of the neural response associated with the information coding, can be closely related to spike anticipation by each spike, and how synaptic plasticity can be modulated through that relationship. This study has been performed through numerical simulation of a coupled system of Hindmarsh-Rose neurons.

  13. Spike Code Flow in Cultured Neuronal Networks.

    Science.gov (United States)

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei

    2016-01-01

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the code flow of "1101" and "1011," which are typical pseudorandom sequences such as those we often encountered in the literature and in our experiments. They seemed to flow from one electrode to the neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes, to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of code. Furthermore, if the spike trains were shuffled in interval orders or in electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.
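
    The "maximum cross-correlation" used to estimate the direction of code flow can be approximated, for two binned code sequences, by a lag-maximized normalized cross-correlation as in the sketch below. The binning, lag range and synthetic delayed copy are assumptions for illustration; the paper's exact definition over the 8 × 8 electrode array is not reproduced.

      import numpy as np

      def max_norm_xcorr(a, b, max_lag=20):
          """Normalized cross-correlation between two binned code sequences, maximized over lags.
          With this convention, a negative best lag means that `a` leads `b`."""
          a = (a - a.mean()) / (a.std() + 1e-12)
          b = (b - b.mean()) / (b.std() + 1e-12)
          best, best_lag = -np.inf, 0
          for lag in range(-max_lag, max_lag + 1):
              if lag >= 0:
                  x, y = a[lag:], b[:len(b) - lag]
              else:
                  x, y = a[:len(a) + lag], b[-lag:]
              c = float(np.mean(x * y))
              if c > best:
                  best, best_lag = c, lag
          return best, best_lag

      rng = np.random.default_rng(4)
      src = (rng.random(500) < 0.1).astype(float)   # code occurrences on one electrode
      dst = np.roll(src, 5)                         # neighboring electrode sees the code 5 bins later
      print(max_norm_xcorr(src, dst))               # expect a peak near lag -5 (src leads dst)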

  14. Predictive coding of dynamical variables in balanced spiking networks.

    Science.gov (United States)

    Boerlin, Martin; Machens, Christian K; Denève, Sophie

    2013-01-01

    Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitudes more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
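
    The core firing rule stated above, that a neuron fires only if doing so improves the representation, can be demonstrated with a deliberately stripped-down, decoder-only toy: a population with fixed readout weights tracks a one-dimensional signal, and a neuron spikes whenever adding its readout kernel reduces the squared readout error. The recurrent connectivity, voltage formulation, cost terms and dynamical-systems implementation of the actual model are omitted, and all parameters below are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(5)
      dt, T, tau = 1e-3, 2.0, 0.1
      steps = int(T / dt)
      N = 40
      gamma = rng.choice([-0.1, 0.1], size=N)      # fixed readout (decoding) weights

      t_axis = np.arange(steps) * dt
      x = np.sin(2 * np.pi * t_axis)               # 1 Hz target signal to be represented

      r = np.zeros(N)                              # leaky filtered spike trains (readout state)
      x_hat = np.zeros(steps)
      n_spikes = 0
      decay = np.exp(-dt / tau)
      for t in range(steps):
          r *= decay                               # readout decay between spikes
          err = x[t] - gamma @ r
          # fire only if it improves the representation:
          # (err - gamma_i)^2 < err^2  <=>  gamma_i * err > gamma_i**2 / 2
          candidates = np.where(gamma * err > gamma ** 2 / 2)[0]
          if candidates.size:
              i = candidates[np.argmax((gamma * err)[candidates])]   # greedy: most helpful neuron
              r[i] += 1.0
              n_spikes += 1
          x_hat[t] = gamma @ r

      rms = np.sqrt(np.mean((x - x_hat) ** 2))
      print(f"{n_spikes} spikes from {N} neurons, RMS tracking error {rms:.3f}")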

  15. Computing with Spiking Neuron Networks

    NARCIS (Netherlands)

    H. Paugam-Moisy; S.M. Bohte (Sander); G. Rozenberg; T.H.W. Baeck (Thomas); J.N. Kok (Joost)

    2012-01-01

    Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neurosciences, they derive their strength and interest from an accurate modeling of synaptic interactions

  16. Spiking neural networks for handwritten digit recognition-Supervised learning and network optimization.

    Science.gov (United States)

    Kulkarni, Shruti R; Rajendran, Bipin

    2018-07-01

    We demonstrate supervised learning in Spiking Neural Networks (SNNs) for the problem of handwritten digit recognition using the spike-triggered Normalized Approximate Descent (NormAD) algorithm. Our network, which employs neurons operating at sparse biological spike rates below 300 Hz, achieves a classification accuracy of 98.17% on the MNIST test database with four times fewer parameters compared to the state-of-the-art. We present several insights from extensive numerical experiments regarding optimization of learning parameters and network configuration to improve its accuracy. We also describe a number of strategies to optimize the SNN for implementation in memory and energy constrained hardware, including approximations in computing the neuronal dynamics and reduced precision in storing the synaptic weights. Experiments reveal that even with 3-bit synaptic weights, the classification accuracy of the designed SNN does not degrade beyond 1% as compared to the floating-point baseline. Further, the proposed SNN, which is trained based on the precise spike timing information, outperforms an equivalent non-spiking artificial neural network (ANN) trained using backpropagation, especially at low bit precision. Thus, our study shows the potential for realizing efficient neuromorphic systems that use spike based information encoding and learning for real-world applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
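
    The reduced-precision experiment mentioned above (3-bit synaptic weights) can be emulated offline with a simple uniform quantizer like the hypothetical one below; the paper's actual quantization scheme and its interaction with training are not specified here and may differ.

      import numpy as np

      def quantize_weights(w, n_bits=3):
          """Uniformly quantize synaptic weights to 2**n_bits levels spanning their range."""
          levels = 2 ** n_bits
          lo, hi = w.min(), w.max()
          step = (hi - lo) / (levels - 1)
          return lo + np.round((w - lo) / step) * step

      rng = np.random.default_rng(6)
      w = rng.normal(0.0, 0.3, size=(784, 10))       # e.g. input-to-output weights of a small SNN
      w_q = quantize_weights(w, n_bits=3)
      print("max absolute quantization error:", np.abs(w - w_q).max())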

  17. Spike Code Flow in Cultured Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Shinichi Tamura

    2016-01-01

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the code flow of “1101” and “1011,” which are typical pseudorandom sequences such as those we often encountered in the literature and in our experiments. They seemed to flow from one electrode to the neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the “maximum cross-correlations” among neighboring electrodes, to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of code. Furthermore, if the spike trains were shuffled in interval orders or in electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.

  18. The Second Spiking Threshold: Dynamics of Laminar Network Spiking in the Visual Cortex

    DEFF Research Database (Denmark)

    Forsberg, Lars E.; Bonde, Lars H.; Harvey, Michael A.

    2016-01-01

    Most neurons have a threshold separating the silent non-spiking state and the state of producing temporal sequences of spikes. But neurons in vivo also have a second threshold, found recently in granular layer neurons of the primary visual cortex, separating spontaneous ongoing spiking from visually evoked spiking driven by sharp transients. Here we examine whether this second threshold exists outside the granular layer and examine details of transitions between spiking states in ferrets exposed to moving objects. We found the second threshold, separating spiking states evoked by stationary and moving visual stimuli from the spontaneous ongoing spiking state, in all layers and zones of areas 17 and 18, indicating that the second threshold is a property of the network. Spontaneous and evoked spiking thus can easily be distinguished. In addition, the trajectories of spontaneous ongoing states...

  19. Fast convergence of spike sequences to periodic patterns in recurrent networks

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.

    2002-01-01

    The dynamical attractors are thought to underlie many biological functions of recurrent neural networks. Here we show that stable periodic spike sequences with precise timings are the attractors of the spiking dynamics of recurrent neural networks with global inhibition. Almost all spike sequences converge within a finite number of transient spikes to these attractors. The convergence is fast, especially when the global inhibition is strong. These results support the possibility that precise spatiotemporal sequences of spikes are useful for information encoding and processing in biological neural networks

  20. Clustering predicts memory performance in networks of spiking and non-spiking neurons

    Directory of Open Access Journals (Sweden)

    Weiliang Chen

    2011-03-01

    The problem we address in this paper is that of finding effective and parsimonious patterns of connectivity in sparse associative memories. This problem must be addressed in real neuronal systems, so that results in artificial systems could throw light on real systems. We show that there are efficient patterns of connectivity and that these patterns are effective in models with either spiking or non-spiking neurons. This suggests that there may be some underlying general principles governing good connectivity in such networks. We also show that the clustering of the network, measured by Clustering Coefficient, has a strong linear correlation to the performance of associative memory. This result is important since a purely static measure of network connectivity appears to determine an important dynamic property of the network.

  1. Phase Diagram of Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamed Seyed-Allaei

    2015-03-01

    In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments and observations. But here, I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then I try to figure out what makes the common values desirable by nature. Networks which are configured according to the common values have the best dynamic range in response to an impulse, and their dynamic range is more robust with respect to synaptic weights. In fact, evolution has favored networks of best dynamic range. I present a phase diagram that shows the dynamic ranges of different networks with different parameters. This phase diagram gives an insight into the space of parameters: the excitatory to inhibitory ratio, the sparseness of connections and the synaptic weights. It may serve as a guideline to decide about the values of parameters in a simulation of a spiking neural network.

  2. Character recognition from trajectory by recurrent spiking neural networks.

    Science.gov (United States)

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

    Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have been proven to be efficient on time series data. However, how to use the recurrent property to improve the performance of spiking neural networks is still a problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed, in which varying time ranges of input streams are used in different recurrent layers. This is able to improve the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from University of Edinburgh. The results show that our method can achieve a higher average recognition accuracy than existing methods.

  3. Structured chaos shapes spike-response noise entropy in balanced neural networks

    Directory of Open Access Journals (Sweden)

    Guillaume eLajoie

    2014-10-01

    Full Text Available Large networks of sparsely coupled, excitatory and inhibitory cells occur throughout the brain. For many models of these networks, a striking feature is that their dynamics are chaotic and thus sensitive to small perturbations. How does this chaos manifest in the neural code? Specifically, how variable are the spike patterns that such a network produces in response to an input signal? To answer this, we derive a bound for a general measure of variability -- spike-train entropy. This leads to important insights on the variability of multi-cell spike pattern distributions in large recurrent networks of spiking neurons responding to fluctuating inputs. The analysis is based on results from random dynamical systems theory and is complemented by detailed numerical simulations. We find that the spike pattern entropy is an order of magnitude lower than what would be extrapolated from single cells. This holds despite the fact that network coupling becomes vanishingly sparse as network size grows -- a phenomenon that depends on "extensive chaos," as previously discovered for balanced networks without stimulus drive. Moreover, we show how spike pattern entropy is controlled by temporal features of the inputs. Our findings provide insight into how neural networks may encode stimuli in the presence of inherently chaotic dynamics.

  4. Fast computation with spikes in a recurrent neural network

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.; Seung, H. Sebastian

    2002-01-01

    Neural networks with recurrent connections are sometimes regarded as too slow at computation to serve as models of the brain. Here we analytically study a counterexample, a network consisting of N integrate-and-fire neurons with self excitation, all-to-all inhibition, instantaneous synaptic coupling, and constant external driving inputs. When the inhibition and/or excitation are large enough, the network performs a winner-take-all computation for all possible external inputs and initial states of the network. The computation is done very quickly: As soon as the winner spikes once, the computation is completed since no other neurons will spike. For some initial states, the winner is the first neuron to spike, and the computation is done at the first spike of the network. In general, there are M potential winners, corresponding to the top M external inputs. When the external inputs are close in magnitude, M tends to be larger. If M>1, the selection of the actual winner is strongly influenced by the initial states. If a special relation between the excitation and inhibition is satisfied, the network always selects the neuron with the maximum external input as the winner
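
    A toy simulation in the spirit of this model, though not the authors' exact equations: non-leaky integrate-and-fire units receive constant drives, and each spike delivers instantaneous self-excitation to the spiking unit and strong inhibition to all others, so that a single winner remains active. All parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 10
        dt, T = 0.1, 200.0
        threshold = 1.0
        alpha = 0.9   # illustrative self-excitation applied at the spiking neuron's reset
        beta = 2.0    # illustrative all-to-all inhibition applied to every other neuron

        inputs = rng.uniform(0.05, 0.1, N)   # constant external drives
        v = rng.uniform(0.0, 0.5, N)         # random initial membrane states
        spike_counts = np.zeros(N, dtype=int)

        for _ in range(int(T / dt)):
            v += inputs * dt                 # integrate the constant drive (no leak)
            for i in np.flatnonzero(v >= threshold):
                spike_counts[i] += 1
                v[i] = alpha                         # reset plus instantaneous self-excitation
                v[np.arange(N) != i] -= beta         # instantaneous all-to-all inhibition

        print("external inputs:", np.round(inputs, 3))
        print("spike counts   :", spike_counts)      # typically a single active winner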

  5. Communicating embedded systems networks applications

    CERN Document Server

    Krief, Francine

    2013-01-01

    Embedded systems become more and more complex and require having some knowledge in various disciplines such as electronics, data processing, telecommunications and networks. Without detailing all the aspects related to the design of embedded systems, this book, which was written by specialists in electronics, data processing and telecommunications and networks, gives an interesting point of view of communication techniques and problems in embedded systems. This choice is easily justified by the fact that embedded systems are today massively communicating and that telecommunications and network

  6. Error-backpropagation in temporally encoded networks of spiking neurons

    NARCIS (Netherlands)

    S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)

    2000-01-01

    For a network of spiking neurons that encodes information in the timing of individual spike times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm,

  7. Knowledge extraction from evolving spiking neural networks with rank order population coding.

    Science.gov (United States)

    Soltic, Snjezana; Kasabov, Nikola

    2010-12-01

    This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionally small amount of research is centered on the issue of knowledge extraction from spiking neural networks which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that a high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.

  8. Evolving spiking networks with variable resistive memories.

    Science.gov (United States)

    Howard, Gerard; Bull, Larry; de Lacy Costello, Ben; Gale, Ella; Adamatzky, Andrew

    2014-01-01

    Neuromorphic computing is a brainlike information processing paradigm that requires adaptive learning mechanisms. A spiking neuro-evolutionary system is used for this purpose; plastic resistive memories are implemented as synapses in spiking neural networks. The evolutionary design process exploits parameter self-adaptation and allows the topology and synaptic weights to be evolved for each network in an autonomous manner. Variable resistive memories are the focus of this research; each synapse has its own conductance profile which modifies the plastic behaviour of the device and may be altered during evolution. These variable resistive networks are evaluated on a noisy robotic dynamic-reward scenario against two static resistive memories and a system containing standard connections only. The results indicate that the extra behavioural degrees of freedom available to the networks incorporating variable resistive memories enable them to outperform the comparative synapse types.

  9. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the last generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
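
    The record gives no implementation details, but the general principle of stochastic computing it relies on can be illustrated in software: quantities are represented as Bernoulli bit streams, products become AND operations on bits, and a neuron fires when its accumulated count crosses a threshold. The sketch below is only a conceptual illustration, not the authors' FPGA design.

        import numpy as np

        rng = np.random.default_rng(3)

        def stochastic_neuron(p_input, p_weight, n_steps=10_000, threshold=20):
            """Toy stochastic neuron: input and weight arrive as Bernoulli bit streams,
            their product is an AND of bits, and spikes occur at a counter threshold."""
            v, spikes = 0, 0
            for _ in range(n_steps):
                in_bit = rng.random() < p_input      # stochastic input bit
                w_bit = rng.random() < p_weight      # stochastic weight bit
                v += int(in_bit and w_bit)           # AND gate acts as a probabilistic product
                v = max(v - int(rng.random() < 0.05), 0)   # occasional stochastic leak
                if v >= threshold:
                    spikes += 1
                    v = 0
            return spikes / n_steps                  # empirical firing rate

        for p in (0.2, 0.5, 0.8):
            print(f"input probability {p:.1f} -> firing rate {stochastic_neuron(p, 0.7):.4f}")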

  10. Linking structure and activity in nonlinear spiking networks.

    Directory of Open Access Journals (Sweden)

    Gabriel Koch Ocker

    2017-06-01

    Full Text Available Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks' spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities-including those of different cell types-combine with connectivity to shape population activity and function.

  11. Linking structure and activity in nonlinear spiking networks.

    Science.gov (United States)

    Ocker, Gabriel Koch; Josić, Krešimir; Shea-Brown, Eric; Buice, Michael A

    2017-06-01

    Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks' spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities-including those of different cell types-combine with connectivity to shape population activity and function.

  12. Noise influence on spike activation in a Hindmarsh–Rose small-world neural network

    International Nuclear Information System (INIS)

    Zhe, Sun; Micheletto, Ruggero

    2016-01-01

    We studied the role of noise in neural networks, especially focusing on its relation to the propagation of spike activity in a small-sized system. We set up a source of information using a single neuron that is constantly spiking. This element, called the initiator x_o, feeds spikes to the rest of the network, which is initially quiescent and subsequently reacts with vigorous spiking after a transitional period of time. We found that noise quickly suppresses the initiator's influence and favors spontaneous spike activity and, using a decibel representation of noise intensity, we established a linear relationship between noise amplitude and the interval between the initiator's first spike and the activation of the rest of the network. We studied the same process with networks of different sizes (number of neurons) and found that the initiator x_o has a measurable influence on small networks, but as the network grows in size, spontaneous spiking emerges, disrupting its effects on networks of more than about N = 100 neurons. This suggests that the mechanism of internal noise generation allows information transmission within a small neural neighborhood, but decays for bigger network domains. We also analyzed the Fourier spectrum of the whole network membrane potential and verified that noise provokes the reduction of the main θ and α peaks before transitioning into chaotic spiking. However, network size does not reproduce a similar phenomenon; instead we recorded a reduction in peaks' amplitude, a better sharpness and definition of Fourier peaks, but not the evident degeneration to chaos observed with increasing external noise. This work aims to contribute to the understanding of the fundamental mechanisms of propagation of spontaneous spiking in neural networks and gives a quantitative assessment of how noise can be used to control and modulate this phenomenon in Hindmarsh-Rose (H-R) neural networks. (paper)

  13. Noise influence on spike activation in a Hindmarsh-Rose small-world neural network

    Science.gov (United States)

    Zhe, Sun; Micheletto, Ruggero

    2016-07-01

    We studied the role of noise in neural networks, especially focusing on its relation to the propagation of spike activity in a small-sized system. We set up a source of information using a single neuron that is constantly spiking. This element, called the initiator x_o, feeds spikes to the rest of the network, which is initially quiescent and subsequently reacts with vigorous spiking after a transitional period of time. We found that noise quickly suppresses the initiator's influence and favors spontaneous spike activity and, using a decibel representation of noise intensity, we established a linear relationship between noise amplitude and the interval between the initiator's first spike and the activation of the rest of the network. We studied the same process with networks of different sizes (number of neurons) and found that the initiator x_o has a measurable influence on small networks, but as the network grows in size, spontaneous spiking emerges, disrupting its effects on networks of more than about N = 100 neurons. This suggests that the mechanism of internal noise generation allows information transmission within a small neural neighborhood, but decays for bigger network domains. We also analyzed the Fourier spectrum of the whole network membrane potential and verified that noise provokes the reduction of the main θ and α peaks before transitioning into chaotic spiking. However, network size does not reproduce a similar phenomenon; instead we recorded a reduction in peaks' amplitude, a better sharpness and definition of Fourier peaks, but not the evident degeneration to chaos observed with increasing external noise. This work aims to contribute to the understanding of the fundamental mechanisms of propagation of spontaneous spiking in neural networks and gives a quantitative assessment of how noise can be used to control and modulate this phenomenon in Hindmarsh-Rose (H-R) neural networks.
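
    For readers unfamiliar with the model, the Hindmarsh-Rose equations used in studies of this kind take a standard three-variable form; the sketch below integrates a single noisy H-R neuron with Euler-Maruyama steps. The parameter values are the commonly used textbook ones and are not necessarily those of the paper.

        import numpy as np

        def hindmarsh_rose(T=1000.0, dt=0.01, I=3.0, noise=0.1, seed=4):
            """Euler-Maruyama integration of one noisy Hindmarsh-Rose neuron using the
            standard parameters a=1, b=3, c=1, d=5, s=4, x_R=-1.6, r=0.006."""
            a, b, c, d, s, x_r, r = 1.0, 3.0, 1.0, 5.0, 4.0, -1.6, 0.006
            rng = np.random.default_rng(seed)
            x, y, z = -1.6, -10.0, 2.0
            trace = np.empty(int(T / dt))
            for i in range(trace.size):
                dx = y - a * x**3 + b * x**2 - z + I
                dy = c - d * x**2 - y
                dz = r * (s * (x - x_r) - z)
                x += dx * dt + noise * np.sqrt(dt) * rng.standard_normal()
                y += dy * dt
                z += dz * dt
                trace[i] = x
            return trace

        v = hindmarsh_rose()
        spikes = int(np.sum((v[1:] >= 1.0) & (v[:-1] < 1.0)))   # crude threshold crossings
        print("spikes detected in the membrane trace:", spikes)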

  14. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator

    Directory of Open Access Journals (Sweden)

    Jan Hahne

    2017-05-01

    Full Text Available Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  15. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    Science.gov (United States)

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  16. Transformation-invariant visual representations in self-organizing spiking neural networks.

    Science.gov (United States)

    Evans, Benjamin D; Stringer, Simon M

    2012-01-01

    The ventral visual pathway achieves object and face recognition by building transformation-invariant representations from elementary visual features. In previous computer simulation studies with rate-coded neural networks, the development of transformation-invariant representations has been demonstrated using either of two biologically plausible learning mechanisms, Trace learning and Continuous Transformation (CT) learning. However, it has not previously been investigated how transformation-invariant representations may be learned in a more biologically accurate spiking neural network. A key issue is how the synaptic connection strengths in such a spiking network might self-organize through Spike-Time Dependent Plasticity (STDP) where the change in synaptic strength is dependent on the relative times of the spikes emitted by the presynaptic and postsynaptic neurons rather than simply correlated activity driving changes in synaptic efficacy. Here we present simulations with conductance-based integrate-and-fire (IF) neurons using a STDP learning rule to address these gaps in our understanding. It is demonstrated that with the appropriate selection of model parameters and training regime, the spiking network model can utilize either Trace-like or CT-like learning mechanisms to achieve transform-invariant representations.

  17. Transform-invariant visual representations in self-organizing spiking neural networks

    Directory of Open Access Journals (Sweden)

    Benjamin eEvans

    2012-07-01

    Full Text Available The ventral visual pathway achieves object and face recognition by building transform-invariant representations from elementary visual features. In previous computer simulation studies with rate-coded neural networks, the development of transform-invariant representations has been demonstrated using either of two biologically plausible learning mechanisms, Trace learning and Continuous Transformation (CT) learning. However, it has not previously been investigated how transform-invariant representations may be learned in a more biologically accurate spiking neural network. A key issue is how the synaptic connection strengths in such a spiking network might self-organize through Spike-Time Dependent Plasticity (STDP), where the change in synaptic strength is dependent on the relative times of the spikes emitted by the pre- and postsynaptic neurons rather than simply correlated activity driving changes in synaptic efficacy. Here we present simulations with conductance-based integrate-and-fire (IF) neurons using a STDP learning rule to address these gaps in our understanding. It is demonstrated that with the appropriate selection of model parameters and training regime, the spiking network model can utilize either Trace-like or CT-like learning mechanisms to achieve transform-invariant representations.
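
    As background for the two records above, a minimal pair-based exponential STDP rule (the general form referred to by these studies, not their conductance-based network model) can be written as follows; learning rates and time constants are illustrative.

        import numpy as np

        def stdp_update(w, pre_times, post_times, a_plus=0.01, a_minus=0.012,
                        tau_plus=20.0, tau_minus=20.0, w_max=1.0):
            """Pair-based additive STDP: potentiate when pre precedes post, depress otherwise."""
            dw = 0.0
            for t_pre in pre_times:
                for t_post in post_times:
                    delta = t_post - t_pre
                    if delta > 0:
                        dw += a_plus * np.exp(-delta / tau_plus)
                    elif delta < 0:
                        dw -= a_minus * np.exp(delta / tau_minus)
            return float(np.clip(w + dw, 0.0, w_max))

        pre = np.array([10.0, 60.0, 110.0])          # pre spikes 5 ms before post -> potentiation
        post = pre + 5.0
        print("weight after causal pairing     :", round(stdp_update(0.5, pre, post), 4))
        print("weight after anti-causal pairing:", round(stdp_update(0.5, post, pre), 4))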

  18. Inference of neuronal network spike dynamics and topology from calcium imaging data

    Directory of Open Access Journals (Sweden)

    Henry eLütcke

    2013-12-01

    Full Text Available Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence ('spike trains') from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties.
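
    The generative model underlying this kind of simulation framework is conceptually simple: a spike train is convolved with a calcium-indicator kernel and corrupted by noise at a chosen SNR and sampling rate. The sketch below illustrates only that forward step, with placeholder kernel amplitude and decay; it does not reproduce the peeling-based inference.

        import numpy as np

        rng = np.random.default_rng(5)

        def simulate_calcium(rate_hz=0.5, duration_s=60.0, fs=30.0,
                             amp=0.1, tau_decay=1.0, noise_sd=0.02):
            """Poisson-like spikes -> exponential calcium transients -> noisy dF/F trace."""
            n = int(duration_s * fs)
            spikes = rng.random(n) < rate_hz / fs               # Bernoulli approximation
            t = np.arange(0.0, 5.0 * tau_decay, 1.0 / fs)
            kernel = amp * np.exp(-t / tau_decay)               # single-AP fluorescence response
            dff = np.convolve(spikes.astype(float), kernel)[:n]
            dff += noise_sd * rng.standard_normal(n)            # stand-in for photon noise
            return spikes, dff

        spikes, dff = simulate_calcium()
        print("simulated APs :", int(spikes.sum()))
        print("peak dF/F     :", round(float(dff.max()), 3))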

  19. Versatile Networks of Simulated Spiking Neurons Displaying Winner-Take-All Behavior

    Directory of Open Access Journals (Sweden)

    Yanqing eChen

    2013-03-01

    Full Text Available We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid Brain-Based-Device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.

  20. Versatile networks of simulated spiking neurons displaying winner-take-all behavior.

    Science.gov (United States)

    Chen, Yanqing; McKinstry, Jeffrey L; Edelman, Gerald M

    2013-01-01

    We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid brain-based-device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.

  1. The Second Spiking Threshold: Dynamics of Laminar Network Spiking in the Visual Cortex

    Science.gov (United States)

    Forsberg, Lars E.; Bonde, Lars H.; Harvey, Michael A.; Roland, Per E.

    2016-01-01

    Most neurons have a threshold separating the silent non-spiking state and the state of producing temporal sequences of spikes. But neurons in vivo also have a second threshold, found recently in granular layer neurons of the primary visual cortex, separating spontaneous ongoing spiking from visually evoked spiking driven by sharp transients. Here we examine whether this second threshold exists outside the granular layer and examine details of transitions between spiking states in ferrets exposed to moving objects. We found the second threshold, separating spiking states evoked by stationary and moving visual stimuli from the spontaneous ongoing spiking state, in all layers and zones of areas 17 and 18, indicating that the second threshold is a property of the network. Spontaneous and evoked spiking can thus easily be distinguished. In addition, the trajectories of spontaneous ongoing states were slow, frequently changing direction. In single trials, sharp as well as smooth and slow transients transform the trajectories to be outward directed, fast and crossing the threshold to become evoked. Although the speeds of the evolution of the evoked states differ, the same domain of the state space is explored, indicating uniformity of the evoked states. All evoked states return to the spontaneous ongoing spiking state as in a typical mono-stable dynamical system. In single trials, neither the original spiking rates nor the temporal evolution in state space could distinguish simple visual scenes. PMID:27582693

  2. Spike-timing computation properties of a feed-forward neural network model

    Directory of Open Access Journals (Sweden)

    Drew Benjamin Sinha

    2014-01-01

    Full Text Available Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g. serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.

  3. Bistability induces episodic spike communication by inhibitory neurons in neuronal networks.

    Science.gov (United States)

    Kazantsev, V B; Asatryan, S Yu

    2011-09-01

    Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in basic Hodgkin-Huxley equations describing the cell membrane dynamics. When the neuron is clamped near its threshold, the stable rest potential may coexist with the stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations where the neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that the bistability may induce spike communication by inhibitory coupled neurons in the spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate basic spike phase locking modes.

  4. Motif statistics and spike correlations in neuronal networks

    International Nuclear Information System (INIS)

    Hu, Yu; Shea-Brown, Eric; Trousdale, James; Josić, Krešimir

    2013-01-01

    Motifs are patterns of subgraphs of complex networks. We studied the impact of such patterns of connectivity on the level of correlated, or synchronized, spiking activity among pairs of cells in a recurrent network of integrate and fire neurons. For a range of network architectures, we find that the pairwise correlation coefficients, averaged across the network, can be closely approximated using only three statistics of network connectivity. These are the overall network connection probability and the frequencies of two second order motifs: diverging motifs, in which one cell provides input to two others, and chain motifs, in which two cells are connected via a third intermediary cell. Specifically, the prevalence of diverging and chain motifs tends to increase correlation. Our method is based on linear response theory, which enables us to express spiking statistics using linear algebra, and a resumming technique, which extrapolates from second order motifs to predict the overall effect of coupling on network correlation. Our motif-based results seek to isolate the effect of network architecture perturbatively from a known network state. (paper)
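
    The three connectivity statistics named above can be read directly off an adjacency matrix. The sketch below counts the connection probability, diverging motifs (one source with two targets) and chain motifs (two-step paths) of a random directed network; it only makes the definitions concrete and does not reproduce the linear-response calculation.

        import numpy as np

        rng = np.random.default_rng(6)
        N, p = 300, 0.1
        A = (rng.random((N, N)) < p).astype(int)     # A[i, j] = 1 if neuron j projects to neuron i
        np.fill_diagonal(A, 0)

        out_deg = A.sum(axis=0)                      # number of targets of each neuron
        in_deg = A.sum(axis=1)                       # number of sources of each neuron

        connection_prob = A.sum() / (N * (N - 1))
        diverging = int(np.sum(out_deg * (out_deg - 1)) // 2)   # pairs sharing a common source
        converging = int(np.sum(in_deg * (in_deg - 1)) // 2)    # pairs sharing a common target
        A2 = A @ A
        chains = int(A2.sum() - np.trace(A2))                   # paths i -> j -> k with i != k

        print("connection probability:", round(float(connection_prob), 4))
        print("diverging motifs      :", diverging)
        print("converging motifs     :", converging)
        print("chain motifs          :", chains)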

  5. Unsupervised clustering with spiking neurons by sparse temporal coding and multi-layer RBF networks

    NARCIS (Netherlands)

    S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)

    2000-01-01

    We demonstrate that spiking neural networks encoding information in spike times are capable of computing and learning clusters from realistic data. We show how a spiking neural network based on spike-time coding and Hebbian learning can successfully perform unsupervised clustering on

  6. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning.

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Kwan Chan, Pak; Tin, Chung

    2018-02-01

    Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  7. Spike Pattern Structure Influences Synaptic Efficacy Variability Under STDP and Synaptic Homeostasis. II: Spike Shuffling Methods on LIF Networks

    Directory of Open Access Journals (Sweden)

    Zedong Bi

    2016-08-01

    Full Text Available Synapses may undergo variable changes during plasticity because of the variability of spike patterns such as temporal stochasticity and spatial randomness. Here, we refer to the variability of synaptic weight changes during plasticity as efficacy variability. In this paper, we investigate how four aspects of spike pattern statistics (i.e., synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations) influence the efficacy variability under pair-wise additive spike-timing dependent plasticity (STDP) and synaptic homeostasis (the mean strength of plastic synapses into a neuron is bounded), by implementing spike shuffling methods onto spike patterns self-organized by a network of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons. With the increase of the decay time scale of the inhibitory synaptic currents, the LIF network undergoes a transition from an asynchronous state to a weakly synchronous state and then to a synchronous bursting state. We first shuffle these spike patterns using a variety of methods, each designed to clearly change a specific pattern statistic, and then investigate the change of efficacy variability of the synapses under STDP and synaptic homeostasis when the neurons in the network fire according to the spike patterns before and after being treated by a shuffling method. In this way, we can understand how the change of pattern statistics may cause the change of efficacy variability. Our results are consistent with those of our previous study which implements spike-generating models on converging motifs. We also find that burstiness/regularity is important in determining the efficacy variability under asynchronous states, while heterogeneity of cross-correlations is the main factor causing efficacy variability when the network moves into synchronous bursting states (the states observed in epilepsy).
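
    As an illustration of what a spike shuffling method does, the sketch below applies one generic manipulation: jittering spike times, which reduces synchronous firing while preserving each neuron's rate. It is not necessarily one of the specific methods evaluated in the paper.

        import numpy as np

        rng = np.random.default_rng(7)

        def jitter_spikes(spike_times, sigma_ms=20.0):
            """Add independent Gaussian jitter to every spike time: preserves each
            neuron's rate but washes out synchronous firing across neurons."""
            return np.sort(spike_times + rng.normal(0.0, sigma_ms, size=spike_times.shape))

        def coincidences(train_a, train_b, window_ms=2.0):
            """Count near-coincident spike pairs between two trains."""
            return int(np.sum(np.abs(train_a[:, None] - train_b[None, :]) <= window_ms))

        base = np.arange(0.0, 5000.0, 100.0)                     # two toy neurons firing together
        a = base + rng.normal(0.0, 0.5, base.size)
        b = base + rng.normal(0.0, 0.5, base.size)
        print("coincidences before jittering:", coincidences(a, b))
        print("coincidences after jittering :", coincidences(jitter_spikes(a), jitter_spikes(b)))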

  8. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  9. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-06

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
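
    The key ingredient of these two records, a distance-dependent connection probability, can be generated in a few lines: here neurons sit on a ring and connect with a probability that falls off as a Gaussian of distance. The peak probability and width are illustrative values, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(8)

        def distance_dependent_network(n=500, p_max=0.5, sigma=0.05):
            """Neurons on a ring of circumference 1; connection probability decays
            as a Gaussian of the wrap-around distance between positions."""
            pos = np.arange(n) / n
            d = np.abs(pos[:, None] - pos[None, :])
            d = np.minimum(d, 1.0 - d)                  # periodic (wrap-around) distance
            p = p_max * np.exp(-(d ** 2) / (2.0 * sigma ** 2))
            adj = rng.random((n, n)) < p
            np.fill_diagonal(adj, False)
            return adj, d

        adj, d = distance_dependent_network()
        print("connection probability at short range:", round(float(adj[d < 0.02].mean()), 3))
        print("connection probability at long range :", round(float(adj[d > 0.25].mean()), 5))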

  10. Neuronal spike sorting based on radial basis function neural networks

    Directory of Open Access Journals (Sweden)

    Taghavi Kani M

    2011-02-01

    Full Text Available Background: Studying the behavior of a society of neurons, extracting the communication mechanisms of the brain with other tissues, finding treatment for some nervous system diseases and designing neuroprosthetic devices all require an algorithm to sort neural spikes automatically. However, sorting neural spikes is a challenging task because of the low signal-to-noise ratio (SNR) of the spikes. The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes that are emitted from a specific region of the nervous system. Methods: The spike sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. Then, we chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF) neural network to sort the spikes. In most spike sorting devices, these signals are not linearly discriminative. In order to solve this problem, the aforesaid RBF neural network was used. Results: After the learning process, our proposed algorithm classified any arbitrary spike. The obtained results showed that even though the proposed Radial Basis Spike Sorter (RBSS) reached the same error as previous methods, its computational costs were much lower compared to other algorithms. Moreover, the competitive points of the proposed algorithm were its good speed and low computational complexity. Conclusion: Regarding the results of this study, the proposed algorithm seems to serve the purpose of procedures that require real-time processing and spike sorting.

  11. Communication through resonance in spiking neuronal networks.

    Science.gov (United States)

    Hahn, Gerald; Bujan, Alejandro F; Frégnac, Yves; Aertsen, Ad; Kumar, Arvind

    2014-08-01

    The cortex processes stimuli through a distributed network of specialized brain areas. This processing requires mechanisms that can route neuronal activity across weakly connected cortical regions. Routing models proposed thus far are either limited to propagation of spiking activity across strongly connected networks or require distinct mechanisms that create local oscillations and establish their coherence between distant cortical areas. Here, we propose a novel mechanism which explains how synchronous spiking activity propagates across weakly connected brain areas supported by oscillations. In our model, oscillatory activity unleashes network resonance that amplifies feeble synchronous signals and promotes their propagation along weak connections ("communication through resonance"). The emergence of coherent oscillations is a natural consequence of synchronous activity propagation and therefore the assumption of different mechanisms that create oscillations and provide coherence is not necessary. Moreover, the phase-locking of oscillations is a side effect of communication rather than its requirement. Finally, we show how the state of ongoing activity could affect the communication through resonance and propose that modulations of the ongoing activity state could influence information processing in distributed cortical networks.

  12. Embedded, everywhere: a research agenda for networked systems of embedded computers

    National Research Council Canada - National Science Library

    Committee on Networked Systems of Embedded Computers; National Research Council Staff; Division on Engineering and Physical Sciences; Computer Science and Telecommunications Board; National Academy of Sciences

    2001-01-01

    .... Embedded, Everywhere explores the potential of networked systems of embedded computers and the research challenges arising from embedding computation and communications technology into a wide variety of applications...

  13. Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); S.M. Bohte (Sander)

    2016-01-01

    Biological neurons communicate with a sparing exchange of pulses - spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks, using only so very few spikes to communicate. Building on

  14. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.

    Directory of Open Access Journals (Sweden)

    Gabriel Koch Ocker

    2015-08-01

    Full Text Available The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.

  15. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.

    Science.gov (United States)

    Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent

    2015-08-01

    The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.

  16. SpikeTemp: An Enhanced Rank-Order-Based Learning Approach for Spiking Neural Networks With Adaptive Structure.

    Science.gov (United States)

    Wang, Jinling; Belatreche, Ammar; Maguire, Liam P; McGinnity, Thomas Martin

    2017-01-01

    This paper presents an enhanced rank-order-based learning algorithm, called SpikeTemp, for spiking neural networks (SNNs) with a dynamically adaptive structure. The trained feed-forward SNN consists of two layers of spiking neurons: 1) an encoding layer which temporally encodes real-valued features into spatio-temporal spike patterns and 2) an output layer of dynamically grown neurons which perform spatio-temporal classification. Both Gaussian receptive fields and square cosine population encoding schemes are employed to encode real-valued features into spatio-temporal spike patterns. Unlike the rank-order-based learning approach, SpikeTemp uses the precise times of the incoming spikes for adjusting the synaptic weights such that early spikes result in a large weight change and late spikes lead to a smaller weight change. This removes the need to rank all the incoming spikes and, thus, reduces the computational cost of SpikeTemp. The proposed SpikeTemp algorithm is demonstrated on several benchmark data sets and on an image recognition task. The results show that SpikeTemp can achieve better classification performance and is much faster than the existing rank-order-based learning approach. In addition, the number of output neurons is much smaller when the square cosine encoding scheme is employed. Furthermore, SpikeTemp is benchmarked against a selection of existing machine learning algorithms, and the results demonstrate the ability of SpikeTemp to classify different data sets after just one presentation of the training samples with comparable classification performance.
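
    The Gaussian receptive field population encoding mentioned above maps a real value onto the firing times of a small group of neurons: the closer the value is to a neuron's preferred value, the earlier that neuron fires. The sketch below follows that general recipe with arbitrary constants; it is not the SpikeTemp implementation.

        import numpy as np

        def gaussian_population_encode(x, x_min=0.0, x_max=1.0, n_fields=8,
                                       t_max=10.0, threshold=0.1):
            """Encode scalar x into spike times of n_fields neurons with Gaussian tuning:
            strong activation -> early spike; weak activation -> no spike (inf)."""
            centers = np.linspace(x_min, x_max, n_fields)
            width = (x_max - x_min) / (n_fields - 1) / 1.5
            activation = np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))
            spike_times = t_max * (1.0 - activation)    # activation 1 maps to time 0
            spike_times[activation < threshold] = np.inf
            return spike_times

        for value in (0.15, 0.5):
            times = gaussian_population_encode(value)
            shown = " ".join("  - " if np.isinf(t) else f"{t:4.1f}" for t in times)
            print(f"x = {value:.2f} -> spike times (ms): {shown}")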

  17. Learning Spatiotemporally Encoded Pattern Transformations in Structured Spiking Neural Networks.

    Science.gov (United States)

    Gardner, Brian; Sporea, Ioana; Grüning, André

    2015-12-01

    Information encoding in the nervous system is supported through the precise spike timings of neurons; however, an understanding of the underlying processes by which such representations are formed in the first place remains an open question. Here we examine how multilayered networks of spiking neurons can learn to encode for input patterns using a fully temporal coding scheme. To this end, we introduce a new supervised learning rule, MultilayerSpiker, that can train spiking networks containing hidden layer neurons to perform transformations between spatiotemporal input and output spike patterns. The performance of the proposed learning rule is demonstrated in terms of the number of pattern mappings it can learn, the complexity of network structures it can be used on, and its classification accuracy when using multispike-based encodings. In particular, the learning rule displays robustness against input noise and can generalize well on an example data set. Our approach contributes to both a systematic understanding of how computations might take place in the nervous system and a learning rule that displays strong technical capability.

  18. Enhanced polychronisation in a spiking network with metaplasticity

    Directory of Open Access Journals (Sweden)

    Mira eGuise

    2015-02-01

    Full Text Available Computational models of metaplasticity have usually focused on the modeling of single synapses (Shouval et al., 2002). In this paper we study the effect of metaplasticity on network behavior. Our guiding assumption is that the primary purpose of metaplasticity is to regulate synaptic plasticity, by increasing it when input is low and decreasing it when input is high. For our experiments we adopt a model of metaplasticity that demonstrably has this effect for a single synapse; our primary interest is in how metaplasticity thus defined affects network-level phenomena. We focus on a network-level phenomenon called polychronicity, which has a potential role in representation and memory. A network with polychronicity has the ability to produce non-synchronous but precisely timed sequences of neural firing events that can arise from strongly connected groups of neurons called polychronous neural groups (Izhikevich et al., 2004; Izhikevich, 2006a). Polychronous groups (PNGs) develop readily when spiking networks are exposed to repeated spatio-temporal stimuli under the influence of spike-timing-dependent plasticity (STDP), but are sensitive to changes in synaptic weight distribution. We use a technique we have recently developed called Response Fingerprinting to show that PNGs formed in the presence of metaplasticity are significantly larger than those with no metaplasticity. A potential mechanism for this enhancement is proposed that links an inherent property of integrator-type neurons called spike latency to an increase in the tolerance of PNG neurons to jitter in their inputs.

  19. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung

    2018-02-01

    Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  20. Improving Spiking Dynamical Networks: Accurate Delays, Higher-Order Synapses, and Time Cells.

    Science.gov (United States)

    Voelker, Aaron R; Eliasmith, Chris

    2018-03-01

    Researchers building spiking neural networks face the challenge of improving the biological plausibility of their model networks while maintaining the ability to quantitatively characterize network behavior. In this work, we extend the theory behind the neural engineering framework (NEF), a method of building spiking dynamical networks, to permit the use of a broad class of synapse models while maintaining prescribed dynamics up to a given order. This theory improves our understanding of how low-level synaptic properties alter the accuracy of high-level computations in spiking dynamical networks. For completeness, we provide characterizations for both continuous-time (i.e., analog) and discrete-time (i.e., digital) simulations. We demonstrate the utility of these extensions by mapping an optimal delay line onto various spiking dynamical networks using higher-order models of the synapse. We show that these networks nonlinearly encode rolling windows of input history, using a scale invariant representation, with accuracy depending on the frequency content of the input signal. Finally, we reveal that these methods provide a novel explanation of time cell responses during a delay task, which have been observed throughout hippocampus, striatum, and cortex.

  1. Bio-inspired spiking neural network for nonlinear systems control.

    Science.gov (United States)

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNN) are the third generation of artificial neural networks. SNN are the closest approximation to biological neural networks. SNNs make use of temporal spike trains to command inputs and outputs, allowing a faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is not satisfactory or difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. SNN's inherent binary and temporary way of information codification facilitates their hardware implementation compared to analog neurons. Biological neural networks often require a lower number of neurons compared to other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to perform the control of non-linear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to perform controller training. The efficiency of the proposed network has been verified in two examples of dynamic systems control. Simulations show that the proposed control based on SNN exhibits superior performance compared to other approaches based on Neural Networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Adaptive coupling optimized spiking coherence and synchronization in Newman-Watts neuronal networks.

    Science.gov (United States)

    Gong, Yubing; Xu, Bo; Wu, Ya'nan

    2013-09-01

    In this paper, we have numerically studied the effect of adaptive coupling on the temporal coherence and synchronization of spiking activity in Newman-Watts Hodgkin-Huxley neuronal networks. It is found that random shortcuts can enhance the spiking synchronization more rapidly when the increment speed of the adaptive coupling is increased, and can optimize the temporal coherence of spikes only when the increment speed of the adaptive coupling is appropriate. It is also found that the adaptive coupling strength can enhance the synchronization of spikes and can optimize the temporal coherence of spikes when the random shortcuts are appropriate. These results show that adaptive coupling strongly influences the spiking activity mediated by random shortcuts and can enhance and optimize the temporal coherence and synchronization of the spiking activity of the network. These findings can help in better understanding the role of adaptive coupling in improving information processing and transmission in neural systems.
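
    The Newman-Watts topology used here (a ring lattice plus randomly added shortcuts) is straightforward to reproduce; a minimal sketch with networkx follows, with the node count, neighborhood size, and shortcut probability chosen arbitrarily rather than taken from the paper.

```python
import networkx as nx

# Newman-Watts small-world graph: ring of 100 nodes, each connected to its
# 4 nearest neighbours, with shortcut edges added with probability p.
p = 0.1
G = nx.newman_watts_strogatz_graph(n=100, k=4, p=p, seed=42)

# Adjacency matrix that a Hodgkin-Huxley network simulation could consume.
A = nx.to_numpy_array(G)
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```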

  3. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To retain the powerful computational capability of the hierarchical structure and temporal encoding mechanism while overcoming the low efficiency of existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper.

  4. Coherent and intermittent ensemble oscillations emerge from networks of irregular spiking neurons.

    Science.gov (United States)

    Hoseini, Mahmood S; Wessel, Ralf

    2016-01-01

    Local field potential (LFP) recordings from spatially distant cortical circuits reveal episodes of coherent gamma oscillations that are intermittent, and of variable peak frequency and duration. Concurrently, single neuron spiking remains largely irregular and of low rate. The underlying potential mechanisms of this emergent network activity have long been debated. Here we reproduce such intermittent ensemble oscillations in a model network, consisting of excitatory and inhibitory model neurons with the characteristics of regular-spiking (RS) pyramidal neurons, and fast-spiking (FS) and low-threshold spiking (LTS) interneurons. We find that fluctuations in the external inputs trigger reciprocally connected and irregularly spiking RS and FS neurons in episodes of ensemble oscillations, which are terminated by the recruitment of the LTS population with concurrent accumulation of inhibitory conductance in both RS and FS neurons. The model qualitatively reproduces experimentally observed phase drift, oscillation episode duration distributions, variation in the peak frequency, and the concurrent irregular single-neuron spiking at low rate. Furthermore, consistent with previous experimental studies using optogenetic manipulation, periodic activation of FS, but not RS, model neurons causes enhancement of gamma oscillations. In addition, increasing the coupling between two model networks from low to high reveals a transition from independent intermittent oscillations to coherent intermittent oscillations. In conclusion, the model network suggests biologically plausible mechanisms for the generation of episodes of coherent intermittent ensemble oscillations with irregular spiking neurons in cortical circuits. Copyright © 2016 the American Physiological Society.
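
    One common way to obtain RS, FS, and LTS firing phenotypes in simulation is the Izhikevich model, where the three classes differ only in two recovery parameters. The sketch below uses the standard published parameter values for these classes as an illustrative assumption; it is not necessarily the neuron model or parameter set used in this particular study.

```python
import numpy as np

# Standard Izhikevich parameters (a, b, c, d) for the three cell classes
# named in the abstract; values from Izhikevich (2003), not this study.
CELL_PARAMS = {
    "RS":  (0.02, 0.20, -65.0, 8.0),   # regular-spiking pyramidal cell
    "FS":  (0.10, 0.20, -65.0, 2.0),   # fast-spiking interneuron
    "LTS": (0.02, 0.25, -65.0, 2.0),   # low-threshold spiking interneuron
}

def simulate_izhikevich(cell="RS", I=10.0, T=1000.0, dt=0.5):
    """Simulate one Izhikevich neuron under constant drive I; return spike times (ms)."""
    a, b, c, d = CELL_PARAMS[cell]
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike: reset membrane and bump recovery
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

for cell in CELL_PARAMS:
    print(cell, "fires", len(simulate_izhikevich(cell)), "spikes/s")
```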

  5. Inverse stochastic resonance in networks of spiking neurons.

    Science.gov (United States)

    Uzuntarla, Muhammet; Barreto, Ernest; Torres, Joaquin J

    2017-07-01

    Inverse Stochastic Resonance (ISR) is a phenomenon in which the average spiking rate of a neuron exhibits a minimum with respect to noise. ISR has been studied in individual neurons, but here, we investigate ISR in scale-free networks, where the average spiking rate is calculated over the neuronal population. We use Hodgkin-Huxley model neurons with channel noise (i.e., stochastic gating variable dynamics), and the network connectivity is implemented via electrical or chemical connections (i.e., gap junctions or excitatory/inhibitory synapses). We find that the emergence of ISR depends on the interplay between each neuron's intrinsic dynamical structure, channel noise, and network inputs, where the latter in turn depend on network structure parameters. We observe that with weak gap junction or excitatory synaptic coupling, network heterogeneity and sparseness tend to favor the emergence of ISR. With inhibitory coupling, ISR is quite robust. We also identify dynamical mechanisms that underlie various features of this ISR behavior. Our results suggest possible ways of experimentally observing ISR in actual neuronal systems.

  6. Supervised learning in spiking neural networks with FORCE training.

    Science.gov (United States)

    Nicola, Wilten; Clopath, Claudia

    2017-12-20

    Populations of neurons display an extraordinary diversity in the behaviors they affect and display. Machine learning techniques have recently emerged that allow us to create networks of model neurons that display behaviors of similar complexity. Here we demonstrate the direct applicability of one such technique, the FORCE method, to spiking neural networks. We train these networks to mimic dynamical systems, classify inputs, and store discrete sequences that correspond to the notes of a song. Finally, we use FORCE training to create two biologically motivated model circuits. One is inspired by the zebra finch and successfully reproduces songbird singing. The second network is motivated by the hippocampus and is trained to store and replay a movie scene. FORCE trained networks reproduce behaviors comparable in complexity to their inspired circuits and yield information not easily obtainable with other techniques, such as behavioral responses to pharmacological manipulations and spike timing statistics.
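
    The core of FORCE training is a recursive least-squares update of readout weights against a running error. A minimal, rate-based sketch of that update is shown below; the full spiking implementation in the paper wraps this rule around a recurrent network simulation, and the activity vector here is only a stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, dt = 200, 2000, 1e-3
alpha = 1.0                              # RLS regularization
P = np.eye(N) / alpha                    # running inverse-correlation estimate
w = np.zeros(N)                          # readout weights being learned

def target(t):
    return np.sin(2 * np.pi * 5 * t)     # signal the readout should reproduce

r = rng.normal(0, 0.5, N)                # stand-in for filtered network activity
for step in range(T):
    t = step * dt
    # In the real method r comes from the recurrent (spiking) network; here we
    # just jitter it so the update rule has something to operate on.
    r = 0.9 * r + 0.1 * rng.normal(0, 0.5, N)
    z = w @ r                            # current readout
    e = z - target(t)                    # error before the update
    k = P @ r
    c = 1.0 / (1.0 + r @ k)
    P -= c * np.outer(k, k)              # RLS update of P
    w -= c * e * k                       # FORCE weight update
```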

  7. Evolving spiking neural networks: a novel growth algorithm exhibits unintelligent design

    Science.gov (United States)

    Schaffer, J. David

    2015-06-01

    Spiking neural networks (SNNs) have drawn considerable excitement because of their computational properties, believed to be superior to conventional von Neumann machines, and sharing properties with living brains. Yet progress building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome for this algorithm grows O(n) where n is the number of neurons; n is also evolved. The genome not only specifies the network topology, but all its parameters as well. Experiments show the algorithm producing SNNs that effectively produce a robust spike bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. In addition, the output spike patterns retain evidence of the specific perturbation of the inputs, a feature that could be exploited by network additions that could use this information for refined decision making if required. On a second task, a sequence detector, a discriminating design was found that might be considered an example of "unintelligent design"; extra non-functional neurons were included that, while inefficient, did not hamper its proper functioning.

  8. iSpike: a spiking neural interface for the iCub robot

    International Nuclear Information System (INIS)

    Gamez, D; Fidjeland, A K; Lazdins, E

    2012-01-01

    This paper presents iSpike: a C++ library that interfaces between spiking neural network simulators and the iCub humanoid robot. It uses a biologically inspired approach to convert the robot’s sensory information into spikes that are passed to the neural network simulator, and it decodes output spikes from the network into motor signals that are sent to control the robot. Applications of iSpike range from embodied models of the brain to the development of intelligent robots using biologically inspired spiking neural networks. iSpike is an open source library that is available for free download under the terms of the GPL. (paper)

  9. Stochastic synchronization in finite size spiking networks

    Science.gov (United States)

    Doiron, Brent; Rinzel, John; Reyes, Alex

    2006-09-01

    We study a stochastic synchronization of spiking activity in feedforward networks of integrate-and-fire model neurons. A stochastic mean field analysis shows that synchronization occurs only when the network size is sufficiently small. This gives evidence that the dynamics, and hence processing, of finite size populations can be drastically different from that observed in the infinite size limit. Our results agree with experimentally observed synchrony in cortical networks, and further strengthen the link between synchrony and propagation in cortical systems.

  10. A reanalysis of "Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons".

    Science.gov (United States)

    Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred

    2016-01-01

    Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.

  11. Virtual network embedding in cross-domain network based on topology and resource attributes

    Science.gov (United States)

    Zhu, Lei; Zhang, Zhizhong; Feng, Linlin; Liu, Lilan

    2018-03-01

    To address the ossification of network architectures and the diversity of access technologies, this paper studies cross-domain virtual network embedding. By analysing the topological attributes of nodes in the virtual and physical networks from both local and global perspectives, combined with local network resource properties, we rank the embedding priority of the nodes using PCA and TOPSIS methods. The link load distribution is also considered. Building on these elements, we propose a cross-domain virtual network embedding algorithm based on topology and resource attributes. Simulation results show that our algorithm increases the acceptance rate of multi-domain virtual network requests compared with existing virtual network embedding algorithms.
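
    TOPSIS itself is a standard multi-criteria ranking procedure; a minimal sketch (equal criteria weights, all criteria treated as benefits) is given below to show how node attributes could be turned into an embedding priority. The attribute matrix is invented for illustration and does not come from the paper.

```python
import numpy as np

def topsis_rank(X, weights=None):
    """Rank alternatives (rows of X) by TOPSIS; higher score = higher priority.
    All criteria are treated as benefit criteria in this sketch."""
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    w = np.ones(n) / n if weights is None else np.asarray(weights, float)
    V = X / np.linalg.norm(X, axis=0) * w          # normalized, weighted matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)     # ideal and anti-ideal points
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    score = d_worst / (d_best + d_worst)
    return np.argsort(-score), score

# Hypothetical node attributes: [CPU, adjacent bandwidth, degree, closeness].
nodes = np.array([
    [8, 120, 4, 0.42],
    [4, 300, 6, 0.55],
    [6,  90, 3, 0.38],
    [9, 210, 5, 0.61],
])
order, scores = topsis_rank(nodes)
print("embedding priority (best first):", order)
```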

  12. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  13. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Dejan Pecevski

    2011-12-01

    Full Text Available An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  14. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-12-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
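
    The "explaining away" structure referred to above is easy to make concrete with a tiny Bayesian network of two independent causes and one common effect. The sketch below computes the exact posteriors by enumeration, which is the kind of target distribution the spiking networks in the paper are shown to sample from; the probability values are arbitrary illustrative choices.

```python
import itertools

# Two independent binary causes A and B, one common effect E.
P_A, P_B = 0.3, 0.3
P_E_GIVEN = {(0, 0): 0.05, (1, 0): 0.8, (0, 1): 0.8, (1, 1): 0.95}

def joint(a, b, e):
    pa = P_A if a else 1 - P_A
    pb = P_B if b else 1 - P_B
    pe = P_E_GIVEN[(a, b)] if e else 1 - P_E_GIVEN[(a, b)]
    return pa * pb * pe

def posterior_A(evidence):
    """Exact P(A=1 | E=1, evidence) by enumeration over the remaining variables."""
    num = den = 0.0
    for a, b in itertools.product([0, 1], repeat=2):
        if "b" in evidence and b != evidence["b"]:
            continue
        p = joint(a, b, 1)
        den += p
        if a == 1:
            num += p
    return num / den

# Observing the alternative cause B lowers the posterior of A: explaining away.
print("P(A=1 | E=1)      =", round(posterior_A({}), 3))
print("P(A=1 | E=1, B=1) =", round(posterior_A({"b": 1}), 3))
```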

  15. Hybrid Spintronic-CMOS Spiking Neural Network with On-Chip Learning: Devices, Circuits, and Systems

    Science.gov (United States)

    Sengupta, Abhronil; Banerjee, Aparajita; Roy, Kaushik

    2016-12-01

    Over the past decade, spiking neural networks (SNNs) have emerged as one of the popular architectures to emulate the brain. In SNNs, information is temporally encoded and communication between neurons is accomplished by means of spikes. In such networks, spike-timing-dependent plasticity mechanisms require the online programming of synapses based on the temporal information of spikes transmitted by spiking neurons. In this work, we propose a spintronic synapse with decoupled spike-transmission and programming-current paths. The spintronic synapse consists of a ferromagnet-heavy-metal heterostructure where the programming current through the heavy metal generates spin-orbit torque to modulate the device conductance. Low programming energy and fast programming times demonstrate the efficacy of the proposed device as a nanoelectronic synapse. We perform a simulation study based on an experimentally benchmarked device-simulation framework to demonstrate the interfacing of such spintronic synapses with CMOS neurons and learning circuits operating in the transistor subthreshold region to form a network of spiking neurons that can be utilized for pattern-recognition problems.

  16. Spike propagation in driven chain networks with dominant global inhibition

    International Nuclear Information System (INIS)

    Chang Wonil; Jin, Dezhe Z.

    2009-01-01

    Spike propagation in chain networks is usually studied in the synfire regime, in which successive groups of neurons are synaptically activated sequentially through the unidirectional excitatory connections. Here we study the dynamics of chain networks with dominant global feedback inhibition that prevents the synfire activity. Neural activity is driven by suprathreshold external inputs. We analytically and numerically demonstrate that spike propagation along the chain is a unique dynamical attractor in a wide parameter regime. The strong inhibition permits a robust winner-take-all propagation in the case of multiple chains competing via the inhibition.

  17. [A wavelet neural network algorithm of EEG signals data compression and spikes recognition].

    Science.gov (United States)

    Zhang, Y; Liu, A; Yu, K

    1999-06-01

    A novel method and algorithm for compressed representation of EEG signals and recognition of epileptiform spikes, based on a wavelet neural network, is presented. The wavelet network can not only compress data effectively but also recover the original signal. In addition, the characteristics of spikes and spike-slow-wave rhythms are automatically detected from the time-frequency isolines of the EEG signal. This method is well suited to electrophysiological signal processing and time-frequency analysis.
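
    A minimal sketch of the compression side of such an approach, using PyWavelets rather than the authors' wavelet network: decompose the EEG trace, keep only the largest coefficients, and reconstruct. The synthetic signal, wavelet choice, and threshold are illustrative assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
fs, T = 256, 4                                  # 4 s of synthetic "EEG" at 256 Hz
t = np.arange(fs * T) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
eeg[500:505] += 4.0                             # crude spike-like transient

coeffs = pywt.wavedec(eeg, "db4", level=5)      # multilevel DWT
flat = np.concatenate([np.abs(c) for c in coeffs])
thresh = np.quantile(flat, 0.90)                # keep roughly the 10% largest coefficients
compressed = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
recon = pywt.waverec(compressed, "db4")[: eeg.size]

err = np.sqrt(np.mean((eeg - recon) ** 2)) / np.std(eeg)
print("normalized reconstruction error:", round(float(err), 3))
```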

  18. Macroscopic phase-resetting curves for spiking neural networks

    Science.gov (United States)

    Dumont, Grégory; Ermentrout, G. Bard; Gutkin, Boris

    2017-10-01

    The study of brain rhythms is an open-ended, and challenging, subject of interest in neuroscience. One of the best tools for the understanding of oscillations at the single neuron level is the phase-resetting curve (PRC). Synchronization in networks of neurons, effects of noise on the rhythms, effects of transient stimuli on the ongoing rhythmic activity, and many other features can be understood by the PRC. However, most macroscopic brain rhythms are generated by large populations of neurons, and so far it has been unclear how the PRC formulation can be extended to these more common rhythms. In this paper, we describe a framework to determine a macroscopic PRC (mPRC) for a network of spiking excitatory and inhibitory neurons that generate a macroscopic rhythm. We take advantage of a thermodynamic approach combined with a reduction method to simplify the network description to a small number of ordinary differential equations. From this simplified but exact reduction, we can compute the mPRC via the standard adjoint method. Our theoretical findings are illustrated with and supported by numerical simulations of the full spiking network. Notably our mPRC framework allows us to predict the difference between effects of transient inputs to the excitatory versus the inhibitory neurons in the network.

  19. Macroscopic phase-resetting curves for spiking neural networks.

    Science.gov (United States)

    Dumont, Grégory; Ermentrout, G Bard; Gutkin, Boris

    2017-10-01

    The study of brain rhythms is an open-ended, and challenging, subject of interest in neuroscience. One of the best tools for the understanding of oscillations at the single neuron level is the phase-resetting curve (PRC). Synchronization in networks of neurons, effects of noise on the rhythms, effects of transient stimuli on the ongoing rhythmic activity, and many other features can be understood by the PRC. However, most macroscopic brain rhythms are generated by large populations of neurons, and so far it has been unclear how the PRC formulation can be extended to these more common rhythms. In this paper, we describe a framework to determine a macroscopic PRC (mPRC) for a network of spiking excitatory and inhibitory neurons that generate a macroscopic rhythm. We take advantage of a thermodynamic approach combined with a reduction method to simplify the network description to a small number of ordinary differential equations. From this simplified but exact reduction, we can compute the mPRC via the standard adjoint method. Our theoretical findings are illustrated with and supported by numerical simulations of the full spiking network. Notably our mPRC framework allows us to predict the difference between effects of transient inputs to the excitatory versus the inhibitory neurons in the network.

  20. How adaptation shapes spike rate oscillations in recurrent neuronal networks

    Directory of Open Access Journals (Sweden)

    Moritz Augustin

    2013-02-01

    Full Text Available Neural mass signals from in-vivo recordings often show oscillations with frequencies ranging from <1 Hz to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance based synapses with heterogeneous strengths and delays we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation and oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.

  1. Energy-aware virtual network embedding in flexi-grid optical networks

    Science.gov (United States)

    Lin, Rongping; Luo, Shan; Wang, Haoran; Wang, Sheng; Chen, Bin

    2018-01-01

    The virtual network embedding (VNE) problem is to map multiple heterogeneous virtual networks (VNs) onto a shared substrate network, which mitigates the ossification of the substrate network. Meanwhile, energy efficiency has been widely considered in the network design. In this paper, we aim to solve the energy-aware VNE problem in flexi-grid optical networks. We provide an integer linear programming (ILP) formulation to minimize the power increment of each arriving VN request. We also propose a polynomial-time heuristic algorithm where virtual links are embedded sequentially to keep a reasonable acceptance ratio and maintain a low energy consumption. Numerical results show the functionality of the heuristic algorithm in a 24-node network.

  2. Spiking synchronization regulated by noise in three types of Hodgkin-Huxley neuronal networks

    International Nuclear Information System (INIS)

    Zhang Zheng-Zhen; Zeng Shang-You; Tang Wen-Yan; Hu Jin-Lin; Zeng Shao-Wen; Ning Wei-Lian; Qiu Yi; Wu Hui-Si

    2012-01-01

    In this paper, we study spiking synchronization in three different types of Hodgkin-Huxley neuronal networks, which are the small-world, regular, and random neuronal networks. All the neurons are subjected to subthreshold stimulus and external noise. It is found that in each of the neuronal networks there is an optimal strength of noise to induce the maximal spiking synchronization. We further demonstrate that in each of the neuronal networks there is a range of synaptic conductance to induce the effect that an optimal strength of noise maximizes the spiking synchronization. Only when the magnitude of the synaptic conductance is moderate will the effect be considerable. However, if the synaptic conductance is small or large, the effect vanishes. As the connections between neurons increase, the synaptic conductance to maximize the effect decreases. Therefore, we show quantitatively that the noise-induced maximal synchronization in the Hodgkin-Huxley neuronal network is a general effect, regardless of the specific type of neuronal network.

  3. Energy-aware virtual network embedding in flexi-grid networks.

    Science.gov (United States)

    Lin, Rongping; Luo, Shan; Wang, Haoran; Wang, Sheng

    2017-11-27

    Network virtualization technology has been proposed to allow multiple heterogeneous virtual networks (VNs) to coexist on a shared substrate network, which increases the utilization of the substrate network. Efficiently mapping VNs on the substrate network is a major challenge on account of the VN embedding (VNE) problem. Meanwhile, energy efficiency has been widely considered in the network design in terms of operation expenses and the ecological awareness. In this paper, we aim to solve the energy-aware VNE problem in flexi-grid optical networks. We provide an integer linear programming (ILP) formulation to minimize the electricity cost of each arriving VN request. We also propose a polynomial-time heuristic algorithm where virtual links are embedded sequentially to keep a reasonable acceptance ratio and maintain a low electricity cost. Numerical results show that the heuristic algorithm performs closely to the ILP for a small size network, and we also demonstrate its applicability to larger networks.

  4. Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.

    Science.gov (United States)

    Gangopadhyay, Ahana; Chakrabartty, Shantanu

    2017-04-27

    This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general enough, in this paper, we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of ΣΔ modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encodes its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.

  5. Chaos and reliability in balanced spiking networks with temporal drive.

    Science.gov (United States)

    Lajoie, Guillaume; Lin, Kevin K; Shea-Brown, Eric

    2013-05-01

    Biological information processing is often carried out by complex networks of interconnected dynamical units. A basic question about such networks is that of reliability: If the same signal is presented many times with the network in different initial states, will the system entrain to the signal in a repeatable way? Reliability is of particular interest in neuroscience, where large, complex networks of excitatory and inhibitory cells are ubiquitous. These networks are known to autonomously produce strongly chaotic dynamics, an obvious threat to reliability. Here, we show that such chaos persists in the presence of weak and strong stimuli, but that even in the presence of chaos, intermittent periods of highly reliable spiking often coexist with unreliable activity. We elucidate the local dynamical mechanisms involved in this intermittent reliability, and investigate the relationship between this phenomenon and certain time-dependent attractors arising from the dynamics. A conclusion is that chaotic dynamics do not have to be an obstacle to precise spike responses, a fact with implications for signal coding in large networks.

  6. A Reinforcement Learning Framework for Spiking Networks with Dynamic Synapses

    Directory of Open Access Journals (Sweden)

    Karim El-Laithy

    2011-01-01

    Full Text Available An integration of both Hebbian-based and reinforcement learning (RL) rules is presented for dynamic synapses. The proposed framework permits the Hebbian rule to update the hidden synaptic model parameters regulating the synaptic response rather than the synaptic weights. This is performed using both the value and the sign of the temporal difference in the reward signal after each trial. Applying this framework, a spiking network with spike-timing-dependent synapses is tested to learn the exclusive-OR computation on a temporally coded basis. Reward values are calculated from the distance between the output spike train of the network and a reference target train. Results show that the network is able to capture the required dynamics and that the proposed framework can indeed be seen as an integrated version of Hebbian and RL learning. The proposed framework is tractable and less computationally expensive. The framework is applicable to a wide class of synaptic models and is not restricted to the used neural representation. This generality, along with the reported results, supports adopting the introduced approach to benefit from biologically plausible synaptic models in a wide range of intuitive signal processing.
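
    The reward signal described above (a distance between output and target spike trains) can be computed, for example, with a van Rossum-style distance: convolve both trains with an exponential kernel and integrate the squared difference. The sketch below is one such implementation under assumed parameters; the paper does not necessarily use this exact metric.

```python
import numpy as np

def van_rossum_distance(train_a, train_b, tau=0.02, dt=1e-3, T=1.0):
    """Van Rossum-style distance between two spike trains (spike times in seconds):
    filter each train with an exponential kernel of time constant tau and
    integrate the squared difference of the filtered traces."""
    t = np.arange(0.0, T, dt)

    def filtered(train):
        f = np.zeros_like(t)
        for s in train:
            mask = t >= s
            f[mask] += np.exp(-(t[mask] - s) / tau)
        return f

    diff = filtered(train_a) - filtered(train_b)
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

output_train = [0.10, 0.31, 0.52, 0.74]
target_train = [0.12, 0.30, 0.55, 0.90]
reward = -van_rossum_distance(output_train, target_train)  # smaller distance, larger reward
print(round(reward, 3))
```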

  7. Pulsed neural networks consisting of single-flux-quantum spiking neurons

    International Nuclear Information System (INIS)

    Hirose, T.; Asai, T.; Amemiya, Y.

    2007-01-01

    An inhibitory pulsed neural network was developed for brain-like information processing, by using single-flux-quantum (SFQ) circuits. It consists of spiking neuron devices that are coupled to each other through all-to-all inhibitory connections. The network selects neural activity. The operation of the neural network was confirmed by computer simulation. SFQ neuron devices can imitate the operation of the inhibition phenomenon of neural networks

  8. Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances

    International Nuclear Information System (INIS)

    Fiete, Ila R.; Seung, H. Sebastian

    2006-01-01

    We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of 'empiric' synapses driven by random spike trains from an external source

  9. Spiking Regularity and Coherence in Complex Hodgkin–Huxley Neuron Networks

    International Nuclear Information System (INIS)

    Zhi-Qiang, Sun; Ping, Xie; Wei, Li; Peng-Ye, Wang

    2010-01-01

    We study the effects of the strength of coupling between neurons on the spiking regularity and coherence in a complex network with randomly connected Hodgkin–Huxley neurons driven by colored noise. It is found that for the given topology realization and colored noise correlation time, there exists an optimal strength of coupling, at which the spiking regularity of the network reaches the best level. Moreover, when the temporal regularity reaches the best level, the spatial coherence of the system has already increased to a relatively high level. In addition, for the given number of neurons and noise correlation time, the values of average regularity and spatial coherence at the optimal strength of coupling are nearly independent of the topology realization. Furthermore, there exists an optimal value of colored noise correlation time at which the spiking regularity can reach its best level. These results may be helpful for understanding real neuronal systems. (cross-disciplinary physics and related areas of science and technology)
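
    The "spiking regularity" measured in such studies is typically quantified by the coefficient of variation of interspike intervals (or its inverse). A minimal sketch of that measurement, applied to surrogate spike trains rather than the paper's simulations, is below.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of interspike intervals; lower CV = more regular firing."""
    isis = np.diff(np.sort(np.asarray(spike_times)))
    return isis.std() / isis.mean()

rng = np.random.default_rng(4)
# Surrogate trains: a jittered-regular train and a Poisson train at the same mean rate.
regular = np.cumsum(0.1 + 0.01 * rng.normal(size=200))
poisson = np.cumsum(rng.exponential(0.1, size=200))
print("CV (jittered regular):", round(isi_cv(regular), 2))
print("CV (Poisson):         ", round(isi_cv(poisson), 2))
```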

  10. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.

    Science.gov (United States)

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one of which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other relying on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.

  11. Directional spike propagation in a recurrent network: dynamical firewall as anisotropic recurrent inhibition.

    Science.gov (United States)

    Samura, Toshikazu; Hayashi, Hatsuo

    2012-09-01

    It has been demonstrated that theta rhythm propagates along the septotemporal axis of the hippocampal CA1 of the rat running on a track, and it has been suggested that directional spike propagation in the hippocampal CA3 is reflected in CA1. In this paper, we show that directional spike propagation occurs in a recurrent network model in which neurons are connected locally and connection weights are modified through STDP. The recurrent network model consists of excitatory and inhibitory neurons, which are intrinsic bursting and fast spiking neurons developed by Izhikevich, respectively. The maximum length of connections from excitatory neurons is shorter in the horizontal direction than the vertical direction. Connections from inhibitory neurons have the same maximum length in both directions, and the maximum length of inhibitory connections is the same as that of excitatory connections in the vertical direction. When connection weights between excitatory neurons (E→E) were modified through STDP and those from excitatory neurons to inhibitory neurons (E→I) were constant, spikes propagated in the vertical direction as expected from the network structure. However, when E→I connection weights were modified through STDP, as well as E→E connection weights, spikes propagated in the horizontal direction against the above expectation. This paradoxical propagation was produced by strengthened E→I connections which shifted the timing of inhibition forward. When E→I connections are enhanced, the direction of effective inhibition changes from horizontal to vertical, as if a gate for spike propagation is opened in the horizontal direction and firewalls come out in the vertical direction. These results suggest that the advance of timing of inhibition caused by potentiation of E→I connections is influential in network activity and is an important element in determining the direction of spike propagation. Copyright © 2012 Elsevier Ltd. All rights reserved.
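
    The STDP rule invoked here is commonly modeled with an exponential pair-based window: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. The sketch below implements that standard rule; the amplitudes and time constants are generic textbook values, not necessarily those of this model.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (seconds).
    Pre-before-post (delta_t > 0) potentiates; post-before-pre depresses."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

# Example: weight updates for a few spike-time differences (in seconds).
for dt_pair in (-0.05, -0.01, 0.01, 0.05):
    dw = float(stdp_dw(dt_pair))
    print(f"dt = {dt_pair:+.3f} s -> dw = {dw:+.5f}")
```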

  12. Impacts of clustering on noise-induced spiking regularity in the excitatory neuronal networks of subnetworks.

    Science.gov (United States)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2015-01-01

    In this paper, we investigate how clustering factors influent spiking regularity of the neuronal network of subnetworks. In order to do so, we fix the averaged coupling probability and the averaged coupling strength, and take the cluster number M, the ratio of intra-connection probability and inter-connection probability R, the ratio of intra-coupling strength and inter-coupling strength S as controlled parameters. With the obtained simulation results, we find that spiking regularity of the neuronal networks has little variations with changing of R and S when M is fixed. However, cluster number M could reduce the spiking regularity to low level when the uniform neuronal network's spiking regularity is at high level. Combined the obtained results, we can see that clustering factors have little influences on the spiking regularity when the entire energy is fixed, which could be controlled by the averaged coupling strength and the averaged connection probability.

  13. Spike timing analysis in neural networks with unsupervised synaptic plasticity

    Science.gov (United States)

    Mizusaki, B. E. P.; Agnes, E. J.; Brunnet, L. G.; Erichsen, R., Jr.

    2013-01-01

    The synaptic plasticity rules that sculpt a neural network architecture are key elements to understand cortical processing, as they may explain the emergence of stable, functional activity, while avoiding runaway excitation. For an associative memory framework, they should be built in a way as to enable the network to reproduce a robust spatio-temporal trajectory in response to an external stimulus. Still, how these rules may be implemented in recurrent networks and the way they relate to their capacity of pattern recognition remains unclear. We studied the effects of three phenomenological unsupervised rules in sparsely connected recurrent networks for associative memory: spike-timing-dependent-plasticity, short-term-plasticity and an homeostatic scaling. The system stability is monitored during the learning process of the network, as the mean firing rate converges to a value determined by the homeostatic scaling. Afterwards, it is possible to measure the recovery efficiency of the activity following each initial stimulus. This is evaluated by a measure of the correlation between spike fire timings, and we analysed the full memory separation capacity and limitations of this system.

  14. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Jan Hahne

    2015-09-01

    Full Text Available Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy...

  15. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.

    Science.gov (United States)

    Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin

    2018-01-01

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as

  16. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks

    Directory of Open Access Journals (Sweden)

    Rodrigo F. O. Pena

    2018-03-01

    Full Text Available Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i different neural subpopulations (e.g., excitatory and inhibitory neurons have different cellular or connectivity parameters; (ii the number and strength of the input connections are random (Erdős-Rényi topology and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of
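
    The spike-train power spectrum discussed in these two records can be estimated directly from simulated or recorded spikes by binning the train and applying Welch's method. A minimal sketch on a surrogate Poisson train (whose spectrum should be essentially flat) is below; the bin size, duration, and rate are arbitrary.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
dt, T, rate = 1e-3, 100.0, 10.0                     # 1 ms bins, 100 s, 10 Hz Poisson train
spike_train = (rng.random(int(T / dt)) < rate * dt).astype(float)

# Welch estimate of the power spectrum of the binned spike train.
f, pxx = welch(spike_train / dt, fs=1.0 / dt, nperseg=4096)

# For a Poisson process the spectrum is flat away from zero frequency;
# temporal correlations would show up as structure in pxx.
band_low = pxx[(f > 20) & (f < 100)].mean()
band_high = pxx[(f > 300) & (f < 400)].mean()
print("flatness check (band ratio, ~1 for Poisson):", round(float(band_low / band_high), 2))
```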

  17. Propagation of spiking regularity and double coherence resonance in feedforward networks.

    Science.gov (United States)

    Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok

    2012-03-01

    We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. Interestingly, double coherence resonance (DCR), arising from the combination of synaptic input correlation and noise intensity, is finally attained after layer-by-layer processing in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters in modulating both rate coding and the order of temporal coding.

  18. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
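
    The paper's non-reversible Markov chain construction is more involved than can be shown here; as a baseline for what "sampling a distribution with binary units" means, the sketch below runs a plain Gibbs sampler over a small Boltzmann distribution, which is exactly the kind of target distribution the spiking dynamics is designed to emulate in a neurally plausible way. The couplings and biases are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(6)

# Small Boltzmann distribution p(z) ~ exp(0.5 z^T W z + b^T z) over binary z.
W = np.array([[0.0, 1.2, -0.8],
              [1.2, 0.0,  0.5],
              [-0.8, 0.5, 0.0]])
b = np.array([-0.2, 0.1, 0.0])

def gibbs_sample(n_steps=50000, burn_in=5000):
    z = rng.integers(0, 2, size=3).astype(float)
    samples = []
    for step in range(n_steps):
        for k in range(3):                 # update each unit given the others
            u = b[k] + W[k] @ z            # local field (diagonal of W is zero)
            z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
        if step >= burn_in:
            samples.append(z.copy())
    return np.array(samples)

samples = gibbs_sample()
print("estimated marginals P(z_k = 1):", samples.mean(axis=0).round(3))
```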

  19. Sequentially switching cell assemblies in random inhibitory networks of spiking neurons in the striatum.

    Science.gov (United States)

    Ponzi, Adam; Wickens, Jeff

    2010-04-28

    The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.

  20. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    Science.gov (United States)

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneities affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are also developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  1. Spiking in auditory cortex following thalamic stimulation is dominated by cortical network activity

    Science.gov (United States)

    Krause, Bryan M.; Raz, Aeyal; Uhlrich, Daniel J.; Smith, Philip H.; Banks, Matthew I.

    2014-01-01

    The state of the sensory cortical network can have a profound impact on neural responses and perception. In rodent auditory cortex, sensory responses are reported to occur in the context of network events, similar to brief UP states, that produce “packets” of spikes and are associated with synchronized synaptic input (Bathellier et al., 2012; Hromadka et al., 2013; Luczak et al., 2013). However, traditional models based on data from visual and somatosensory cortex predict that ascending sensory thalamocortical (TC) pathways sequentially activate cells in layers 4 (L4), L2/3, and L5. The relationship between these two spatio-temporal activity patterns is unclear. Here, we used calcium imaging and electrophysiological recordings in murine auditory TC brain slices to investigate the laminar response pattern to stimulation of TC afferents. We show that although monosynaptically driven spiking in response to TC afferents occurs, the vast majority of spikes fired following TC stimulation occurs during brief UP states and outside the context of the L4>L2/3>L5 activation sequence. Specifically, monosynaptic subthreshold TC responses with similar latencies were observed throughout layers 2–6, presumably via synapses onto dendritic processes located in L3 and L4. However, monosynaptic spiking was rare, and occurred primarily in L4 and L5 non-pyramidal cells. By contrast, during brief, TC-induced UP states, spiking was dense and occurred primarily in pyramidal cells. These network events always involved infragranular layers, whereas involvement of supragranular layers was variable. During UP states, spike latencies were comparable between infragranular and supragranular cells. These data are consistent with a model in which activation of auditory cortex, especially supragranular layers, depends on internally generated network events that represent a non-linear amplification process, are initiated by infragranular cells and tightly regulated by feed-forward inhibitory

  2. Topological Embedding Feature Based Resource Allocation in Network Virtualization

    Directory of Open Access Journals (Sweden)

    Hongyan Cui

    2014-01-01

    Full Text Available Virtualization provides a powerful way to run multiple virtual networks on a shared substrate network, which needs accurate and efficient mathematical models. Virtual network embedding is a challenge in network virtualization. In this paper, considering the degree of convergence when mapping a virtual network onto a substrate network, we propose a new embedding algorithm based on topology mapping convergence-degree. Convergence-degree means the adjacency degree of a virtual network’s nodes when they are mapped onto a substrate network. The contributions of our method are as follows. Firstly, we map virtual nodes onto the substrate nodes with the maximum convergence-degree. The simulation results show that our proposed algorithm largely enhances the network utilization efficiency and decreases the complexity of the embedding problem. Secondly, we define the load balance rate to reflect the load balance of substrate links. The simulation results show our proposed algorithm achieves better load balance. Finally, based on the features of the star topology, we further improve our embedding algorithm and make it suitable for application in star topologies. The test results show that it achieves better performance than previous works.
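    As a rough illustration of this kind of greedy node mapping (a toy sketch only, not the paper's algorithm; the scoring rule, tie-breaking and example graphs are assumptions), the snippet below places each virtual node on the free substrate node that is adjacent to the largest number of already-used substrate nodes.

```python
# Toy sketch of a convergence-degree style greedy node mapping (illustrative only, not
# the paper's exact algorithm): each virtual node is placed on the free substrate node
# adjacent to the largest number of substrate nodes that are already in use.
import networkx as nx

def greedy_embed(virtual, substrate):
    mapping, used = {}, set()
    # map virtual nodes in decreasing degree order
    for v in sorted(virtual, key=virtual.degree, reverse=True):
        best, best_score = None, -1
        for s in substrate:
            if s in used:
                continue
            score = sum(1 for nb in substrate.neighbors(s) if nb in used)
            if score > best_score:               # first node with the best score wins
                best, best_score = s, score
        mapping[v] = best
        used.add(best)
    return mapping

virtual = nx.path_graph(3)
substrate = nx.grid_2d_graph(3, 3)
print(greedy_embed(virtual, substrate))
```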

  3. Stimulus Sensitivity of a Spiking Neural Network Model

    Science.gov (United States)

    Chevallier, Julien

    2018-02-01

    Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using a mean-field approximation, the response of the network to a stimulus is computed and we provide a notion of stimulus sensitivity. It appears that, for a range of biologically relevant parameters, the maximal sensitivity is achieved in the sub-critical regime, yet close to criticality.
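    For readers unfamiliar with Hawkes processes, the sketch below simulates an ordinary self-exciting Hawkes process with an exponential kernel via Ogata's thinning method; it is a plain, single-unit Hawkes process with illustrative parameters, not the age-dependent, high-dimensional variant analysed in the paper.

```python
# Minimal sketch: simulate a self-exciting Hawkes process with an exponential kernel
# by Ogata's thinning method. Parameters are illustrative placeholders.
import numpy as np

def simulate_hawkes(mu=1.0, alpha=0.5, beta=2.0, t_end=50.0, seed=0):
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while t < t_end:
        # current intensity; it only decays until the next event, so it is an upper bound
        lam_bar = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_end:
            break
        lam_t = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam_bar:   # accept with probability lambda(t)/lambda_bar
            events.append(t)
    return np.array(events)

spikes = simulate_hawkes()
print(len(spikes), "events; mean rate approx", len(spikes) / 50.0)
```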

  4. The extreme vulnerability of interdependent spatially embedded networks

    Science.gov (United States)

    Bashan, Amir; Berezin, Yehiel; Buldyrev, Sergey V.; Havlin, Shlomo

    2013-10-01

    Recent studies show that in interdependent networks a very small failure in one network may lead to catastrophic consequences. Above a critical fraction of interdependent nodes, even a single node failure can invoke cascading failures that may abruptly fragment the system, whereas below this critical dependency a failure of a few nodes leads only to a small amount of damage to the system. So far, research has focused on interdependent random networks without space limitations. However, many real systems, such as power grids and the Internet, are not random but are spatially embedded. Here we analytically and numerically study the stability of interdependent spatially embedded networks modelled as lattice networks. Surprisingly, we find that in lattice systems, in contrast to non-embedded systems, there is no critical dependency and any small fraction of interdependent nodes leads to an abrupt collapse. We show that this extreme vulnerability of very weakly coupled lattices is a consequence of the critical exponent describing the percolation transition of a single lattice.
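    The toy simulation below (illustrative only, not the paper's analytical treatment) reproduces the basic cascade mechanism on two interdependent square lattices with random one-to-one dependency links: nodes that fall outside the giant component of their own lattice fail, their dependent partners fail in turn, and the process iterates until it stabilizes.

```python
# Toy sketch of cascading failures in two interdependent square lattices with random
# one-to-one dependency links (not the paper's analytical model).
import networkx as nx
import random

def cascade(L=20, p_remove=0.05, seed=1):
    random.seed(seed)
    A = nx.grid_2d_graph(L, L)
    B = nx.grid_2d_graph(L, L)
    nodes = sorted(A.nodes())
    shuffled = nodes[:]
    random.shuffle(shuffled)
    dep_AB = dict(zip(nodes, shuffled))          # random one-to-one dependency A -> B
    dep_BA = {b: a for a, b in dep_AB.items()}

    alive_A = set(nodes) - set(random.sample(nodes, int(p_remove * L * L)))
    alive_B = {dep_AB[a] for a in alive_A}       # partners of failed A-nodes fail too
    while True:
        gA = max(nx.connected_components(A.subgraph(alive_A)), key=len) if alive_A else set()
        gB = max(nx.connected_components(B.subgraph(alive_B)), key=len) if alive_B else set()
        # keep only nodes inside their own giant component whose partner also survived
        new_A = {a for a in gA if dep_AB[a] in gB}
        new_B = {b for b in gB if dep_BA[b] in gA}
        if new_A == alive_A and new_B == alive_B:
            return len(new_A) / (L * L)
        alive_A, alive_B = new_A, new_B

print("surviving fraction:", cascade())
```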

  5. A compound memristive synapse model for statistical learning through STDP in spiking neural networks.

    Science.gov (United States)

    Bill, Johannes; Legenstein, Robert

    2014-01-01

    Memristors have recently emerged as promising circuit elements to mimic the function of biological synapses in neuromorphic computing. The fabrication of reliable nanoscale memristive synapses, that feature continuous conductance changes based on the timing of pre- and postsynaptic spikes, has however turned out to be challenging. In this article, we propose an alternative approach, the compound memristive synapse, that circumvents this problem by the use of memristors with binary memristive states. A compound memristive synapse employs multiple bistable memristors in parallel to jointly form one synapse, thereby providing a spectrum of synaptic efficacies. We investigate the computational implications of synaptic plasticity in the compound synapse by integrating the recently observed phenomenon of stochastic filament formation into an abstract model of stochastic switching. Using this abstract model, we first show how standard pulsing schemes give rise to spike-timing dependent plasticity (STDP) with a stabilizing weight dependence in compound synapses. In a next step, we study unsupervised learning with compound synapses in networks of spiking neurons organized in a winner-take-all architecture. Our theoretical analysis reveals that compound-synapse STDP implements generalized Expectation-Maximization in the spiking network. Specifically, the emergent synapse configuration represents the most salient features of the input distribution in a Mixture-of-Gaussians generative model. Furthermore, the network's spike response to spiking input streams approximates a well-defined Bayesian posterior distribution. We show in computer simulations how such networks learn to represent high-dimensional distributions over images of handwritten digits with high fidelity even in presence of substantial device variations and under severe noise conditions. Therefore, the compound memristive synapse may provide a synaptic design principle for future neuromorphic architectures.
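    A minimal sketch of the compound-synapse idea is given below; the switching probabilities and pairing scheme are placeholders rather than the paper's device model, but they show how stochastic binary switching of many bistable elements yields a graded, saturating (and hence stabilising) weight change.

```python
# Minimal sketch (assumed parameters, not the paper's device model): a compound synapse
# made of M bistable memristors. A pre-before-post pairing switches each OFF memristor ON
# with probability p_pot; a post-before-pre pairing switches each ON memristor OFF with
# probability p_dep. The efficacy is the fraction of ON memristors.
import numpy as np

rng = np.random.default_rng(0)
M = 32                      # bistable memristors per compound synapse
state = np.zeros(M, bool)   # False = OFF, True = ON
p_pot, p_dep = 0.1, 0.1

def pair(dt_pre_post):
    """Apply one spike pairing; dt > 0 means pre fired before post (potentiation)."""
    global state
    if dt_pre_post > 0:
        flips = (~state) & (rng.random(M) < p_pot)
        state = state | flips
    else:
        flips = state & (rng.random(M) < p_dep)
        state = state & ~flips
    return state.mean()      # graded efficacy from binary devices

for k in range(20):
    w = pair(+5e-3)          # repeated causal pairings drive the weight up, saturating at 1
print("weight after potentiation:", w)
```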

  6. Column generation algorithms for virtual network embedding in flexi-grid optical networks.

    Science.gov (United States)

    Lin, Rongping; Luo, Shan; Zhou, Jingwei; Wang, Sheng; Chen, Bin; Zhang, Xiaoning; Cai, Anliang; Zhong, Wen-De; Zukerman, Moshe

    2018-04-16

    Network virtualization provides means for efficient management of network resources by embedding multiple virtual networks (VNs) to share efficiently the same substrate network. Such virtual network embedding (VNE) gives rise to a challenging problem of how to optimize resource allocation to VNs and to guarantee their performance requirements. In this paper, we provide VNE algorithms for efficient management of flexi-grid optical networks. We provide an exact algorithm aiming to minimize the total embedding cost in terms of spectrum cost and computation cost for a single VN request. Then, to achieve scalability, we also develop a heuristic algorithm for the same problem. We apply these two algorithms to a dynamic traffic scenario where many VN requests arrive one by one. We first demonstrate by simulations for the case of a six-node network that the heuristic algorithm obtains blocking probabilities very close to those of the exact algorithm (about 0.2% higher). Then, for a network of realistic size (namely, USnet) we demonstrate that the blocking probability of our new heuristic algorithm is about one order of magnitude lower than that of a simpler heuristic algorithm, which was a component of an earlier published algorithm.

  7. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.

    Science.gov (United States)

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.

  8. Dynamic virtual optical network embedding in spectral and spatial domains over elastic optical networks with multicore fibers

    Science.gov (United States)

    Zhu, Ruijie; Zhao, Yongli; Yang, Hui; Tan, Yuanlong; Chen, Haoran; Zhang, Jie; Jue, Jason P.

    2016-08-01

    Network virtualization can eradicate the ossification of the infrastructure and stimulate innovation of new network architectures and applications. Elastic optical networks (EONs) are ideal substrate networks for provisioning flexible virtual optical network (VON) services. However, as network traffic continues to increase exponentially, the capacity of EONs will soon reach its physical limit. To further increase network flexibility and capacity, the concept of EONs is extended into the spatial domain. How to map VONs onto substrate networks while fully exploiting both spectral and spatial resources is therefore extremely important. This process is called VON embedding (VONE). Considering the two kinds of resources at the same time during the embedding process, we propose two VONE algorithms, the adjacent link embedding algorithm (ALEA) and the remote link embedding algorithm (RLEA). First, we introduce a model to solve the VONE problem. Then we design a measure of the embedding ability of network elements, on which the two VONE algorithms are based. Simulation results show that the proposed VONE algorithms achieve better performance than the baseline algorithm in terms of blocking probability and revenue-to-cost ratio.

  9. A network of spiking neurons that can represent interval timing: mean field analysis.

    Science.gov (United States)

    Gavornik, Jeffrey P; Shouval, Harel Z

    2011-04-01

    Despite the vital importance of our ability to accurately process and encode temporal information, the underlying neural mechanisms are largely unknown. We have previously described a theoretical framework that explains how temporal representations, similar to those reported in the visual cortex, can form in locally recurrent cortical networks as a function of reward modulated synaptic plasticity. This framework allows networks of both linear and spiking neurons to learn the temporal interval between a stimulus and paired reward signal presented during training. Here we use a mean field approach to analyze the dynamics of non-linear stochastic spiking neurons in a network trained to encode specific time intervals. This analysis explains how recurrent excitatory feedback allows a network structure to encode temporal representations.

  10. Adaptive robotic control driven by a versatile spiking cerebellar network.

    Directory of Open Access Journals (Sweden)

    Claudia Casellato

    Full Text Available The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (Eye blinking classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed-loop. The SNN processed in real-time mossy fiber inputs as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fibers-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimuli and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.

  11. Adaptive robotic control driven by a versatile spiking cerebellar network.

    Science.gov (United States)

    Casellato, Claudia; Antonietti, Alberto; Garrido, Jesus A; Carrillo, Richard R; Luque, Niceto R; Ros, Eduardo; Pedrocchi, Alessandra; D'Angelo, Egidio

    2014-01-01

    The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (Eye blinking classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed-loop. The SNN processed in real-time mossy fiber inputs as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fibers-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimuli and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.

  12. A Reconfigurable and Biologically Inspired Paradigm for Computation Using Network-On-Chip and Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Jim Harkin

    2009-01-01

    Full Text Available FPGA devices have emerged as a popular platform for the rapid prototyping of biological Spiking Neural Networks (SNNs) applications, offering the key requirement of reconfigurability. However, FPGAs do not efficiently realise the biologically plausible neuron and synaptic models of SNNs, and current FPGA routing structures cannot accommodate the high levels of interneuron connectivity inherent in complex SNNs. This paper highlights and discusses the current challenges of implementing scalable SNNs on reconfigurable FPGAs. The paper proposes a novel field programmable neural network architecture (EMBRACE), incorporating low-power analogue spiking neurons, interconnected using a Network-on-Chip architecture. Results on the evaluation of the EMBRACE architecture using the XOR benchmark problem are presented, and the performance of the architecture is discussed. The paper also discusses the adaptability of the EMBRACE architecture in supporting fault tolerant computing.

  13. Training spiking neural networks to associate spatio-temporal input-output spike patterns

    OpenAIRE

    Mohemmed, A; Schliebs, S; Matsuda, S; Kasabov, N

    2013-01-01

    In a previous work (Mohemmed et al., Method for training a spiking neuron to associate input–output spike trains) [1] we have proposed a supervised learning algorithm based on temporal coding to train a spiking neuron to associate input spatiotemporal spike patterns to desired output spike patterns. The algorithm is based on the conversion of spike trains into analogue signals and the application of the Widrow–Hoff learning rule. In this paper we present a mathematical formulation of the prop...
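    The fragment below sketches only the general recipe mentioned in the abstract, under assumed parameters: input and target spike trains are convolved with an exponential kernel to obtain analogue traces, and the Widrow-Hoff (delta) rule is applied so that the weighted sum of the input traces approaches the target trace.

```python
# Sketch of the general idea only (not the authors' exact formulation): convert spike
# trains to analogue traces by convolution with an exponential kernel, then apply the
# Widrow-Hoff (delta) rule to pull the weighted input sum toward the target trace.
import numpy as np

rng = np.random.default_rng(0)
T, dt, tau = 0.5, 1e-3, 10e-3
steps = int(T / dt)
kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)

def to_analogue(spike_train):
    return np.convolve(spike_train, kernel)[:steps]

n_in = 20
inputs = rng.random((n_in, steps)) < 0.02          # Poisson-like input spike trains
target = rng.random(steps) < 0.01                  # desired output spike train
x = np.array([to_analogue(s) for s in inputs])     # analogue input traces
y_d = to_analogue(target)                          # analogue target trace

w, eta = np.zeros(n_in), 0.05
for _ in range(200):                               # repeated presentations of the pattern
    y = w @ x
    w += eta * (x @ (y_d - y)) * dt                # Widrow-Hoff / delta rule
print("remaining error:", float(np.mean((y_d - w @ x) ** 2)))
```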

  14. Phase diagram of spiking neural networks.

    Science.gov (United States)

    Seyed-Allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, that 20% of neurons are inhibitory and that 80% are excitatory. These common values are based on experiments, observations, and trial and error. Here, I take a different perspective, inspired by evolution: I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, the dominant frequency of population activity, the total duration of activity, the maximum population rate and the time at which this maximum occurs. The results are organized in a phase diagram. This phase diagram gives insight into the space of parameters: the excitatory-to-inhibitory ratio, the sparseness of connections and the synaptic weights. It can be used to decide the parameters of a model. The phase diagrams show that networks configured according to the common values have a good dynamic range in response to an impulse, that this dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate at α or β frequencies, independently of external stimuli.
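    The following toy sweep (a crude probabilistic caricature with placeholder parameters, not the paper's spiking model or its full set of measures) shows the kind of experiment described: stimulate random E/I networks across connection probabilities and inhibitory fractions and record how long activity persists.

```python
# Toy parameter sweep in the spirit of the abstract (a probabilistic caricature, not the
# paper's spiking model): for each (connection probability, inhibitory fraction) pair,
# pulse a random E/I network and record how long activity lasts.
import numpy as np

rng = np.random.default_rng(0)

def activity_duration(N=400, p_conn=0.02, f_inh=0.2, w=0.3, steps=200):
    sign = np.where(rng.random(N) < f_inh, -1.0, 1.0)           # inhibitory vs excitatory
    W = (rng.random((N, N)) < p_conn) * w * sign[None, :]       # column = presynaptic sign
    active = rng.random(N) < 0.1                                # initial pulse
    for t in range(steps):
        if not active.any():
            return t
        drive = W @ active
        active = rng.random(N) < np.clip(drive, 0.0, 1.0)       # probabilistic threshold
    return steps

for p_conn in (0.01, 0.02, 0.05):
    for f_inh in (0.1, 0.2, 0.3):
        print(p_conn, f_inh, activity_duration(p_conn=p_conn, f_inh=f_inh))
```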

  15. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms

    Directory of Open Access Journals (Sweden)

    Evangelos eStromatias

    2015-07-01

    Full Text Available Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs), are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost 2 bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.

  16. Mapping cortical mesoscopic networks of single spiking cortical or sub-cortical neurons.

    Science.gov (United States)

    Xiao, Dongsheng; Vanni, Matthieu P; Mitelut, Catalin C; Chan, Allen W; LeDue, Jeffrey M; Xie, Yicheng; Chen, Andrew Cn; Swindale, Nicholas V; Murphy, Timothy H

    2017-02-04

    Understanding the basis of brain function requires knowledge of cortical operations over wide spatial scales, but also within the context of single neurons. In vivo, wide-field GCaMP imaging and sub-cortical/cortical cellular electrophysiology were used in mice to investigate relationships between spontaneous single neuron spiking and mesoscopic cortical activity. We make use of a rich set of cortical activity motifs that are present in spontaneous activity in anesthetized and awake animals. A mesoscale spike-triggered averaging procedure allowed the identification of motifs that are preferentially linked to individual spiking neurons by employing genetically targeted indicators of neuronal activity. Thalamic neurons predicted and reported specific cycles of wide-scale cortical inhibition/excitation. In contrast, spike-triggered maps derived from single cortical neurons yielded spatio-temporal maps expected for regional cortical consensus function. This approach can define network relationships between any point source of neuronal spiking and mesoscale cortical maps.
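    A minimal sketch of the spike-triggered averaging step on synthetic data (not the authors' imaging pipeline) is shown below: for each spike of a unit, the surrounding window of wide-field frames is collected and averaged into a spatio-temporal motif.

```python
# Minimal sketch of spike-triggered averaging on synthetic data (not the authors'
# pipeline): average the imaging frames surrounding each spike of a single unit.
import numpy as np

rng = np.random.default_rng(0)
n_frames, H, W = 2000, 16, 16
frames = rng.standard_normal((n_frames, H, W))      # stand-in for dF/F imaging frames
spike_frames = np.sort(rng.choice(np.arange(50, n_frames - 50), 100, replace=False))

pre, post = 10, 20                                  # frames before/after each spike
sta = np.zeros((pre + post, H, W))
for f in spike_frames:
    sta += frames[f - pre:f + post]
sta /= len(spike_frames)

print("spike-triggered map shape:", sta.shape)      # (time lag, height, width)
```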

  17. Improved SpikeProp for Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Falah Y. H. Ahmed

    2013-01-01

    Full Text Available A spiking neural network encodes information in the timing of individual spikes. A novel supervised learning rule for SpikeProp is derived to overcome the discontinuities introduced by spike thresholding. This algorithm is based on an error-backpropagation learning rule suited for supervised learning of spiking neurons that use exact spike time coding. SpikeProp demonstrates that spiking neurons can perform complex nonlinear classification with fast temporal coding. This study proposes enhancements of the SpikeProp learning algorithm for supervised training of spiking networks which can deal with complex patterns. The proposed methods include SpikeProp particle swarm optimization (PSO) and an angle-driven dependency learning rate. These methods are applied to the SpikeProp network to enhance multilayer learning and optimize the weights. Input and output patterns are encoded as spike trains of precisely timed spikes, and the network learns to transform the input trains into target output trains. With these enhancements, our proposed methods outperformed other conventional neural network architectures.

  18. A complex-valued firing-rate model that approximates the dynamics of spiking networks.

    Directory of Open Access Journals (Sweden)

    Evan S Schaffer

    2013-10-01

    Full Text Available Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons.

  19. A complex-valued firing-rate model that approximates the dynamics of spiking networks.

    Science.gov (United States)

    Schaffer, Evan S; Ostojic, Srdjan; Abbott, L F

    2013-10-01

    Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons.

  20. Exporting embedded in culture and transnational networks around entrepreneurs

    DEFF Research Database (Denmark)

    Ashourizadeh, Shayegheh; Schøtt, Thomas

    2016-01-01

    Networking for advice was surveyed in the Global Entrepreneurship Monitor in 61 societies with 52,968 entrepreneurs. Exporting greatly benefits from transnational networks around entrepreneurs and also from networking in the market, professions and work-place, but is impeded by networking for advice in the private sphere. This dynamic unfolds in the context of culture, which expectedly moderates the benefit of networks for exporting. Exporting is embedded in culture in the way that the benefits of transnational networking for exporting are higher in secular-rational culture than in traditional culture. This study generalises to entrepreneurs in the world and is a first to account for the embedding of exporting in transnational advisory networks in combination with culture.

  1. Embedded generation connection incentives for distribution network operators

    Energy Technology Data Exchange (ETDEWEB)

    Williams, P.; Andrews, S.

    2002-07-01

    This is the final report with respect to work commissioned by the Department of Trade and Industry (DTI) as part of the New and Renewable Energy Programme into incentives for distribution network operators (DNOs) for the connection of embedded generation. This report, which incorporates the contents of the interim report submitted in February 2002, considers the implications of changes in the structure and regulation of the UK electricity industry for the successful technical and commercial integration of embedded generation into distribution networks. The report examines: the obligations of public electricity suppliers (PESs); current DNO practices regarding the connection of embedded generation; the changes introduced by the Utilities Act 2000, including the impact of new obligations placed on DNOs on the connection of embedded generation and the requirements of the new Electricity Distribution Standard Licence conditions; and problems and prospects for DNO incentives.

  2. A reanalysis of “Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons” [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Rainer Engelken

    2016-08-01

    Full Text Available Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.

  3. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks

    Directory of Open Access Journals (Sweden)

    Runchun Mark Wang

    2015-05-01

    Full Text Available We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted and/or delayed pre-synaptic spike to the target synapse in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.

  4. A compound memristive synapse model for statistical learning through STDP in spiking neural networks

    Directory of Open Access Journals (Sweden)

    Johannes eBill

    2014-12-01

    Full Text Available Memristors have recently emerged as promising circuit elements to mimic the function of biological synapses in neuromorphic computing. The fabrication of reliable nanoscale memristive synapses, that feature continuous conductance changes based on the timing of pre- and postsynaptic spikes, has however turned out to be challenging. In this article, we propose an alternative approach, the compound memristive synapse, that circumvents this problem by the use of memristors with binary memristive states. A compound memristive synapse employs multiple bistable memristors in parallel to jointly form one synapse, thereby providing a spectrum of synaptic efficacies. We investigate the computational implications of synaptic plasticity in the compound synapse by integrating the recently observed phenomenon of stochastic filament formation into an abstract model of stochastic switching. Using this abstract model, we first show how standard pulsing schemes give rise to spike-timing dependent plasticity (STDP) with a stabilizing weight dependence in compound synapses. In a next step, we study unsupervised learning with compound synapses in networks of spiking neurons organized in a winner-take-all architecture. Our theoretical analysis reveals that compound-synapse STDP implements generalized Expectation-Maximization in the spiking network. Specifically, the emergent synapse configuration represents the most salient features of the input distribution in a Mixture-of-Gaussians generative model. Furthermore, the network’s spike response to spiking input streams approximates a well-defined Bayesian posterior distribution. We show in computer simulations how such networks learn to represent high-dimensional distributions over images of handwritten digits with high fidelity even in presence of substantial device variations and under severe noise conditions. Therefore, the compound memristive synapse may provide a synaptic design principle for future neuromorphic architectures.

  5. Spatial network surrogates for disentangling complex system structure from spatial embedding of nodes

    Science.gov (United States)

    Wiedermann, Marc; Donges, Jonathan F.; Kurths, Jürgen; Donner, Reik V.

    2016-04-01

    Networks with nodes embedded in a metric space have gained increasing interest in recent years. The effects of spatial embedding on the networks' structural characteristics, however, are rarely taken into account when studying their macroscopic properties. Here, we propose a hierarchy of null models to generate random surrogates from a given spatially embedded network that can preserve certain global and local statistics associated with the nodes' embedding in a metric space. Comparing the original network's and the resulting surrogates' global characteristics allows one to quantify to what extent these characteristics are already predetermined by the spatial embedding of the nodes and links. We apply our framework to various real-world spatial networks and show that the proposed models capture macroscopic properties of the networks under study much better than standard random network models that do not account for the nodes' spatial embedding. Depending on the actual performance of the proposed null models, the networks are categorized into different classes. Since many real-world complex networks are in fact spatial networks, the proposed approach is relevant for disentangling the underlying complex system structure from spatial embedding of nodes in many fields, ranging from social systems over infrastructure and neurophysiology to climatology.
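    One simple spatial null model in this spirit is sketched below on synthetic data (it is not the authors' full hierarchy of surrogates, and the tolerance is an arbitrary choice): degree-preserving double-edge swaps are accepted only if they leave the total geographic length of the rewired links roughly unchanged, so the degree sequence and, approximately, the link-length distribution are both preserved.

```python
# Toy sketch of one spatial null model (not the authors' full hierarchy of surrogates):
# degree-preserving double-edge swaps accepted only if the rewired links keep roughly
# the same total geographic length.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N = 100
pos = {i: rng.random(2) for i in range(N)}
G = nx.random_geometric_graph(N, 0.2, pos=pos, seed=1)
dist = lambda u, v: np.linalg.norm(np.asarray(pos[u]) - np.asarray(pos[v]))

surrogate = G.copy()
for _ in range(10 * surrogate.number_of_edges()):
    (a, b), (c, d) = rng.choice(np.array(list(surrogate.edges())), 2, replace=False)
    if len({a, b, c, d}) < 4 or surrogate.has_edge(a, d) or surrogate.has_edge(c, b):
        continue
    old = dist(a, b) + dist(c, d)
    new = dist(a, d) + dist(c, b)
    if abs(new - old) < 0.05:                       # tolerance on total link length
        surrogate.remove_edges_from([(a, b), (c, d)])
        surrogate.add_edges_from([(a, d), (c, b)])

print(G.number_of_edges(), surrogate.number_of_edges())   # edge count is preserved
```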

  6. Extracting functionally feedforward networks from a population of spiking neurons.

    Science.gov (United States)

    Vincent, Kathleen; Tauskela, Joseph S; Thivierge, Jean-Philippe

    2012-01-01

    Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABA(A) receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV
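    The snippet below illustrates one generic way to quantify functionally feedforward structure with a Schur decomposition (a construction inspired by the abstract, not necessarily the authors' exact metric): the strictly upper-triangular part of the Schur form captures interactions between Schur modes, and its relative weight is taken as the feedforward strength.

```python
# Generic sketch (not necessarily the paper's exact metric): Schur-decompose a
# connectivity matrix and measure the relative weight of the strictly upper-triangular
# part of the Schur form as a proxy for functionally feedforward structure.
import numpy as np
from scipy.linalg import schur

def feedforward_strength(W):
    T, Z = schur(W, output="complex")        # W = Z T Z^H with T upper triangular
    ff = np.triu(T, k=1)                     # interactions between Schur modes
    return np.linalg.norm(ff) / np.linalg.norm(T)

rng = np.random.default_rng(0)
n = 50
chain = np.diag(np.ones(n - 1), k=1)                 # strictly feedforward chain
random_net = rng.standard_normal((n, n)) / np.sqrt(n)
print("chain:", round(feedforward_strength(chain), 2),
      "random:", round(feedforward_strength(random_net), 2))
```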

  7. An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data

    Directory of Open Access Journals (Sweden)

    Evangelos Stromatias

    2017-06-01

    Full Text Available This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.

  8. An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data.

    Science.gov (United States)

    Stromatias, Evangelos; Soto, Miguel; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabé

    2017-01-01

    This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.
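    The sketch below reproduces the general recipe on synthetic spike counts (not the authors' code or datasets): per-sample spike-count histograms from a prior SNN layer serve as feature vectors for a softmax classifier trained with stochastic gradient descent.

```python
# Small sketch of the general recipe (synthetic spike counts, not the authors' code or
# datasets): spike-count histograms are used as features for an SGD-trained softmax.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_units, n_classes = 600, 64, 3
labels = rng.integers(0, n_classes, n_samples)
prototypes = rng.random((n_classes, n_units)) * 5            # class-dependent mean rates
X = rng.poisson(prototypes[labels]).astype(float)            # synthetic spike histograms

W = np.zeros((n_units, n_classes))
b = np.zeros(n_classes)
eta = 0.01
for epoch in range(30):
    for i in rng.permutation(n_samples):
        logits = X[i] @ W + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        p[labels[i]] -= 1.0                                  # cross-entropy gradient w.r.t. logits
        W -= eta * np.outer(X[i], p)
        b -= eta * p

pred = np.argmax(X @ W + b, axis=1)
print("training accuracy:", (pred == labels).mean())
```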

  9. Multi-layer network utilizing rewarded spike time dependent plasticity to learn a foraging task.

    Directory of Open Access Journals (Sweden)

    Pavel Sanda

    2017-09-01

    Full Text Available Neural networks with a single plastic layer employing reward modulated spike time dependent plasticity (STDP) are capable of learning simple foraging tasks. Here we demonstrate advanced pattern discrimination and continuous learning in a network of spiking neurons with multiple plastic layers. The network utilized both reward modulated and non-reward modulated STDP and implemented multiple mechanisms for homeostatic regulation of synaptic efficacy, including heterosynaptic plasticity, gain control, output balancing, activity normalization of rewarded STDP and hard limits on synaptic strength. We found that addition of a hidden layer of neurons employing non-rewarded STDP created neurons that responded to the specific combinations of inputs and thus performed basic classification of the input patterns. When combined with a following layer of neurons implementing rewarded STDP, the network was able to learn, despite the absence of labeled training data, discrimination between rewarding patterns and the patterns designated as punishing. Synaptic noise allowed for trial-and-error learning that helped to identify the goal-oriented strategies which were effective in task solving. The study predicts a critical set of properties of the spiking neuronal network with STDP that was sufficient to solve a complex foraging task involving pattern classification and decision making.

  10. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Directory of Open Access Journals (Sweden)

    Yoonsik Shim

    2016-10-01

    Full Text Available We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  11. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Science.gov (United States)

    Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil

    2016-10-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  12. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    Science.gov (United States)

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows the dynamics of a biologically inspired neural network to be steered. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other works, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system.

  13. Silicon synaptic transistor for hardware-based spiking neural network and neuromorphic system

    Science.gov (United States)

    Kim, Hyungjin; Hwang, Sungmin; Park, Jungjin; Park, Byung-Gook

    2017-10-01

    Brain-inspired neuromorphic systems have attracted much attention as new computing paradigms for power-efficient computation. Here, we report a silicon synaptic transistor with two electrically independent gates to realize a hardware-based neural network system without any switching components. The spike-timing dependent plasticity characteristics of the synaptic devices are measured and analyzed. With the help of the device model based on the measured data, the pattern recognition capability of the hardware-based spiking neural network systems is demonstrated using the Modified National Institute of Standards and Technology handwritten dataset. By comparing systems with and without inhibitory synapse part, it is confirmed that the inhibitory synapse part is an essential element in obtaining effective and high pattern classification capability.

  14. A stochastic-field description of finite-size spiking neural networks.

    Science.gov (United States)

    Dumont, Grégory; Payeur, Alexandre; Longtin, André

    2017-08-01

    Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity, that is, the density of active neurons per unit time, is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics.

  15. Learning Universal Computations with Spikes

    Science.gov (United States)

    Thalmeier, Dominik; Uhlmann, Marvin; Kappen, Hilbert J.; Memmesheimer, Raoul-Martin

    2016-01-01

    Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves to substrates of powerful general purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows the learning of even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them. PMID:27309381

  16. Evolving Spiking Neural Networks for Recognition of Aged Voices.

    Science.gov (United States)

    Silva, Marco; Vellasco, Marley M B R; Cataldo, Edson

    2017-01-01

    The aging of the voice, known as presbyphonia, is a natural process that can cause great change in vocal quality of the individual. This is a relevant problem to those people who use their voices professionally, and its early identification can help determine a suitable treatment to avoid its progress or even to eliminate the problem. This work focuses on the development of a new model for the identification of aging voices (independently of their chronological age), using as input attributes parameters extracted from the voice and glottal signals. The proposed model, named Quantum binary-real evolving Spiking Neural Network (QbrSNN), is based on spiking neural networks (SNNs), with an unsupervised training algorithm, and a Quantum-Inspired Evolutionary Algorithm that automatically determines the most relevant attributes and the optimal parameters that configure the SNN. The QbrSNN model was evaluated in a database composed of 120 records, containing samples from three groups of speakers. The results obtained indicate that the proposed model provides better accuracy than other approaches, with fewer input attributes.

  17. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves.

    Science.gov (United States)

    Paraskevov, A V; Zendrikov, D K

    2017-03-23

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from where the synchronous spiking activity starts propagating in the network typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but are different for different networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore, one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons at their random homogeneous distribution typical for the experiments in vitro do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.
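    Only the wiring rule described in the abstract is sketched below (the spiking dynamics of the culture model are omitted and the length constant is a placeholder): neurons scattered in a unit square are connected with probability exp(-d/lambda), and the result can be contrasted with a distance-independent rule of matched mean connection probability.

```python
# Minimal sketch of the connectivity rule only (no spiking dynamics): distance-dependent
# connection probability exp(-d / lambda_) versus a matched distance-independent rule.
import numpy as np

rng = np.random.default_rng(0)
N, lam = 500, 0.1
xy = rng.random((N, 2))
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)

p_dist = np.exp(-d / lam)                       # distance-dependent rule
np.fill_diagonal(p_dist, 0.0)
A_dist = rng.random((N, N)) < p_dist

p_mean = p_dist.mean()                          # matched distance-independent rule
A_flat = rng.random((N, N)) < p_mean
np.fill_diagonal(A_flat, False)

print("mean out-degree, distance-dependent:", A_dist.sum(1).mean())
print("mean out-degree, distance-independent:", A_flat.sum(1).mean())
```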

  18. Synaptic convergence regulates synchronization-dependent spike transfer in feedforward neural networks.

    Science.gov (United States)

    Sailamul, Pachaya; Jang, Jaeson; Paik, Se-Bum

    2017-12-01

    Correlated neural activities such as synchronizations can significantly alter the characteristics of spike transfer between neural layers. However, it is not clear how this synchronization-dependent spike transfer can be affected by the structure of convergent feedforward wiring. To address this question, we implemented computer simulations of model neural networks: a source and a target layer connected with different types of convergent wiring rules. In the Gaussian-Gaussian (GG) model, both the connection probability and the connection strength follow a Gaussian profile as a function of spatial distance. In the Uniform-Constant (UC) and Uniform-Exponential (UE) models, the connection probability density is a uniform constant within a certain range, but the connection strength is set as a constant value or an exponentially decaying function, respectively. We then examined how the spike transfer function is modulated under these conditions, while static or synchronized input patterns were introduced to simulate different levels of feedforward spike synchronization. We observed that the synchronization-dependent modulation of the transfer function appeared noticeably different for each convergence condition. The modulation of the spike transfer function was largest in the UC model, and smallest in the UE model. Our analysis showed that this difference was induced by the different spike weight distributions that were generated from convergent synapses in each model. Our results suggest that the structure of feedforward convergence is a crucial factor for correlation-dependent spike control and must therefore be considered to understand the mechanism of information transfer in the brain.
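    The three convergence conditions can be caricatured as below (one-dimensional layers and placeholder parameters, not the paper's exact models): Gaussian probability with Gaussian strength (GG), uniform probability with constant strength (UC), and uniform probability with exponentially decaying strength (UE); the spread of the resulting nonzero weights already differs markedly between the rules.

```python
# Minimal caricature of the three convergent wiring rules (placeholder parameters,
# one-dimensional layers): GG, UC and UE, compared by the spread of nonzero weights.
import numpy as np

rng = np.random.default_rng(0)
n_src = n_tgt = 200
x_src = np.linspace(0, 1, n_src)
x_tgt = np.linspace(0, 1, n_tgt)
d = np.abs(x_tgt[:, None] - x_src[None, :])      # target x source distances

def wire(rule, sigma=0.05, radius=0.1, w0=1.0):
    if rule == "GG":
        p = np.exp(-d**2 / (2 * sigma**2))       # Gaussian probability
        w = w0 * np.exp(-d**2 / (2 * sigma**2))  # Gaussian strength
    elif rule == "UC":
        p = (d < radius) * 0.5                   # uniform probability in a range
        w = np.full_like(d, w0)                  # constant strength
    elif rule == "UE":
        p = (d < radius) * 0.5
        w = w0 * np.exp(-d / sigma)              # exponentially decaying strength
    return (rng.random(d.shape) < p) * w         # weighted feedforward connectivity

for rule in ("GG", "UC", "UE"):
    W = wire(rule)
    weights = W[W > 0]
    print(rule, "CV of nonzero weights:", round(weights.std() / weights.mean(), 2))
```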

  19. Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition.

    Science.gov (United States)

    Kasabov, Nikola; Dhoble, Kshitij; Nuntalid, Nuttapod; Indiveri, Giacomo

    2013-05-01

    On-line learning and recognition of spatio- and spectro-temporal data (SSTD) is a very challenging task and an important one for the future development of autonomous machine learning systems with broad applications. Models based on spiking neural networks (SNN) have already proved their potential in capturing spatial and temporal data. One class of them, the evolving SNN (eSNN), uses a one-pass rank-order learning mechanism and a strategy to evolve a new spiking neuron and new connections to learn new patterns from incoming data. So far these networks have been mainly used for fast image and speech frame-based recognition. Alternative spike-time learning methods, such as Spike-Timing Dependent Plasticity (STDP) and its variant Spike Driven Synaptic Plasticity (SDSP), can also be used to learn spatio-temporal representations, but they usually require many iterations in an unsupervised or semi-supervised mode of learning. This paper introduces a new class of eSNN, dynamic eSNN, that utilises both rank-order learning and dynamic synapses to learn SSTD in a fast, on-line mode. The paper also introduces a new model, called deSNN, that utilises rank-order learning and SDSP spike-time learning in unsupervised, supervised, or semi-supervised modes. SDSP learning is used to dynamically evolve the network, changing connection weights to capture spatio-temporal spike data clusters both during training and during recall. The new deSNN model is first illustrated on simple examples and then applied to two case study applications: (1) moving object recognition using address-event representation (AER) with data collected using a silicon retina device; (2) EEG SSTD recognition for brain-computer interfaces. The deSNN models resulted in superior performance in terms of accuracy and speed when compared with other SNN models that use either rank-order or STDP learning. The reason is that the deSNN makes use of both the information contained in the order of the first input spikes
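
    The rank-order part of the deSNN idea can be illustrated in a few lines: earlier input spikes receive geometrically larger initial weights. The sketch below shows only that weight-initialization step; the modulation factor mod and the handling of silent synapses are assumptions, and the subsequent SDSP-driven weight drift during training is omitted.

```python
import numpy as np

def rank_order_weights(spike_times, mod=0.9):
    """Rank-order initialization in the spirit of eSNN-type models: the synapse
    that spikes first gets the largest weight, later spikes get geometrically
    smaller weights (w_i = mod**rank_i).  spike_times is a 1-D array with one
    entry per input synapse (np.inf for synapses that never spiked)."""
    spike_times = np.asarray(spike_times, dtype=float)
    ranks = np.argsort(np.argsort(spike_times))          # rank 0 = earliest spike
    w = mod ** ranks.astype(float)
    w[np.isinf(spike_times)] = 0.0                       # silent synapses get no weight
    return w

times = np.array([5.0, 1.0, np.inf, 3.0])   # ms, illustrative
print(rank_order_weights(times))             # earliest spike -> weight 1.0
```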

  20. Reconstruction of sparse connectivity in neural networks from spike train covariances

    International Nuclear Information System (INIS)

    Pernice, Volker; Rotter, Stefan

    2013-01-01

    The inference of causation from correlation is in general highly problematic. Correspondingly, it is difficult to infer the existence of physical synaptic connections between neurons from correlations in their activity. Covariances in neural spike trains and their relation to network structure have been the subject of intense research, both experimentally and theoretically. The influence of recurrent connections on covariances can be characterized directly in linear models, where connectivity in the network is described by a matrix of linear coupling kernels. However, as indirect connections also give rise to covariances, the inverse problem of inferring network structure from covariances can generally not be solved unambiguously. Here we study to what degree this ambiguity can be resolved if the sparseness of neural networks is taken into account. To reconstruct a sparse network, we determine the minimal set of linear couplings consistent with the measured covariances by minimizing the L1 norm of the coupling matrix under appropriate constraints. Contrary to intuition, after stochastic optimization of the coupling matrix, the resulting estimate of the underlying network is directed, despite the fact that a symmetric matrix of count covariances is used for inference. The performance of the new method is best if connections are neither exceedingly sparse, nor too dense, and it is easily applicable for networks of a few hundred nodes. Full coupling kernels can be obtained from the matrix of full covariance functions. We apply our method to networks of leaky integrate-and-fire neurons in an asynchronous–irregular state, where spike train covariances are well described by a linear model. (paper)
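
    As a loose illustration of how an L1 penalty promotes sparse coupling estimates, the sketch below runs a column-wise Lasso regression on a covariance matrix. This is a generic stand-in, not the authors' constrained optimization; the regression target, the penalty strength alpha and the surrogate data are all assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_coupling_from_covariance(C, alpha=0.01):
    """Illustrative L1-penalized reconstruction: look for a sparse coupling
    matrix W such that C is roughly reproduced by W @ C, solving one Lasso
    regression per row.  A simplified stand-in, not the paper's method."""
    n = C.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        model.fit(C, C[i])          # regress row i of C on all columns of C
        W[i] = model.coef_
    return W

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))     # surrogate "spike count" data
C = np.cov(X, rowvar=False)
W_hat = sparse_coupling_from_covariance(C)
print("fraction of non-zero couplings:", np.mean(np.abs(W_hat) > 1e-6))
```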

  1. Synchronization in a non-uniform network of excitatory spiking neurons

    Science.gov (United States)

    Echeveste, Rodrigo; Gros, Claudius

    Spontaneous synchronization of pulse coupled elements is ubiquitous in nature and seems to be of vital importance for life. Networks of pacemaker cells in the heart, extended populations of Southeast Asian fireflies, and neuronal oscillations in cortical networks are examples of this. In the present work, a rich repertoire of dynamical states with different degrees of synchronization is found in a network of excitatory-only spiking neurons connected in a non-uniform fashion. In particular, uncorrelated and partially correlated states are found without the need for inhibitory neurons or external currents. The phase transitions between these states, as well as the robustness, stability, and response of the network to external stimuli, are studied.

  2. Network Multifunctional Substation with Embedded System in Coal Mine

    Institute of Scientific and Technical Information of China (English)

    MENG Fan-rong; HUO Yan; ZHOU Yong

    2006-01-01

    In order to solve the problems of mine monitoring and control systems during the construction of digital mines, network and embedded technologies were combined to propose and design the core access equipment of a mine monitoring and control system: the architecture of an embedded, networked multifunctional substation for mining. This paper presents the design of the substation's hardware and software in detail. Finally, the system's efficiency was validated through experimentation.

  3. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    Science.gov (United States)

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
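
    A minimal Brian2 sketch (one of the simulators compared above) shows how such a plastic-network benchmark can be timed; network size, input rate and plasticity constants are illustrative assumptions rather than the models used in the study.

```python
import time
from brian2 import (NeuronGroup, Synapses, PoissonInput,
                    ms, mV, second, Hz, run, prefs)

prefs.codegen.target = 'numpy'   # pure-Python code generation, no compiler needed

# Medium-sized leaky integrate-and-fire network with pair-based STDP
# (sizes, rates and plasticity constants are arbitrary illustrative choices).
G = NeuronGroup(1000, 'dv/dt = -v / (10*ms) : volt',
                threshold='v > 15*mV', reset='v = 0*mV', method='exact')
G.v = 'rand() * 15*mV'
drive = PoissonInput(G, 'v', N=1000, rate=5*Hz, weight=0.5*mV)

S = Synapses(G, G,
             '''w : volt
                dapre/dt  = -apre  / (20*ms) : volt (event-driven)
                dapost/dt = -apost / (20*ms) : volt (event-driven)''',
             on_pre='''v_post += w
                       apre += 0.01*mV
                       w = clip(w + apost, 0*mV, 1*mV)''',
             on_post='''apost -= 0.0105*mV
                        w = clip(w + apre, 0*mV, 1*mV)''')
S.connect(p=0.02)
S.w = '0.5*mV'

t0 = time.time()
run(1*second)                    # one second of biological time
print(f"wall-clock time: {time.time() - t0:.1f} s")
```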

  4. Formation of DNA-network embedding ferromagnetic Cobalt nano-particles

    Science.gov (United States)

    Kanki, Teruo; Tanaka, Hidekazu; Shirakawa, Hideaki; Sacho, Yu; Taniguchi, Masateru; Lee, Hea-Yeon; Kawai, Tomoji; Kang, Nam-Jung; Chen, Jinwoo

    2002-03-01

    DNA can be regarded as a naturally occurring and highly specific functional biopolymer and as a fine nano-wire. Moreover, it has been found that large-scale DNA networks can be fabricated on mica surfaces. Using this network structure, we expect to construct nano-scale assemblies of functional nano-particles, for example ferromagnetic Co nano-particles, toward nano-scale spin electronics based on DNA circuits. When we formed a DNA network from a 250mg/ml poly(dG)-poly(dC) DNA solution containing ferromagnetic Co nano-particles (diameter of 12nm), we confirmed by atomic force microscopy a DNA network structure embedding Co nano-particles (height of about 12nm). On the other hand, when we used a 100mg/ml DNA solution, the DNA could not connect, and many Co nano-particles remained without being embedded.

  5. Dynamics and spike trains statistics in conductance-based integrate-and-fire neural networks with chemical and electric synapses

    International Nuclear Information System (INIS)

    Cofré, Rodrigo; Cessac, Bruno

    2013-01-01

    We investigate the effect of electric synapses (gap junctions) on collective neuronal dynamics and spike statistics in a conductance-based integrate-and-fire neural network, driven by Brownian noise, where conductances depend upon spike history. We compute explicitly the time evolution operator and show that, given the spike-history of the network and the membrane potentials at a given time, the further dynamical evolution can be written in a closed form. We show that spike train statistics is described by a Gibbs distribution whose potential can be approximated with an explicit formula, when the noise is weak. This potential form encompasses existing models for spike train statistics analysis such as maximum entropy models or generalized linear models (GLM). We also discuss the different types of correlations: those induced by a shared stimulus and those induced by neuronal interactions.

  6. An Embedded Multi-Agent Systems Based Industrial Wireless Sensor Network.

    Science.gov (United States)

    Taboun, Mohammed S; Brennan, Robert W

    2017-09-14

    With the emergence of cyber-physical systems, there has been a growing interest in network-connected devices. One of the key requirements of a cyber-physical device is the ability to sense its environment. Wireless sensor networks are a widely-accepted solution for this requirement. In this study, an embedded multi-agent systems-managed wireless sensor network is presented. A novel architecture is proposed, along with a novel wireless sensor network architecture. Active and passive wireless sensor node types are defined, along with their communication protocols, and two application-specific examples are presented. A series of three experiments is conducted to evaluate the performance of the agent-embedded wireless sensor network.

  7. Virtual Network Embedding via Monte Carlo Tree Search.

    Science.gov (United States)

    Haeri, Soroush; Trajkovic, Ljiljana

    2018-02-01

    Network virtualization helps overcome shortcomings of the current Internet architecture. The virtualized network architecture enables coexistence of multiple virtual networks (VNs) on an existing physical infrastructure. The VN embedding (VNE) problem, which deals with the embedding of VN components onto a physical network, is known to be NP-hard. In this paper, we propose two VNE algorithms: MaVEn-M and MaVEn-S. MaVEn-M employs the multicommodity flow algorithm for virtual link mapping while MaVEn-S uses the shortest-path algorithm. They formalize the virtual node mapping problem by using the Markov decision process (MDP) framework and devise action policies (node mappings) for the proposed MDP using the Monte Carlo tree search algorithm. Service providers may adjust the execution time of the MaVEn algorithms based on the traffic load of VN requests. The objective of the algorithms is to maximize the profit of infrastructure providers. We develop a discrete event VNE simulator to implement and evaluate the performance of MaVEn-M, MaVEn-S, and several recently proposed VNE algorithms. We introduce profitability as a new performance metric that captures both acceptance and revenue to cost ratios. Simulation results show that the proposed algorithms find more profitable solutions than the existing algorithms. Given additional computation time, they further improve embedding solutions.
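
    The core of the Monte Carlo tree search used for virtual node mapping is the UCT selection rule plus reward backpropagation. The fragment below sketches just those two steps with abstract states and rewards; class and parameter names are placeholders, not the MaVEn implementation.

```python
import math

class MCTSNode:
    """Minimal UCT-style node: illustrates selection and backpropagation only
    (expansion and rollout are omitted; states, actions and rewards are abstract)."""
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}          # action -> MCTSNode
        self.visits, self.value = 0, 0.0

    def uct_select(self, c=1.4):
        # Pick the child maximizing the UCT score: exploitation + exploration bonus.
        return max(self.children.values(),
                   key=lambda ch: ch.value / (ch.visits + 1e-9)
                   + c * math.sqrt(math.log(self.visits + 1) / (ch.visits + 1e-9)))

    def backpropagate(self, reward):
        # Propagate the simulation reward (e.g. embedding profit) up to the root.
        node = self
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
```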

  8. Mechanisms of Winner-Take-All and Group Selection in Neuronal Spiking Networks.

    Science.gov (United States)

    Chen, Yanqing

    2017-01-01

    A major function of central nervous systems is to discriminate different categories or types of sensory input. Neuronal networks accomplish such tasks by learning different sensory maps at several stages of neural hierarchy, such that different neurons fire selectively to reflect different internal or external patterns and states. The exact mechanisms of such map formation processes in the brain are not completely understood. Here we study the mechanism by which a simple recurrent/reentrant neuronal network accomplishes group selection and discrimination of different inputs in order to generate sensory maps. We describe the conditions and mechanism of transition from a rhythmic epileptic state (in which all neurons fire synchronously and indiscriminately to any input) to a winner-take-all state in which only a subset of neurons fire for a specific input. We prove an analytic condition under which a stable bump solution and a winner-take-all state can emerge from the local recurrent excitation-inhibition interactions in a three-layer spiking network with distinct excitatory and inhibitory populations, and demonstrate the importance of surround inhibitory connection topology for the stability of dynamic patterns in spiking neural networks.

  9. Scheduling of network access for feedback-based embedded systems

    Science.gov (United States)

    Liberatore, Vincenzo

    2002-07-01

    nd communication capabilities. Examples range from smart dust embedded in building materials to networks of appliances in the home. Embedded devices will be deployed in unprecedented numbers, will enable pervasive distributed computing, and will radically change the way people interact with the surrounding environment [EGH00a]. The paper targets embedded systems and their real-time (RT) communication requirements. RT requirements arise from the

  10. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building up computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). The biologically inspired self-organized neural networks with neural plasticity can enhance the capability of computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Despite a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multi-neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different forms of neural plasticity rules and understanding how structures and dynamics of neural networks shape the computational performance. In this paper, we propose a novel approach to develop the models of LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insights to the optimization of computational models of spiking neural networks with neural plasticity.
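
    The two plasticity rules combined in the model can be sketched as simple update functions: a pair-based STDP window for the recurrent weights and a toy intrinsic-plasticity rule that nudges each neuron's excitability toward a target rate. All constants below are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window: potentiation when the presynaptic spike precedes
    the postsynaptic spike (delta_t = t_post - t_pre > 0), depression otherwise."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

def ip_threshold_update(theta, rate, target_rate=5.0, eta=0.001):
    """Toy intrinsic-plasticity rule: raise a neuron's firing threshold when its
    rate exceeds a target level, lower it otherwise (illustrative form only)."""
    return theta + eta * (rate - target_rate)

print(stdp_dw([-10.0, 5.0]))          # depression, then potentiation
print(ip_threshold_update(1.0, 8.0))  # threshold increases for an over-active neuron
```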

  11. Anti-correlations in the degree distribution increase stimulus detection performance in noisy spiking neural networks.

    Science.gov (United States)

    Martens, Marijn B; Houweling, Arthur R; E Tiesinga, Paul H

    2017-02-01

    Neuronal circuits in the rodent barrel cortex are characterized by stable low firing rates. However, recent experiments show that short spike trains elicited by electrical stimulation in single neurons can induce behavioral responses. Hence, the underlying neural networks provide stability against internal fluctuations in the firing rate, while simultaneously making the circuits sensitive to small external perturbations. Here we studied whether stability and sensitivity are affected by the connectivity structure in recurrently connected spiking networks. We found that anti-correlation between the number of afferent (in-degree) and efferent (out-degree) synaptic connections of neurons increases stability against pathological bursting, relative to networks where the degrees were either positively correlated or uncorrelated. In the stable network state, stimulation of a few cells could lead to a detectable change in the firing rate. To quantify the ability of networks to detect the stimulation, we used a receiver operating characteristic (ROC) analysis. For a given level of background noise, networks with anti-correlated degrees displayed the lowest false positive rates, and consequently had the highest stimulus detection performance. We propose that anti-correlation in the degree distribution may be a computational strategy employed by sensory cortices to increase the detectability of external stimuli. We show that networks with anti-correlated degrees can in principle be formed by applying learning rules comprised of a combination of spike-timing dependent plasticity, homeostatic plasticity and pruning to networks with uncorrelated degrees. To test our prediction we suggest a novel experimental method to estimate correlations in the degree distribution.
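
    One simple way to obtain degree sequences with a prescribed sign of in/out-degree correlation is to sort one sequence against the other, as sketched below; this sorting trick is purely illustrative and is not the construction or learning-rule procedure used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def degree_sequences(n=1000, mean_deg=50, correlation="anti"):
    """In- and out-degree sequences whose correlation is negative ('anti'),
    positive ('pos') or roughly zero ('none')."""
    in_deg = rng.poisson(mean_deg, n)
    out_deg = rng.poisson(mean_deg, n)
    if correlation == "anti":      # pair the largest in-degrees with the smallest out-degrees
        in_deg, out_deg = np.sort(in_deg), np.sort(out_deg)[::-1]
    elif correlation == "pos":
        in_deg, out_deg = np.sort(in_deg), np.sort(out_deg)
    return in_deg, out_deg

for kind in ("anti", "none", "pos"):
    kin, kout = degree_sequences(correlation=kind)
    print(kind, np.corrcoef(kin, kout)[0, 1].round(2))
```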

  12. A novel analytical characterization for short-term plasticity parameters in spiking neural networks.

    Science.gov (United States)

    O'Brien, Michael J; Thibeault, Corey M; Srinivasa, Narayan

    2014-01-01

    Short-term plasticity (STP) is a phenomenon that widely occurs in the neocortex with implications for learning and memory. Based on a widely used STP model, we develop an analytical characterization of the STP parameter space to determine the nature of each synapse (facilitating, depressing, or both) in a spiking neural network based on presynaptic firing rate and the corresponding STP parameters. We demonstrate consistency with previous work by leveraging the power of our characterization to replicate the functional volumes that are integral for the previous network stabilization results. We then use our characterization to predict the precise transitional point from the facilitating regime to the depressing regime in a simulated synapse, suggesting in vitro experiments to verify the underlying STP model. We conclude the work by integrating our characterization into a framework for finding suitable STP parameters for self-sustaining random, asynchronous activity in a prescribed recurrent spiking neural network. The systematic process resulting from our analytical characterization improves the success rate of finding the requisite parameters for such networks by three orders of magnitude over a random search.
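
    The "widely used STP model" referred to above is commonly taken to be of the Tsodyks-Markram form; the sketch below implements one common formulation of it (the update order at a spike and all parameter values are assumptions) and returns the relative synaptic efficacy at each presynaptic spike.

```python
import numpy as np

def tsodyks_markram(spike_times, U=0.2, tau_d=200.0, tau_f=600.0):
    """One common Tsodyks-Markram formulation: x tracks available resources
    (depression), u the utilization (facilitation).  Times are in ms."""
    x, u, last_t = 1.0, U, None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)   # resources recover towards 1
            u = U + (u - U) * np.exp(-dt / tau_f)       # utilization decays towards U
        efficacies.append(u * x)
        x *= (1.0 - u)                                  # resources consumed by the spike
        u += U * (1.0 - u)                              # facilitation after the spike
        last_t = t
    return np.array(efficacies)

# A 50 Hz train: with these parameters the efficacy first facilitates,
# then depresses as resources deplete.
print(tsodyks_markram(np.arange(0.0, 200.0, 20.0)).round(3))
```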

  13. Inhibitory Synaptic Plasticity - Spike timing dependence and putative network function.

    Directory of Open Access Journals (Sweden)

    Tim P Vogels

    2013-07-01

    Full Text Available While the plasticity of excitatory synaptic connections in the brain has been widely studied, the plasticity of inhibitory connections is much less understood. Here, we present recent experimental and theoretical findings concerning the rules of spike timing-dependent inhibitory plasticity and their putative network function. This is a summary of a workshop at the COSYNE conference 2012.

  14. An Embedded Multi-Agent Systems Based Industrial Wireless Sensor Network

    Science.gov (United States)

    Brennan, Robert W.

    2017-01-01

    With the emergence of cyber-physical systems, there has been a growing interest in network-connected devices. One of the key requirements of a cyber-physical device is the ability to sense its environment. Wireless sensor networks are a widely-accepted solution for this requirement. In this study, an embedded multi-agent systems-managed wireless sensor network is presented. A novel architecture is proposed, along with a novel wireless sensor network architecture. Active and passive wireless sensor node types are defined, along with their communication protocols, and two application-specific examples are presented. A series of three experiments is conducted to evaluate the performance of the agent-embedded wireless sensor network. PMID:28906452

  15. Stimulus-dependent spiking relationships with the EEG

    Science.gov (United States)

    Snyder, Adam C.

    2015-01-01

    The development and refinement of noninvasive techniques for imaging neural activity is of paramount importance for human neuroscience. Currently, the most accessible and popular technique is electroencephalography (EEG). However, nearly all of what we know about the neural events that underlie EEG signals is based on inference, because of the dearth of studies that have simultaneously paired EEG recordings with direct recordings of single neurons. From the perspective of electrophysiologists there is growing interest in understanding how spiking activity coordinates with large-scale cortical networks. Evidence from recordings at both scales highlights that sensory neurons operate in very distinct states during spontaneous and visually evoked activity, which appear to form extremes in a continuum of coordination in neural networks. We hypothesized that individual neurons have idiosyncratic relationships to large-scale network activity indexed by EEG signals, owing to the neurons' distinct computational roles within the local circuitry. We tested this by recording neuronal populations in visual area V4 of rhesus macaques while we simultaneously recorded EEG. We found substantial heterogeneity in the timing and strength of spike-EEG relationships and that these relationships became more diverse during visual stimulation compared with the spontaneous state. The visual stimulus apparently shifts V4 neurons from a state in which they are relatively uniformly embedded in large-scale network activity to a state in which their distinct roles within the local population are more prominent, suggesting that the specific way in which individual neurons relate to EEG signals may hold clues regarding their computational roles. PMID:26108954

  16. Analog memristive synapse in spiking networks implementing unsupervised learning

    Directory of Open Access Journals (Sweden)

    Erika Covi

    2016-10-01

    Full Text Available Emerging brain-inspired architectures call for devices that can emulate the functionality of biological synapses in order to implement new efficient computational schemes able to solve ill-posed problems. Various devices and solutions are still under investigation and, in this respect, a challenge is opened to the researchers in the field. Indeed, the optimal candidate is a device able to reproduce the complete functionality of a synapse, i.e. the typical synaptic process underlying learning in biological systems (activity-dependent synaptic plasticity). This implies a device able to change its resistance (synaptic strength, or weight) upon proper electrical stimuli (synaptic activity) and showing several stable resistive states throughout its dynamic range (analog behavior). Moreover, it should be able to perform spike timing dependent plasticity (STDP), an associative homosynaptic plasticity learning rule based on the delay time between the two firing neurons the synapse is connected to. This rule is a fundamental learning protocol in state-of-the-art networks, because it allows unsupervised learning. Notwithstanding this fact, STDP-based unsupervised learning has been proposed several times mainly for binary synapses rather than multilevel synapses composed of many binary memristors. This paper proposes an HfO2-based analog memristor as a synaptic element which performs STDP within a small spiking neuromorphic network operating unsupervised learning for character recognition. The trained network is able to recognize five characters even when incomplete or noisy characters are displayed, and it is robust to a device-to-device variability of up to +/-30%.

  17. Analog Memristive Synapse in Spiking Networks Implementing Unsupervised Learning.

    Science.gov (United States)

    Covi, Erika; Brivio, Stefano; Serb, Alexander; Prodromakis, Themis; Fanciulli, Marco; Spiga, Sabina

    2016-01-01

    Emerging brain-inspired architectures call for devices that can emulate the functionality of biological synapses in order to implement new efficient computational schemes able to solve ill-posed problems. Various devices and solutions are still under investigation and, in this respect, a challenge is opened to the researchers in the field. Indeed, the optimal candidate is a device able to reproduce the complete functionality of a synapse, i.e., the typical synaptic process underlying learning in biological systems (activity-dependent synaptic plasticity). This implies a device able to change its resistance (synaptic strength, or weight) upon proper electrical stimuli (synaptic activity) and showing several stable resistive states throughout its dynamic range (analog behavior). Moreover, it should be able to perform spike timing dependent plasticity (STDP), an associative homosynaptic plasticity learning rule based on the delay time between the two firing neurons the synapse is connected to. This rule is a fundamental learning protocol in state-of-the-art networks, because it allows unsupervised learning. Notwithstanding this fact, STDP-based unsupervised learning has been proposed several times mainly for binary synapses rather than multilevel synapses composed of many binary memristors. This paper proposes an HfO2-based analog memristor as a synaptic element which performs STDP within a small spiking neuromorphic network operating unsupervised learning for character recognition. The trained network is able to recognize five characters even when incomplete or noisy images are displayed, and it is robust to a device-to-device variability of up to ±30%.

  18. ASSET: Analysis of Sequences of Synchronous Events in Massively Parallel Spike Trains

    Science.gov (United States)

    Canova, Carlos; Denker, Michael; Gerstein, George; Helias, Moritz

    2016-01-01

    With the ability to observe the activity from large numbers of neurons simultaneously using modern recording technologies, the chance to identify sub-networks involved in coordinated processing increases. Sequences of synchronous spike events (SSEs) constitute one type of such coordinated spiking that propagates activity in a temporally precise manner. The synfire chain was proposed as one potential model for such network processing. Previous work introduced a method for visualization of SSEs in massively parallel spike trains, based on an intersection matrix that contains in each entry the degree of overlap of active neurons in two corresponding time bins. Repeated SSEs are reflected in the matrix as diagonal structures of high overlap values. The method as such, however, leaves the task of identifying these diagonal structures to visual inspection rather than to a quantitative analysis. Here we present ASSET (Analysis of Sequences of Synchronous EvenTs), an improved, fully automated method which determines diagonal structures in the intersection matrix by a robust mathematical procedure. The method consists of a sequence of steps that i) assess which entries in the matrix potentially belong to a diagonal structure, ii) cluster these entries into individual diagonal structures and iii) determine the neurons composing the associated SSEs. We employ parallel point processes generated by stochastic simulations as test data to demonstrate the performance of the method under a wide range of realistic scenarios, including different types of non-stationarity of the spiking activity and different correlation structures. Finally, the ability of the method to discover SSEs is demonstrated on complex data from large network simulations with embedded synfire chains. Thus, ASSET represents an effective and efficient tool to analyze massively parallel spike data for temporal sequences of synchronous activity. PMID:27420734
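
    The intersection matrix at the heart of ASSET can be sketched in a few lines: bin all spike trains, then count, for every pair of time bins, how many neurons are active in both. The implementation below is a simplified illustration (bin size and surrogate data are arbitrary), not the full ASSET pipeline with its statistical testing and clustering steps.

```python
import numpy as np

def intersection_matrix(spike_trains, t_stop, bin_size):
    """Entry (i, j) counts how many neurons are active in both time bin i and
    time bin j.  spike_trains is a list of 1-D arrays of spike times; time
    units are arbitrary but must be consistent with t_stop and bin_size."""
    n_bins = int(np.ceil(t_stop / bin_size))
    B = np.zeros((len(spike_trains), n_bins), dtype=int)   # neurons x bins occupancy
    for k, st in enumerate(spike_trains):
        idx = np.clip((np.asarray(st) / bin_size).astype(int), 0, n_bins - 1)
        B[k, idx] = 1
    return B.T @ B                                          # bins x bins overlap counts

rng = np.random.default_rng(3)
trains = [np.sort(rng.uniform(0, 1000, 50)) for _ in range(20)]   # 20 neurons, 1 s
I = intersection_matrix(trains, t_stop=1000.0, bin_size=5.0)
print(I.shape, I.max())
```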

  19. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    Directory of Open Access Journals (Sweden)

    Stojan Jovanović

    2016-06-01

    Full Text Available The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e., which topological motifs) are responsible for their emergence. Comparing two different models of network topology (random networks of Erdős-Rényi type and networks with highly interconnected hubs), we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.

  20. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    Science.gov (United States)

    Jovanović, Stojan; Rotter, Stefan

    2016-06-01

    The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology (random networks of Erdős-Rényi type and networks with highly interconnected hubs), we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.

  1. Minimum curvilinearity to enhance topological prediction of protein interactions by network embedding

    KAUST Repository

    Cannistraci, Carlo

    2013-06-21

    Motivation: Most functions within the cell emerge thanks to protein-protein interactions (PPIs), yet experimental determination of PPIs is both expensive and time-consuming. PPI networks present significant levels of noise and incompleteness. Predicting interactions using only PPI-network topology (topological prediction) is difficult but essential when prior biological knowledge is absent or unreliable. Methods: Network embedding emphasizes the relations between network proteins embedded in a low-dimensional space, in which protein pairs that are closer to each other represent good candidate interactions. To achieve network denoising, which boosts prediction performance, we first applied minimum curvilinear embedding (MCE), and then adopted shortest path (SP) in the reduced space to assign likelihood scores to candidate interactions. Furthermore, we introduce (i) a new valid variation of MCE, named non-centred MCE (ncMCE); (ii) two automatic strategies for selecting the appropriate embedding dimension; and (iii) two new randomized procedures for evaluating predictions. Results: We compared our method against several unsupervised and supervisedly tuned embedding approaches and node neighbourhood techniques. Despite its computational simplicity, ncMCE-SP was the overall leader, outperforming the current methods in topological link prediction. Conclusion: Minimum curvilinearity is a valuable non-linear framework that we successfully applied to the embedding of protein networks for the unsupervised prediction of novel PPIs. The rationale for our approach is that biological and evolutionary information is imprinted in the non-linear patterns hidden behind the protein network topology, and can be exploited for predicting new protein links. The predicted PPIs represent good candidates for testing in high-throughput experiments or for exploitation in systems biology tools such as those used for network-based inference and prediction of disease-related functional modules. The

  2. Network-Embedded Management and Applications Understanding Programmable Networking Infrastructure

    CERN Document Server

    Wolter, Ralf

    2013-01-01

    Despite the explosion of networking services and applications in the past decades, the basic technological underpinnings of the Internet have remained largely unchanged. At its heart are special-purpose appliances that connect us to the digital world, commonly known as switches and routers. Now, however, the traditional framework is being increasingly challenged by new methods that are jostling for a position in the next-generation Internet. The concept of a network that is becoming more programmable is one of the aspects that are taking center stage. This opens new possibilities to embed software applications inside the network itself and to manage networks and communications services with unprecedented ease and efficiency. In this edited volume, distinguished experts take the reader on a tour of different facets of programmable network infrastructure and the applications that exploit it. Presenting the state of the art in network embedded management and applications and programmable network infrastructure, the book c...

  3. Co-design for an SoC embedded network controller

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    With the development of Ethernet systems and the growing capacity of modern silicon technology, embedded communication networks are playing an increasingly important role in embedded and safety-critical systems. Hardware/software co-design is a methodology for solving design problems in processor-based embedded systems. In this work, we implemented a new 1-cycle pipeline microprocessor and a fast Ethernet transceiver, established a low-cost, high-performance embedded network controller, and designed a TCP/IP stack to access the Internet. We first discuss the hardware/software architecture, and then the implementation of the whole system-on-a-chip on an Altera Stratix EP1S25F780C6 device. Using the FPGA environment and a SmartBit tester, we tested the system's throughput. Our simulation results showed that the maximum throughput of Ethernet packets is up to 7 Mbps, that of UDP packets is up to 5.8 Mbps, and that of TCP packets is up to 3.4 Mbps, which shows that this embedded system can easily transmit basic voice and video signals over Ethernet, and that a single chip enables many electronic devices to access the Internet directly with high performance.

  4. STDP-based spiking deep convolutional neural networks for object recognition.

    Science.gov (United States)

    Kheradpisheh, Saeed Reza; Ganjtabesh, Mohammad; Thorpe, Simon J; Masquelier, Timothée

    2018-03-01

    Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundred such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware
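
    The temporal coding scheme described above, in which more strongly activated neurons fire first, can be illustrated with a simple intensity-to-latency encoder; the linear mapping and the t_max value below are assumptions, not the paper's exact encoding.

```python
import numpy as np

def intensity_to_latency(image, t_max=100.0):
    """Latency (first-spike) coding: stronger activations fire earlier.
    Returns one spike time per pixel in [0, t_max]."""
    x = np.asarray(image, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)   # normalize to [0, 1]
    latencies = t_max * (1.0 - x)                     # max intensity -> earliest spike (t = 0)
    return latencies

img = np.array([[0.1, 0.9], [0.5, 0.0]])
print(intensity_to_latency(img))
```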

  5. Efficient computation in networks of spiking neurons: simulations and theory

    International Nuclear Information System (INIS)

    Natschlaeger, T.

    1999-01-01

    One of the most prominent features of biological neural systems is that individual neurons communicate via short electrical pulses, the so-called action potentials or spikes. In this thesis we investigate possible mechanisms which can in principle explain how complex computations in spiking neural networks (SNN) can be performed very fast, i.e. within a few tens of milliseconds. Some of these models are based on the assumption that relevant information is encoded by the timing of individual spikes (temporal coding). We will also discuss a model which is based on a population code and still is able to perform fast complex computations. In their natural environment, biological neural systems have to process signals with a rich temporal structure. Hence it is an interesting question how neural systems process time series. In this context we explore possible links between biophysical characteristics of single neurons (refractory behavior, connectivity, time course of postsynaptic potentials) and synapses (unreliability, dynamics) on the one hand and possible computations on time series on the other hand. Furthermore, we describe a general model of computation that exploits dynamic synapses. This model provides a general framework for understanding how neural systems process time-varying signals. (author)

  6. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    International Nuclear Information System (INIS)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-01-01

    In this paper, the generation of multi-clustered structure of self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into clustered structure through the symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure of the burst-based self-organized neural network (BSON) is much shorter than that of the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., higher clustering coefficient and smaller shortest path length than the SSON network. Also, the results of larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that the burst firing can significantly enhance the efficiency of the clustering procedure and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from the bursting dynamics has high efficiency in information processing.

  7. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin, E-mail: xmli@cqu.edu.cn [Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044 (China); College of Automation, Chongqing University, Chongqing 400044 (China)

    2015-11-15

    In this paper, the generation of multi-clustered structure of self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into clustered structure through the symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure of the burst-based self-organized neural network (BSON) is much shorter than that of the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., higher clustering coefficient and smaller shortest path length than the SSON network. Also, the results of larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that the burst firing can significantly enhance the efficiency of the clustering procedure and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from the bursting dynamics has high efficiency in information processing.

  8. A spiking network model of cerebellar Purkinje cells and molecular layer interneurons exhibiting irregular firing

    Directory of Open Access Journals (Sweden)

    William eLennon

    2014-12-01

    Full Text Available While the anatomy of the cerebellar microcircuit is well studied, how it implements cerebellar function is not understood. A number of models have been proposed to describe this mechanism but few emphasize the role of the vast network that Purkinje cells (PKJs) form with the molecular layer interneurons (MLIs), the stellate and basket cells. We propose a model of the MLI-PKJ network composed of simple spiking neurons incorporating the major anatomical and physiological features. In computer simulations, the model reproduces the irregular firing patterns observed in PKJs and MLIs in vitro and a shift toward faster, more regular firing patterns when inhibitory synaptic currents are blocked. In the model, the time between PKJ spikes is shown to be proportional to the amount of feedforward inhibition from an MLI on average. The two key elements of the model are: (1) spontaneously active PKJs and MLIs due to an endogenous depolarizing current, and (2) adherence to known anatomical connectivity along a parasagittal strip of cerebellar cortex. We propose this model to extend previous spiking network models of the cerebellum and for further computational investigation into the role of irregular firing and MLIs in cerebellar learning and function.

  9. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.

    Science.gov (United States)

    Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii

    2017-01-01

    Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
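
    The intuition behind such conversions is that the firing rate of an integrate-and-fire neuron driven by a constant input approximates a rectified-linear response. The toy sketch below illustrates that correspondence; the threshold, reset-by-subtraction scheme and step count are illustrative choices, not the exact conversion procedure of the paper.

```python
def if_rate(analog_input, threshold=1.0, n_steps=1000):
    """Integrate-and-fire neuron driven by a constant input: its firing rate over
    time approximates a rectified-linear (ReLU-like) response."""
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += analog_input          # accumulate the (constant) analog activation
        if v >= threshold:         # emit a spike and reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / n_steps

for a in (-0.2, 0.0, 0.25, 0.5):
    print(a, round(if_rate(a), 3))   # negative inputs give zero rate, like ReLU
```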

  10. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

    Directory of Open Access Journals (Sweden)

    Bodo Rueckauer

    2017-12-01

    Full Text Available Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.

  11. Spiking Neurons for Analysis of Patterns

    Science.gov (United States)

    Huntsberger, Terrance

    2008-01-01

    Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which, among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological

  12. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    Science.gov (United States)

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.

  13. Multiscale Embedded Gene Co-expression Network Analysis.

    Directory of Open Access Journals (Sweden)

    Won-Min Song

    2015-11-01

    Full Text Available Gene co-expression network analysis has been shown effective in identifying functional co-expressed gene modules associated with complex human diseases. However, existing techniques to construct co-expression networks require some critical prior information such as predefined number of clusters, numerical thresholds for defining co-expression/interaction, or do not naturally reproduce the hallmarks of complex systems such as the scale-free degree distribution of small-worldness. Previously, a graph filtering technique called Planar Maximally Filtered Graph (PMFG) has been applied to many real-world data sets such as financial stock prices and gene expression to extract meaningful and relevant interactions. However, PMFG is not suitable for large-scale genomic data due to several drawbacks, such as the high computation complexity O(|V|^3), the presence of false-positives due to the maximal planarity constraint, and the inadequacy of the clustering framework. Here, we developed a new co-expression network analysis framework called Multiscale Embedded Gene Co-expression Network Analysis (MEGENA) by: i) introducing quality control of co-expression similarities, ii) parallelizing embedded network construction, and iii) developing a novel clustering technique to identify multi-scale clustering structures in Planar Filtered Networks (PFNs). We applied MEGENA to a series of simulated data and the gene expression data in breast carcinoma and lung adenocarcinoma from The Cancer Genome Atlas (TCGA). MEGENA showed improved performance over well-established clustering methods and co-expression network construction approaches. MEGENA revealed not only meaningful multi-scale organizations of co-expressed gene clusters but also novel targets in breast carcinoma and lung adenocarcinoma.

  14. Multiscale Embedded Gene Co-expression Network Analysis.

    Science.gov (United States)

    Song, Won-Min; Zhang, Bin

    2015-11-01

    Gene co-expression network analysis has been shown effective in identifying functional co-expressed gene modules associated with complex human diseases. However, existing techniques to construct co-expression networks require some critical prior information such as predefined number of clusters, numerical thresholds for defining co-expression/interaction, or do not naturally reproduce the hallmarks of complex systems such as the scale-free degree distribution of small-worldness. Previously, a graph filtering technique called Planar Maximally Filtered Graph (PMFG) has been applied to many real-world data sets such as financial stock prices and gene expression to extract meaningful and relevant interactions. However, PMFG is not suitable for large-scale genomic data due to several drawbacks, such as the high computation complexity O(|V|^3), the presence of false-positives due to the maximal planarity constraint, and the inadequacy of the clustering framework. Here, we developed a new co-expression network analysis framework called Multiscale Embedded Gene Co-expression Network Analysis (MEGENA) by: i) introducing quality control of co-expression similarities, ii) parallelizing embedded network construction, and iii) developing a novel clustering technique to identify multi-scale clustering structures in Planar Filtered Networks (PFNs). We applied MEGENA to a series of simulated data and the gene expression data in breast carcinoma and lung adenocarcinoma from The Cancer Genome Atlas (TCGA). MEGENA showed improved performance over well-established clustering methods and co-expression network construction approaches. MEGENA revealed not only meaningful multi-scale organizations of co-expressed gene clusters but also novel targets in breast carcinoma and lung adenocarcinoma.
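
    The planar filtering idea underlying PMFG/PFNs can be sketched greedily: visit candidate edges in order of decreasing similarity and keep an edge only if the graph stays planar. The networkx-based code below is an illustrative reimplementation of that idea, not the MEGENA software.

```python
import itertools
import numpy as np
import networkx as nx

def planar_filtered_graph(similarity):
    """Greedy sketch of planar graph filtering: add edges in order of decreasing
    similarity, discarding any edge whose addition would break planarity."""
    n = similarity.shape[0]
    edges = sorted(itertools.combinations(range(n), 2),
                   key=lambda e: similarity[e], reverse=True)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for u, v in edges:
        G.add_edge(u, v, weight=similarity[u, v])
        is_planar, _ = nx.check_planarity(G)
        if not is_planar:
            G.remove_edge(u, v)          # this edge would break planarity
    return G

rng = np.random.default_rng(7)
S = np.abs(np.corrcoef(rng.normal(size=(30, 200))))   # toy gene-gene similarity
pfn = planar_filtered_graph(S)
print(pfn.number_of_nodes(), pfn.number_of_edges())   # planar graph: at most 3n - 6 edges
```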

  15. Application of cross-correlated delay shift rule in spiking neural networks for interictal spike detection.

    Science.gov (United States)

    Lilin Guo; Zhenzhong Wang; Cabrerizo, Mercedes; Adjouadi, Malek

    2016-08-01

    This study proposes a Cross-Correlated Delay Shift (CCDS) supervised learning rule to train neurons with associated spatiotemporal patterns to classify spike patterns. The objective of this study was to evaluate the feasibility of using the CCDS rule to automate the detection of interictal spikes in electroencephalogram (EEG) data from patients with epilepsy. Encoding is the initial yet essential step for spiking neurons to process EEG patterns. A new encoding method is utilized to convert the EEG signal into spike patterns. The simulation results show that the proposed algorithm identified 69 spikes out of 82 spikes, an 84% detection rate, which is quite high considering the subtleties of interictal spikes and the tediousness of monitoring long EEG records. The CCDS rule is also benchmarked against ReSuMe on the same task.
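
    The record above stresses that encoding the continuous EEG signal into spike patterns is the essential first step. The paper's specific encoding method is not reproduced here; the sketch below shows one generic level-crossing encoding as an assumed stand-in, with illustrative parameter choices.

```python
import numpy as np

def level_crossing_encode(signal, fs, n_levels=8):
    """Convert a continuous trace into spike times: each amplitude level gets a
    'neuron' that spikes whenever the signal crosses its threshold upwards.
    A generic encoding sketch, not the specific method used with CCDS."""
    lo, hi = np.percentile(signal, [1, 99])
    thresholds = np.linspace(lo, hi, n_levels)
    spike_times = {k: [] for k in range(n_levels)}
    for t in range(1, len(signal)):
        for k, th in enumerate(thresholds):
            if signal[t - 1] < th <= signal[t]:   # upward crossing of level k
                spike_times[k].append(t / fs)     # spike time in seconds
    return spike_times

# Example: encode one second of a synthetic 256 Hz "EEG" trace.
fs = 256
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(fs)
print({k: len(v) for k, v in level_crossing_encode(eeg, fs).items()})
```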

  16. The effect of increasing levels of embedded generation on the distribution network. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Collinson, A; Earp, G K; Howson, D; Owen, R D; Wright, A J

    1999-10-01

    This report was commissioned as part of the EA Technology Strategic Technology Programme under the guidance of the Module 5 (Embedded Generation) Steering Group. The report aims to provide information related to the distribution and supply of electricity in the context of increasing levels of embedded generation. There is a brief description of the operating environment within which electricity companies in the UK must operate. Technical issues related to the connection of generation to the existing distribution infrastructure are highlighted, and the design philosophy adopted by network designers in accommodating applications for the connection of embedded generation to the network is discussed. The effects that embedded generation has on the network, and the issues these raise, are presented, as many of them constitute barriers to the connection of embedded generators. The final chapters cover the forecast of required connections to 2010 and solutions to the restrictions preventing the connection of more embedded generation to the network. (author)

  17. Detection of M-Sequences from Spike Sequence in Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Yoshi Nishitani

    2012-01-01

    Full Text Available In circuit theory, it is well known that a linear feedback shift register (LFSR) circuit generates pseudorandom bit sequences (PRBS), including M-sequences with the maximum period length. In this study, we tried to detect M-sequences, known as pseudorandom sequences generated by LFSR circuits, in time-series patterns of stimulated action potentials. Stimulated action potentials were recorded from dissociated cultures of hippocampal neurons grown on a multielectrode array. We could find several M-sequences from a 3-stage LFSR circuit (M3). These results show the possibility of assembling LFSR circuits or their equivalents in a neuronal network. However, since the M3 pattern was composed of only four spike intervals, the possibility of an accidental detection was not zero. We therefore detected M-sequences from random spike sequences which were not generated from an LFSR circuit and compared the result with the number of M-sequences in the originally observed raster data. As a result, a significant difference was confirmed: a greater number of "0–1"-reversed 3-stage M-sequences occurred than would have been detected accidentally. This result suggests that some LFSR-equivalent circuits are assembled in neuronal networks.
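
    For reference, the 3-stage LFSR (M3) discussed above produces an M-sequence with the maximum period 2^3 - 1 = 7. A minimal sketch follows; the particular tap positions (one primitive feedback polynomial) and the seed are illustrative choices, not taken from the paper.

```python
def lfsr_m_sequence(n_stages=3, taps=(0, 1), seed=0b001):
    """One period of an M-sequence from a Fibonacci-style LFSR.

    The register bits are b2 b1 b0; the output is b0, and the feedback bit
    (XOR of the tapped bits) is shifted in at the top. With 3 stages and the
    primitive polynomial x^3 + x + 1 (taps on bits 0 and 1), the output has
    the maximum period 2**3 - 1 = 7."""
    state = seed
    period = 2 ** n_stages - 1
    out = []
    for _ in range(period):
        out.append(state & 1)                          # output bit b0
        fb = 0
        for t in taps:                                 # XOR of the tapped bits
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (n_stages - 1))  # shift right, insert feedback
    return out

print(lfsr_m_sequence())  # [1, 0, 0, 1, 0, 1, 1] -- one period of an M3 sequence
```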

  18. Entrepreneurial networks as culturally embedded phenomena

    Directory of Open Access Journals (Sweden)

    Vlatka Skokic

    2015-06-01

    Full Text Available Entrepreneurship research concerning networks has largely focused on network structure, content and governance. We believe that further research is required in order to gain a richer understanding of why specific network forms and types originated. The purpose of this paper is to explore the existence, importance, values and meanings of both the informal and formal networks and networking behaviours of small-scale hotel owner-managers embedded in the socio-economic context of Croatia. In order to gain a richer and more detailed understanding of entrepreneurial networks and to understand the influence of the socio-economic setting on entrepreneurial networking, we employed a qualitative, in-depth study with small hotel owners. Results suggest that entrepreneurs do not establish strong personal and firm-to-firm ties, but rather focus on formal associations. The reported findings identify socio-cultural factors apparently unique to the context of a former socialist economy which have the potential to explain the reported networking behaviour. The adopted research approach brings another dimension to existing theoretical underpinnings, which can encourage researchers to extend or revise theories with new contextual variables.

  19. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.

    Science.gov (United States)

    Gilra, Aditya; Gerstner, Wulfram

    2017-11-27

    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.

  20. Formal Specification Based Automatic Test Generation for Embedded Network Systems

    Directory of Open Access Journals (Sweden)

    Eun Hye Choi

    2014-01-01

    Full Text Available Embedded systems have become increasingly connected and communicate with each other, forming large-scale and complicated network systems. To make their design and testing more reliable and robust, this paper proposes a formal specification language called SENS and a SENS-based automatic test generation tool called TGSENS. Our approach is summarized as follows: (1) A user describes requirements of target embedded network systems by logical property-based constraints using SENS. (2) Given SENS specifications, test cases are automatically generated using a SAT-based solver. Filtering mechanisms to select efficient test cases are also available in our tool. (3) In addition, given a testing goal by the user, test sequences are automatically extracted from exhaustive test cases. We’ve implemented our approach and conducted several experiments on practical case studies. Through the experiments, we confirmed the efficiency of our approach in design and test generation of real embedded air-conditioning network systems.

  1. An Efficient Hardware Circuit for Spike Sorting Based on Competitive Learning Networks

    Directory of Open Access Journals (Sweden)

    Huan-Yuan Chen

    2017-09-01

    Full Text Available This study aims to present an effective VLSI circuit for multi-channel spike sorting. The circuit supports the spike detection, feature extraction and classification operations. The detection circuit is implemented in accordance with the nonlinear energy operator algorithm. Both the peak detection and area computation operations are adopted for the realization of the hardware architecture for feature extraction. The resulting feature vectors are classified by a circuit for competitive learning (CL) neural networks. The CL circuit supports both online training and classification. In the proposed architecture, all the channels share the same detection, feature extraction, learning and classification circuits for a low area cost hardware implementation. The clock-gating technique is also employed for reducing the power dissipation. To evaluate the performance of the architecture, an application-specific integrated circuit (ASIC) implementation is presented. Experimental results demonstrate that the proposed circuit exhibits the advantages of a low chip area, a low power dissipation and a high classification success rate for spike sorting.

  2. An Efficient Hardware Circuit for Spike Sorting Based on Competitive Learning Networks

    Science.gov (United States)

    Chen, Huan-Yuan; Chen, Chih-Chang

    2017-01-01

    This study aims to present an effective VLSI circuit for multi-channel spike sorting. The circuit supports the spike detection, feature extraction and classification operations. The detection circuit is implemented in accordance with the nonlinear energy operator algorithm. Both the peak detection and area computation operations are adopted for the realization of the hardware architecture for feature extraction. The resulting feature vectors are classified by a circuit for competitive learning (CL) neural networks. The CL circuit supports both online training and classification. In the proposed architecture, all the channels share the same detection, feature extraction, learning and classification circuits for a low area cost hardware implementation. The clock-gating technique is also employed for reducing the power dissipation. To evaluate the performance of the architecture, an application-specific integrated circuit (ASIC) implementation is presented. Experimental results demonstrate that the proposed circuit exhibits the advantages of a low chip area, a low power dissipation and a high classification success rate for spike sorting. PMID:28956859
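
    The detection step in both records above follows the nonlinear energy operator (NEO), which in discrete time is psi[n] = x[n]^2 - x[n-1]*x[n+1], with spikes flagged where psi exceeds a threshold. The software sketch below illustrates the operator itself; the smoothing window and threshold factor are assumptions and do not reflect the values used in the ASIC.

```python
import numpy as np

def neo_spike_detect(x, k=8.0, smooth=7):
    """Detect spikes with the nonlinear energy operator (NEO).

    psi[n] = x[n]**2 - x[n-1]*x[n+1]; the trace is smoothed and compared with a
    threshold proportional to the mean NEO energy (the factor k and the window
    length are illustrative choices)."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi = np.convolve(psi, np.bartlett(smooth), mode="same")  # optional smoothing
    threshold = k * psi.mean()
    above = psi > threshold
    # Keep only rising edges so each spike is reported once.
    onsets = np.where(above[1:] & ~above[:-1])[0] + 1
    return onsets

# Example on a noisy trace with two embedded "spikes" near samples 300 and 700.
rng = np.random.default_rng(1)
trace = 0.1 * rng.normal(size=1000)
trace[300:305] += np.hanning(5) * 3.0
trace[700:705] += np.hanning(5) * 3.0
print(neo_spike_detect(trace))  # prints the detected onset indices
```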

  3. Simple networks for spike-timing-based computation, with application to olfactory processing.

    Science.gov (United States)

    Brody, Carlos D; Hopfield, J J

    2003-03-06

    Spike synchronization across neurons can be selective for the situation where neurons are driven at similar firing rates, a "many are equal" computation. This can be achieved in the absence of synaptic interactions between neurons, through phase locking to a common underlying oscillatory potential. Based on this principle, we instantiate an algorithm for robust odor recognition into a model network of spiking neurons whose main features are taken from known properties of biological olfactory systems. Here, recognition of odors is signaled by spike synchronization of specific subsets of "mitral cells." This synchronization is highly odor selective and invariant to a wide range of odor concentrations. It is also robust to the presence of strong distractor odors, thus allowing odor segmentation within complex olfactory scenes. Information about odors is encoded in both the identity of glomeruli activated above threshold (1 bit of information per glomerulus) and in the analog degree of activation of the glomeruli (approximately 3 bits per glomerulus).

  4. Memristors Empower Spiking Neurons With Stochasticity

    KAUST Repository

    Al-Shedivat, Maruan

    2015-06-01

    Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning algorithms. However, such systems must have two crucial features: 1) the neurons should follow a specific behavioral model, and 2) stochastic spiking should be implemented efficiently for it to be scalable. This paper proposes a memristor-based stochastically spiking neuron that fulfills these requirements. First, the analytical model of the memristor is enhanced so it can capture the behavioral stochasticity consistent with experimentally observed phenomena. The switching behavior of the memristor model is demonstrated to be akin to the firing of the stochastic spike response neuron model, the primary building block for probabilistic algorithms in spiking neural networks. Furthermore, the paper proposes a neural soma circuit that utilizes the intrinsic nondeterminism of memristive switching for efficient spike generation. The simulations and analysis of the behavior of a single stochastic neuron and a winner-take-all network built of such neurons and trained on handwritten digits confirm that the circuit can be used for building probabilistic sampling and pattern adaptation machinery in spiking networks. The findings constitute an important step towards scalable and efficient probabilistic neuromorphic platforms. © 2011 IEEE.

  5. Brian: a simulator for spiking neural networks in Python

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2008-11-01

    Full Text Available Brian is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.

  6. Brian: a simulator for spiking neural networks in python.

    Science.gov (United States)

    Goodman, Dan; Brette, Romain

    2008-01-01

    "Brian" is a new simulator for spiking neural networks, written in Python (http://brian. di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.

  7. Pricing of embedded generation: Incorporation of externalities and avoided network losses

    International Nuclear Information System (INIS)

    Rodrigo, Asanka S.; Wijayatunga, Priyantha D.C.

    2007-01-01

    Traditionally, the electricity purchase tariff of embedded generators reflected only the cost of production and delivery of electricity to the consumers, which includes the costs of labor, capital, operation, taxes and insurance. However, the production of electricity causes adverse impacts on the environment. At present, this issue has not been widely addressed by the existing pricing methodologies. This paper proposes a pricing methodology for renewable energy based embedded electricity generation, incorporating the cost of externalities, with a case study on the Sri Lanka power system. It recommends that the embedded generation tariff be based on the principle of 'avoided cost', considering the cost of energy production, the cost of externalities and the cost of network losses. While the 'impact pathway' approach is proposed for calculating the cost of externalities of energy, a nodal-based calculation is proposed for the avoided cost of network losses. The pricing methodology proposed in the paper provides important information for investors when choosing the most economical site for their development. It can also be used to optimize the network use. These will allow the developers of embedded generation facilities and the utilities operating the national grid to maximize the potential of embedded generation. (author)

  8. On the sample complexity of learning for networks of spiking neurons with nonlinear synaptic interactions.

    Science.gov (United States)

    Schmitt, Michael

    2004-09-01

    We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly more important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks is quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.

  9. The Hidden Flow Structure and Metric Space of Network Embedding Algorithms Based on Random Walks.

    Science.gov (United States)

    Gu, Weiwei; Gong, Li; Lou, Xiaodan; Zhang, Jiang

    2017-10-13

    Network embedding, which encodes all vertices in a network as a set of numerical vectors in accordance with its local and global structures, has drawn widespread attention. Network embedding not only learns significant features of a network, such as clustering and link prediction, but also learns a latent vector representation of the nodes which provides theoretical support for a variety of applications, such as visualization, link prediction, node classification, and recommendation. As the latest progress of the research, several algorithms based on random walks have been devised. Although those algorithms have drawn much attention for their high scores in learning efficiency and accuracy, there is still a lack of theoretical explanation, and the transparency of those algorithms has been doubted. Here, we propose an approach based on the open-flow network model to reveal the underlying flow structure and its hidden metric space of different random walk strategies on networks. We show that the essence of embedding based on random walks is the latent metric structure defined on the open-flow network. This not only deepens our understanding of random-walk-based embedding algorithms but also helps in finding new potential applications in network embedding.
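
    Random-walk-based embedding methods of the kind analyzed above (DeepWalk/node2vec style) first generate walk "sentences" and then learn node vectors from co-occurrence. The sketch below covers only the walk-generation stage and is not the authors' open-flow analysis; the graph, walk parameters and the commented-out word2vec call are illustrative assumptions.

```python
import random
import networkx as nx

def random_walks(G, walks_per_node=10, walk_length=20, seed=42):
    """Generate uniform random walks; DeepWalk-style embeddings then treat each
    walk as a 'sentence' and learn node vectors from co-occurrence."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        nodes = list(G.nodes())
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(n) for n in walk])
    return walks

G = nx.karate_club_graph()
corpus = random_walks(G)
print(len(corpus), "walks, e.g.", corpus[0][:8])
# The embedding step (not run here) would then be something like:
# from gensim.models import Word2Vec
# model = Word2Vec(corpus, vector_size=32, window=5, sg=1, min_count=0)
```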

  10. Design of Spiking Central Pattern Generators for Multiple Locomotion Gaits in Hexapod Robots by Christiansen Grammar Evolution.

    Science.gov (United States)

    Espinal, Andres; Rostro-Gonzalez, Horacio; Carpio, Martin; Guerra-Hernandez, Erick I; Ornelas-Rodriguez, Manuel; Sotelo-Figueroa, Marco

    2016-01-01

    This paper presents a method to design Spiking Central Pattern Generators (SCPGs) to achieve locomotion at different frequencies on legged robots. It is validated by embedding the designs into a Field-Programmable Gate Array (FPGA) and implementing them on a real hexapod robot. The SCPGs are automatically designed by means of a Christiansen Grammar Evolution (CGE)-based methodology. The CGE searches for a configuration (synaptic weights and connections) for each neuron in the SCPG. This is carried out through an indirect representation of candidate solutions that evolve to replicate a specific spike train according to a locomotion pattern (gait); the similarity between spike trains is measured with the SPIKE distance, which leads the search to a correct configuration. By using this evolutionary approach, several SCPG design specifications can be explicitly added into the SPIKE-distance-based fitness function, such as looking for Spiking Neural Networks (SNNs) with minimal connectivity or a Central Pattern Generator (CPG) able to generate different locomotion gaits only by changing the initial input stimuli. The SCPG designs have been successfully implemented on a Spartan 6 FPGA board and a real-time validation on a hexapod robot with 12 Degrees Of Freedom (DOFs) is presented.

  11. Embedding multiple self-organisation functionalities in future radio access networks

    NARCIS (Netherlands)

    Jansen, T.; Amirijoo, M.; Türke, U.; Jorguseski, L.; Zetterberg, K.; Nascimento, J.R.V. do; Schmelz, L.C.; Turk, J.; Balan, I.

    2009-01-01

    Wireless network operators today allocate considerable manual effort in managing their networks. A viable solution for lowering the manual effort is to introduce self-organisation functionalities. In this paper we discuss the challenges that are encountered when embedding multiple self-organisation

  12. Population spikes in cortical networks during different functional states.

    Directory of Open Access Journals (Sweden)

    Shirley Mark

    2012-07-01

    Full Text Available Brain computational challenges vary between behavioral states. Engaged animals react according to incoming sensory information, while in relaxed and sleeping states consolidation of the learned information is believed to take place. Different states are characterized by different forms of cortical activity. We study a possible neuronal mechanism for generating these diverse dynamics and suggest their possible functional significance. Previous studies demonstrated that brief synchronized increases in neural firing (Population Spikes) can be generated in homogeneous recurrent neural networks with short-term synaptic depression. Here we consider more realistic networks with clustered architecture. We show that the level of synchronization in neural activity can be controlled smoothly by network parameters. The network shifts from asynchronous activity to a regime in which clusters synchronize separately; the synchronization between the clusters then increases gradually to a fully synchronized state. We examine the effects of different synchrony levels on the transmission of information by the network. We find that the regime of intermediate synchronization is preferential for the flow of information between sparsely connected areas. Based on these results, we suggest that the regime of intermediate synchronization corresponds to the engaged behavioral state of the animal, while global synchronization is exhibited during relaxed and sleeping states.

  13. Spiking neural networks on high performance computer clusters

    Science.gov (United States)

    Chen, Chong; Taha, Tarek M.

    2011-09-01

    In this paper we examine the acceleration of two spiking neural network models on three clusters of multicore processors representing three categories of processors: x86, STI Cell, and NVIDIA GPGPUs. The x86 cluster utilized consists of 352 dual-core AMD Opterons, the Cell cluster consists of 320 Sony PlayStation 3s, while the GPGPU cluster contains 32 NVIDIA Tesla S1070 systems. The results indicate that the GPGPU platform can dominate in performance compared to the Cell and x86 platforms examined. From a cost perspective, however, the GPGPU is more expensive in terms of neuron/s throughput. If the cost of GPGPUs goes down in the future, this platform will become very cost effective for these models.

  14. Linear stability analysis of retrieval state in associative memory neural networks of spiking neurons

    International Nuclear Information System (INIS)

    Yoshioka, Masahiko

    2002-01-01

    We study associative memory neural networks of the Hodgkin-Huxley type of spiking neurons in which multiple periodic spatiotemporal patterns of spike timing are memorized as limit-cycle-type attractors. In encoding the spatiotemporal patterns, we assume spike-timing-dependent synaptic plasticity with an asymmetric time window. Analysis of the periodic solution of the retrieval state reveals that if the area of the negative part of the time window is equivalent to the positive part, then crosstalk among encoded patterns vanishes. A phase transition due to the loss of stability of the periodic solution is observed when we assume a fast α function for direct interaction among neurons. In order to evaluate the critical point of this phase transition, we employ Floquet theory, in which the stability problem of the infinite number of spiking neurons interacting with α functions is reduced to an eigenvalue problem with a finite-size matrix. Numerical integration of the single-body dynamics yields the explicit value of the matrix, which enables us to determine the critical point of the phase transition with a high degree of precision.

  15. Span: spike pattern association neuron for learning spatio-temporal spike patterns.

    Science.gov (United States)

    Mohemmed, Ammar; Schliebs, Stefan; Matsuda, Satoshi; Kasabov, Nikola

    2012-08-01

    Spiking Neural Networks (SNN) were shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN - a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion allowing the processing of spatio-temporal information encoded in the precise timing of spikes. The idea of the proposed algorithm is to transform spike trains during the learning phase into analog signals so that common mathematical operations can be performed on them. Using this conversion, it is possible to apply the well-known Widrow-Hoff rule directly to the transformed spike trains in order to adjust the synaptic weights and to achieve a desired input/output spike behavior of the neuron. In the presented experimental analysis, the proposed learning algorithm is evaluated regarding its learning capabilities, its memory capacity, its robustness to noisy stimuli and its classification performance. Differences and similarities of SPAN regarding two related algorithms, ReSuMe and Chronotron, are discussed.
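
    The core of SPAN, as described above, is to convolve the input, actual and desired spike trains with a kernel and apply the Widrow-Hoff rule to the resulting analog traces. A single-neuron sketch follows; the alpha kernel, time grid and learning rate are illustrative choices rather than the paper's exact settings.

```python
import numpy as np

def alpha_kernel(tau=5.0, dt=0.1, length=50.0):
    t = np.arange(0.0, length, dt)
    return (t / tau) * np.exp(1 - t / tau)

def convolve_train(spike_times, t_max=200.0, dt=0.1, tau=5.0):
    """Turn a list of spike times (ms) into an analog trace via kernel convolution."""
    trace = np.zeros(int(t_max / dt))
    trace[(np.array(spike_times) / dt).astype(int)] = 1.0
    return np.convolve(trace, alpha_kernel(tau, dt), mode="full")[: len(trace)]

def span_update(weights, inputs, desired, actual, lr=0.01, **kw):
    """Widrow-Hoff step on convolved spike trains: dw_i ~ sum_t x_i(t) (d(t) - y(t))."""
    err = convolve_train(desired, **kw) - convolve_train(actual, **kw)
    for i, spikes in enumerate(inputs):
        weights[i] += lr * np.sum(convolve_train(spikes, **kw) * err)
    return weights

# Toy usage: two input trains, a desired output spike at 100 ms, no actual spike yet.
w = np.zeros(2)
w = span_update(w, inputs=[[20.0, 60.0], [90.0, 95.0]], desired=[100.0], actual=[])
print(w)  # the input firing just before the desired spike receives the larger increase
```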

  16. Unsupervised discrimination of patterns in spiking neural networks with excitatory and inhibitory synaptic plasticity.

    Science.gov (United States)

    Srinivasa, Narayan; Cho, Youngkwan

    2014-01-01

    A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP enables this learning to be stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be readout by the sink layer neurons. This activity is used for adaptation of synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to compute if the network can discriminate between old patterns already presented in an initial training session. The DI is also used to compute if the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns, both old and new. This result holds as long as inhibitory synapses employ STDP to continuously enable current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity.

  17. Impact of embedded renewable on transmission and distribution network

    International Nuclear Information System (INIS)

    Pistora, M.; Maslo, K.

    2012-01-01

    This paper deals with the impact of renewable energy sources on both interconnected transmission systems and distribution networks. It evaluates the role of phase-shifting transformers in controlling the active power flows created by renewables, as well as the role of embedded renewables in islanding operation in the distribution network. A model of a photovoltaic power plant from the MODES simulation software is described as well. (Authors)

  18. Impact of sub and supra-threshold adaptation currents in networks of spiking neurons.

    Science.gov (United States)

    Colliaux, David; Yger, Pierre; Kaneko, Kunihiko

    2015-12-01

    Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of the context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial in homeostatic regulation of activity and in gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by the spiking activity and modulating the dynamical responses of the neurons. In this study, by disentangling those effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of those two distinct components of adaptation on the neuronal activity at various scales, starting from single-cell responses up to recurrent network dynamics, and under stationary or non-stationary stimulations. The effects of slow currents on collective dynamics, like the modulation of population oscillations and the reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks.
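
    The sub-threshold (linear) and spike-triggered (supra-threshold) components discussed above can be made explicit in an adaptive integrate-and-fire model: an adaptation variable w couples to the voltage with strength a and jumps by b at each spike. The sketch below is a generic adaptive LIF with illustrative parameters, not the specific models of the study.

```python
import numpy as np

def adaptive_lif(I, dt=0.1, tau_m=20.0, tau_w=200.0, a=0.5, b=1.0,
                 v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron with an adaptation current w.

    'a' sets the sub-threshold (linear) coupling of w to the voltage,
    'b' the supra-threshold (spike-triggered) increment of w."""
    v, w = 0.0, 0.0
    spikes = []
    for n, i_ext in enumerate(I):
        dv = (-v - w + i_ext) / tau_m
        dw = (a * v - w) / tau_w            # sub-threshold adaptation
        v += dt * dv
        w += dt * dw
        if v >= v_th:
            spikes.append(n * dt)
            v = v_reset
            w += b                          # spike-triggered adaptation
    return np.array(spikes)

# A constant step input: the inter-spike intervals lengthen as w builds up,
# i.e. the firing rate adapts.
spike_times = adaptive_lif(np.full(20000, 1.6))   # 2000 ms at dt = 0.1 ms
print(np.diff(spike_times)[:5])
```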

  19. Brand communities embedded in social networks.

    Science.gov (United States)

    Zaglia, Melanie E

    2013-02-01

    Brand communities represent highly valuable marketing, innovation management, and customer relationship management tools. However, applying successful marketing strategies today, and in the future, also means exploring and seizing the unprecedented opportunities of social network environments. This study combines these two social phenomena which have largely been researched separately, and aims to investigate the existence, functionality and different types of brand communities within social networks. The netnographic approach yields strong evidence of this existence; leading to a better understanding of such embedded brand communities, their peculiarities, and motivational drivers for participation; therefore the findings contribute to theory by combining two separate research streams. Due to the advantages of social networks, brand management is now able to implement brand communities with less time and financial effort; however, choosing the appropriate brand community type, cultivating consumers' interaction, and staying tuned to this social engagement are critical factors to gain anticipated brand outcomes.

  20. ZnO core spike particles and nano-networks and their wide range of applications

    Science.gov (United States)

    Wille, S.; Mishra, Y. K.; Gedamu, D.; Kaps, S.; Jin, X.; Koschine, T.; Bathnagar, A.; Adelung, R.

    2011-05-01

    In our approach we produce a polymer composite material with ZnO core spike particles as concave fillers. The core spike particles are synthesized by a high-throughput method. Using PDMS (polydimethylsiloxane) as the matrix material, the core spike particles not only provide high mechanical reinforcement but also influence other material properties in interesting ways, making such a composite attractive for a wide range of applications. In a very similar synthesis route a nanoscopic ZnO network is produced. As a ceramic, this network can withstand temperatures as high as 1300 K. In addition, this material is quite elastic. Finding a material with both of these properties is difficult, as polymers tend to decompose at much lower temperatures and metals melt. Especially under ambient conditions, oxygen often creates a problem for metals at these temperatures. If this material is at the same time a semiconductor, it has high potential as a multifunctional material. Ceramics and classical semiconductors of the III-V or II-VI type are stable at high temperatures but typically brittle. This is different on the nanoscale. Even semiconductor wires such as silicon with a very small diameter do not easily build up enough stress to fail while being bent, because in a first-order approximation the maximum stress of a fiber scales with its diameter.

  1. Anti-correlations in the degree distribution increase stimulus detection performance in noisy spiking neural networks

    NARCIS (Netherlands)

    Martens, M.B. (Marijn B.); A.R. Houweling (Arthur); E. Tiesinga, P.H. (Paul H.)

    2017-01-01

    Neuronal circuits in the rodent barrel cortex are characterized by stable low firing rates. However, recent experiments show that short spike trains elicited by electrical stimulation in single neurons can induce behavioral responses. Hence, the underlying neural networks provide

  2. Silicon-embedded copper nanostructure network for high energy storage

    Science.gov (United States)

    Yu, Tianyue

    2016-03-15

    Provided herein are nanostructure networks having high energy storage, electrochemically active electrode materials including nanostructure networks having high energy storage, as well as electrodes and batteries including the nanostructure networks having high energy storage. According to various implementations, the nanostructure networks have high energy density as well as long cycle life. In some implementations, the nanostructure networks include a conductive network embedded with electrochemically active material. In some implementations, silicon is used as the electrochemically active material. The conductive network may be a metal network such as a copper nanostructure network. Methods of manufacturing the nanostructure networks and electrodes are provided. In some implementations, metal nanostructures can be synthesized in a solution that contains silicon powder to make a composite network structure that contains both. The metal nanostructure growth can nucleate in solution and on silicon nanostructure surfaces.

  3. Silicon-embedded copper nanostructure network for high energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Tianyue

    2018-01-23

    Provided herein are nanostructure networks having high energy storage, electrochemically active electrode materials including nanostructure networks having high energy storage, as well as electrodes and batteries including the nanostructure networks having high energy storage. According to various implementations, the nanostructure networks have high energy density as well as long cycle life. In some implementations, the nanostructure networks include a conductive network embedded with electrochemically active material. In some implementations, silicon is used as the electrochemically active material. The conductive network may be a metal network such as a copper nanostructure network. Methods of manufacturing the nanostructure networks and electrodes are provided. In some implementations, metal nanostructures can be synthesized in a solution that contains silicon powder to make a composite network structure that contains both. The metal nanostructure growth can nucleate in solution and on silicon nanostructure surfaces.

  4. Synaptic energy drives the information processing mechanisms in spiking neural networks.

    Science.gov (United States)

    El Laithy, Karim; Bogdan, Martin

    2014-04-01

    Flow of energy and free energy minimization underpins almost every aspect of naturally occurring physical mechanisms. Inspired by this fact this work establishes an energy-based framework that spans the multi-scale range of biological neural systems and integrates synaptic dynamic, synchronous spiking activity and neural states into one consistent working paradigm. Following a bottom-up approach, a hypothetical energy function is proposed for dynamic synaptic models based on the theoretical thermodynamic principles and the Hopfield networks. We show that a synapse exposes stable operating points in terms of its excitatory postsynaptic potential as a function of its synaptic strength. We postulate that synapses in a network operating at these stable points can drive this network to an internal state of synchronous firing. The presented analysis is related to the widely investigated temporal coherent activities (cell assemblies) over a certain range of time scales (binding-by-synchrony). This introduces a novel explanation of the observed (poly)synchronous activities within networks regarding the synaptic (coupling) functionality. On a network level the transitions from one firing scheme to the other express discrete sets of neural states. The neural states exist as long as the network sustains the internal synaptic energy.

  5. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    Directory of Open Access Journals (Sweden)

    Wilten Nicola

    2016-02-01

    Full Text Available A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.

  6. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights.

    Science.gov (United States)

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
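
    For contrast with the analytical, scale-invariant decoders described above, the conventional NEF route obtains decoders by regularized least squares over the neurons' tuning curves (the "large matrix inversion" the abstract refers to). The sketch below shows that standard route with made-up rectified-linear tuning curves; it is not the paper's analytical construction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_pts = 200, 500
x = np.linspace(-1, 1, n_pts)                    # the represented variable

# Heterogeneous rectified-linear "tuning curves" r_i(x) = max(0, gain_i * e_i * x + bias_i)
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)
encoders = rng.choice([-1.0, 1.0], N)
A = np.maximum(0.0, gains[:, None] * encoders[:, None] * x[None, :] + biases[:, None])

# Least-squares decoders with noise regularization: solve (A A^T + reg I) d = A x
sigma = 0.1 * A.max()
d = np.linalg.solve(A @ A.T + sigma**2 * n_pts * np.eye(N), A @ x)

x_hat = d @ A                                    # decoded estimate of x
print("RMS decoding error:", np.sqrt(np.mean((x_hat - x) ** 2)))
```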

  7. A distributed framework for inter-domain virtual network embedding

    Science.gov (United States)

    Wang, Zihua; Han, Yanni; Lin, Tao; Tang, Hui

    2013-03-01

    Network virtualization has been a promising technology for overcoming the Internet impasse. A main challenge in network virtualization is the efficient assignment of virtual resources. Existing work has focused on intra-domain solutions, whereas the inter-domain situation is more practical in realistic settings. In this paper, we present a distributed inter-domain framework for mapping virtual networks to physical networks which can improve the performance of virtual network embedding. The distributed framework is based on a multi-agent approach. A set of messages for information exchange is defined. We design different operations and IPTV use scenarios to validate the advantages of our framework. The use cases show that our framework can solve the inter-domain problem efficiently.

  8. Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs.

    Directory of Open Access Journals (Sweden)

    Robert R Kerr

    Full Text Available Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
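
    The additive STDP rule analyzed above can be written as a pair-based update, Delta w = A_plus * exp(-Delta t / tau_plus) for pre-before-post pairings and -A_minus * exp(Delta t / tau_minus) otherwise, with the axonal delay shifting the effective Delta t seen at the synapse. The sketch below applies this to a single feed-forward synapse under oscillatory firing; the parameter values and the all-to-all pairing scheme are illustrative assumptions, whereas the paper treats recurrent networks analytically.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Additive pair-based STDP: dt = t_post - t_pre (ms); positive means pre-before-post."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

def apply_stdp(w, pre_times, post_times, axonal_delay=0.0, w_max=1.0):
    """Accumulate all pairwise updates; the axonal delay shifts when the
    presynaptic spike arrives at the synapse, which is what lets STDP
    select among connections with different delays."""
    for t_pre in pre_times:
        for t_post in post_times:
            w += stdp_dw(t_post - (t_pre + axonal_delay))
            w = float(np.clip(w, 0.0, w_max))
    return w

# Pre spikes arrive every 25 ms; post spikes lag by 5 ms. The final weight
# depends systematically on the axonal delay relative to this phase lag.
pre = np.arange(0.0, 200.0, 25.0)
post = pre + 5.0
for delay in (0.0, 2.5, 15.0):
    print(delay, round(apply_stdp(0.5, pre, post, axonal_delay=delay), 4))
```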

  9. Automatic EEG spike detection.

    Science.gov (United States)

    Harner, Richard

    2009-10-01

    Since the 1970s advances in science and technology during each succeeding decade have renewed the expectation of efficient, reliable automatic epileptiform spike detection (AESD). But even when reinforced with better, faster tools, clinically reliable unsupervised spike detection remains beyond our reach. Expert-selected spike parameters were the first and still most widely used for AESD. Thresholds for amplitude, duration, sharpness, rise-time, fall-time, after-coming slow waves, background frequency, and more have been used. It is still unclear which of these wave parameters are essential, beyond peak-peak amplitude and duration. Wavelet parameters are very appropriate to AESD but need to be combined with other parameters to achieve desired levels of spike detection efficiency. Artificial Neural Network (ANN) and expert-system methods may have reached peak efficiency. Support Vector Machine (SVM) technology focuses on outliers rather than centroids of spike and nonspike data clusters and should improve AESD efficiency. An exemplary spike/nonspike database is suggested as a tool for assessing parameters and methods for AESD and is available in CSV or Matlab formats from the author at brainvue@gmail.com. Exploratory Data Analysis (EDA) is presented as a graphic method for finding better spike parameters and for the step-wise evaluation of the spike detection process.

  10. Getting Embedded in Industry Networks Abroad

    DEFF Research Database (Denmark)

    Gretzinger, Susanne; Dyhr Ulrich, Anna Marie; Hollensen, Svend

    Expanding to markets abroad is a vital opportunity to stabilize and/or expand a company’s revenues. Companies who suffer from small domestic markets try to overcome competitive pressures by entering new markets (Korsakine and Tvaronaviciene, 2012). In particular, the attention for SMEs to increase their activities abroad has turned to the potential of accessing emerging markets with faster growth like the BRIC countries (Brazil, Russia, India and China) in response to the saturation of domestic and European markets (Neupert, et al. 2006; Ulrich, et al., 2014). However, the challenges associated ... (Hollensen, 2017). This research project aims to explain how the “incubator” can support the acquisition of lacking experiential knowledge and facilitate the process of getting embedded in markets or industrial networks abroad. Theoretical foundation: This paper adopts an industrial network approach in line ...

  11. Studying the mechanisms of the Somatic Marker Hypothesis in Spiking Neural Networks (SNN

    Directory of Open Access Journals (Sweden)

    Manuel GONZÁLEZ

    2013-07-01

    Full Text Available In this paper, a mechanism of emotional bias in decision making is studied using Spiking Neural Networks to simulate the associative and recurrent networks involved. The results obtained are along the lines of those proposed by A. Damasio as part of the Somatic Marker Hypothesis, in particular, that, in the absence of emotional input, the decision making is driven by the rational input alone. Appropriate representations for the Objective and Emotional Values are also suggested, provided a spike representation (code) of the information.

  12. Studying the mechanisms of the Somatic Marker Hypothesis in Spiking Neural Networks (SNN

    Directory of Open Access Journals (Sweden)

    Alejandro JIMÉNEZ-RODRÍGUEZ

    2012-09-01

    Full Text Available In this paper, a mechanism of emotional bias in decision making is studied using Spiking Neural Networks to simulate the associative and recurrent networks involved. The results obtained are along the lines of those proposed by A. Damasio as part of the Somatic Marker Hypothesis, in particular, that, in the absence of emotional input, the decision making is driven by the rational input alone. Appropriate representations for the Objective and Emotional Values are also suggested, provided a spike representation (code) of the information.

  13. Spike-based population coding and working memory.

    Directory of Open Access Journals (Sweden)

    Martin Boerlin

    2011-02-01

    Full Text Available Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.

  14. Optimizing Semantic Pointer Representations for Symbol-Like Processing in Spiking Neural Networks.

    Science.gov (United States)

    Gosmann, Jan; Eliasmith, Chris

    2016-01-01

    The Semantic Pointer Architecture (SPA) is a proposal of specifying the computations and architectural elements needed to account for cognitive functions. By means of the Neural Engineering Framework (NEF) this proposal can be realized in a spiking neural network. However, in any such network each SPA transformation will accumulate noise. By increasing the accuracy of common SPA operations, the overall network performance can be increased considerably. As well, the representations in such networks present a trade-off between being able to represent all possible values and being only able to represent the most likely values, but with high accuracy. We derive a heuristic to find the near-optimal point in this trade-off. This allows us to improve the accuracy of common SPA operations by up to 25 times. Ultimately, it allows for a reduction of neuron number and a more efficient use of both traditional and neuromorphic hardware, which we demonstrate here.
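
    Semantic Pointer operations bind and unbind high-dimensional vectors with circular convolution, which is where the accumulated noise discussed above enters: each transformation slightly degrades the recovered vector. A minimal FFT-based sketch of binding and approximate unbinding follows; the dimensionality and the random vectors are illustrative.

```python
import numpy as np

def cconv(a, b):
    """Circular convolution, the binding operation used for Semantic Pointers."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def involution(a):
    """Approximate inverse element: binding with it unbinds."""
    return np.concatenate(([a[0]], a[:0:-1]))

d = 512
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
role, filler = unit(rng.normal(size=d)), unit(rng.normal(size=d))

bound = cconv(role, filler)                   # e.g. binding a role with its filler
recovered = cconv(bound, involution(role))    # unbind with the role's approximate inverse
print("similarity to filler:", float(unit(recovered) @ filler))  # well above chance (~1/sqrt(d))
```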

  15. Embedded generation and network management issues

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    This report focuses on the characteristics of power generators that are important to accommodation in a distribution system. Part 1 examines the differences between transmission and distribution systems, and issues such as randomness, diversity, predictability, and controllability associated with accommodation in a distribution system. Part 2 concentrates on technical and operational issues relating to embedded generation, and the possible impact of the New Electricity Trading Arrangements. Commercial issues, contractual relationships for network charging and provision of services, and possible ways forward are examined in the last three parts of the report.

  16. Image-based environmental monitoring sensor application using an embedded wireless sensor network.

    Science.gov (United States)

    Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh

    2014-08-28

    This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions.

  17. Image-Based Environmental Monitoring Sensor Application Using an Embedded Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Jeongyeup Paek

    2014-08-01

    Full Text Available This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet’s built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions.

  18. Cooperative Dynamics in Lattice-Embedded Scale-Free Networks

    International Nuclear Information System (INIS)

    Shang Lihui; Zhang Mingji; Yang Yanqing

    2009-01-01

    We investigate cooperative behaviors of lattice-embedded scale-free networking agents in the prisoner's dilemma game model by employing two initial strategy distribution mechanisms, which are specific distribution to the most connected sites (hubs) and random distribution. Our study indicates that the game dynamics crucially depends on the underlying spatial network structure with different strategy distribution mechanisms. The cooperators' specific distribution contributes to an enhanced level of cooperation in the system compared with the random one, and cooperation is robust to cooperators' specific distribution but fragile to defectors' specific distribution. Especially, unlike the specific case, increasing the heterogeneity of the network does not always favor the emergence of cooperation under the random mechanism. Furthermore, we study the geographical effects and find that the geographically constrained network structure tends to improve the evolution of cooperation in both the random and the specific case for a large temptation to defect.

  19. Mean-field analysis of orientation selectivity in inhibition-dominated networks of spiking neurons.

    Science.gov (United States)

    Sadeh, Sadra; Cardanobile, Stefano; Rotter, Stefan

    2014-01-01

    Mechanisms underlying the emergence of orientation selectivity in the primary visual cortex are highly debated. Here we study the contribution of inhibition-dominated random recurrent networks to orientation selectivity, and more generally to sensory processing. By simulating and analyzing large-scale networks of spiking neurons, we investigate tuning amplification and contrast invariance of orientation selectivity in these networks. In particular, we show how selective attenuation of the common mode and amplification of the modulation component take place in these networks. Selective attenuation of the baseline, which is governed by the exceptional eigenvalue of the connectivity matrix, removes the unspecific, redundant signal component and ensures the invariance of selectivity across different contrasts. Selective amplification of modulation, which is governed by the operating regime of the network and depends on the strength of coupling, amplifies the informative signal component and thus increases the signal-to-noise ratio. Here, we perform a mean-field analysis which accounts for this process.

  20. Putative regulatory sites unraveled by network-embedded thermodynamic analysis of metabolome data

    NARCIS (Netherlands)

    Kümmel, Anne; Panke, Sven; Heinemann, Matthias

    2006-01-01

    As one of the most recent members of the omics family, large-scale quantitative metabolomics data are currently complementing our systems biology data pool and offer the chance to integrate the metabolite level into the functional analysis of cellular networks. Network-embedded thermodynamic

  1. Estimation of network structure for signal propagations by the analysis of multichannel action potentials in cultured neural networks; Ta channel katsudo den`i kaiseki ni yoru baiyo shinkei kairomonai kofun denpa keiro no suitei

    Energy Technology Data Exchange (ETDEWEB)

    Konno, N.; Fukami, T.; Shiina, T. [University of Tsukuba, Tsukuba (Japan)]; Jinbo, Y. [Nippon Telegraph and Telephone Corp., Tokyo (Japan)]

    1998-07-01

    We have fabricated a substrate with 64 embedded microelectrodes using semiconductor technology to investigate biological signal processing in the brain by using cultured neural networks of fetal rat neocortex in vitro. We analyzed temporal and spatial activity patterns of neural networks cultured on the electrode-array substrate and attempted to examine the network structure constituted by the neurons and the propagating patterns of electrical activity induced by electric stimuli. In the experiments, each microelectrode was 30 {mu}m square with 150 {mu}m spacing. For stimulation, one of the electrodes was selected and current pulses were applied through an isolated circuit. After the network had been cultured for about 50 days, responses of neurons to electric stimuli were monitored extracellularly through the 64-channel electrode array. Data recorded at each electrode consist of several spike trains generated by different cells. Therefore, these trains were separated by using wavelet transform and template matching for each electrode. We refer to the temporal patterns of generated spikes at each electrode as 'spike sequences'. Next, we compared the spike sequences among the multichannel data and visualized the cultured neural network structure by identifying the directions of propagation and the cell connections. 15 refs., 9 figs.
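
    The separation step can be illustrated with a toy template-matching assignment in which each detected waveform is attributed to the nearest of a set of per-cell templates. This is a minimal sketch only; the templates, noise level and nearest-template rule are illustrative assumptions, and the wavelet preprocessing used in the record is omitted.

        import numpy as np

        def assign_spikes(waveforms, templates):
            """Assign each detected spike waveform to the nearest template (putative
            cell) by squared distance -- a bare-bones stand-in for the wavelet plus
            template-matching separation described in the record."""
            w = np.asarray(waveforms)[:, None, :]              # (n_spikes, 1, n_samples)
            tpl = np.asarray(templates)[None, :, :]            # (1, n_cells, n_samples)
            return ((w - tpl) ** 2).sum(axis=2).argmin(axis=1)

        # toy data: two hypothetical cell templates plus noisy occurrences of each
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 30)
        templates = np.stack([np.exp(-((t - 0.3) / 0.05) ** 2),         # narrow spike
                              -0.7 * np.exp(-((t - 0.5) / 0.15) ** 2)])  # broad, inverted
        labels = rng.integers(0, 2, size=40)
        waveforms = templates[labels] + 0.1 * rng.normal(size=(40, 30))
        print("fraction correctly assigned:",
              np.mean(assign_spikes(waveforms, templates) == labels))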

  2. Detection of bursts in neuronal spike trains by the mean inter-spike interval method

    Institute of Scientific and Technical Information of China (English)

    Lin Chen; Yong Deng; Weihua Luo; Zhen Wang; Shaoqun Zeng

    2009-01-01

    Bursts are electrical spikes fired at a high frequency, and they are among the most important properties for synaptic plasticity and information processing in the central nervous system. However, bursts are difficult to identify because bursting activities or patterns vary with physiological conditions or external stimuli. In this paper, a simple method to automatically detect bursts in spike trains is described. This method auto-adaptively sets a parameter (the mean inter-spike interval) according to intrinsic properties of the detected spike trains, without any arbitrary choices or operator judgment. When the mean value of several successive inter-spike intervals is not larger than this parameter, a burst is identified. By this method, bursts can be automatically extracted from different bursting patterns of cultured neurons on multi-electrode arrays, as accurately as by visual inspection. Furthermore, significant changes of burst variables caused by electrical stimulation have been found in the spontaneous activity of neuronal networks. These results suggest that the mean inter-spike interval method is robust for detecting changes in burst patterns and characteristics induced by environmental alterations.
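
    A minimal sketch of the mean inter-spike interval idea is given below. It is illustrative only: the run-length criterion of three intra-burst intervals and the per-interval comparison against the mean ISI are assumptions, whereas the record compares the mean of several successive intervals against the threshold.

        import numpy as np

        def detect_bursts(spike_times, min_intraburst_isis=3):
            """Flag a burst when a run of successive inter-spike intervals (ISIs) stays
            at or below the mean ISI of the whole train (illustrative sketch, not the
            exact criterion of the record)."""
            isis = np.diff(spike_times)
            threshold = isis.mean()              # auto-adaptive parameter from the data
            bursts, start = [], None
            for i, isi in enumerate(isis):
                if isi <= threshold:
                    start = i if start is None else start
                else:
                    if start is not None and i - start >= min_intraburst_isis:
                        bursts.append((spike_times[start], spike_times[i]))
                    start = None
            if start is not None and len(isis) - start >= min_intraburst_isis:
                bursts.append((spike_times[start], spike_times[-1]))
            return bursts

        # toy train: sparse background spikes plus two dense bursts at t = 10 s and 20 s
        background = np.array([0.0, 3.0, 7.0, 15.0, 25.0])
        burst1, burst2 = 10.0 + 0.01 * np.arange(5), 20.0 + 0.01 * np.arange(5)
        print(detect_bursts(np.sort(np.concatenate([background, burst1, burst2]))))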

  3. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State

    Science.gov (United States)

    Lagzi, Fereshteh; Rotter, Stefan

    2015-01-01

    We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, decision-making processes in perception and cognition, for instance, have been linked to it. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in the case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the “within” versus “between” connection weights (bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed “winnerless competition”, which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node; its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might
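
    A toy Euler-Maruyama simulation of two competing Lotka-Volterra populations conveys the switching mechanism sketched above; the competition strength, the additive (rather than multiplicative) noise, and the dominance threshold are illustrative assumptions, not parameters of the record.

        import numpy as np

        # Euler-Maruyama integration of two competing Lotka-Volterra populations, a toy
        # analogue of the switching between the two excitatory subnetworks; parameters,
        # additive noise and the dominance threshold are assumptions for illustration.
        rng = np.random.default_rng(1)
        dt, T, a, sigma = 5e-3, 500.0, 1.2, 0.2
        x = np.array([0.6, 0.4])                      # activities of the two subnetworks
        state, switches = None, 0
        for _ in range(int(T / dt)):
            drift = x * (1.0 - x - a * x[::-1])       # Lotka-Volterra competition term
            x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
            x = np.clip(x, 0.0, None)                 # activities stay non-negative
            if abs(x[0] - x[1]) > 0.3:                # register only clear dominance
                s = int(x[0] > x[1])
                if state is not None and s != state:
                    switches += 1
                state = s
        print(f"number of switches between dominant subnetworks: {switches}")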

  4. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State.

    Science.gov (United States)

    Lagzi, Fereshteh; Rotter, Stefan

    2015-01-01

    We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, for instance decision-making processes in perception and cognition have been implicated with it. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed "winnerless competition", which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node, its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might suggest a

  5. VLSI implementation of a bio-inspired olfactory spiking neural network.

    Science.gov (United States)

    Hsieh, Hung-Yi; Tang, Kea-Tiong

    2012-07-01

    This paper presents a low-power, neuromorphic spiking neural network (SNN) chip that can be integrated in an electronic nose system to classify odor. The proposed SNN takes advantage of sub-threshold oscillation and onset-latency representation to reduce power consumption and chip area, providing a more distinct output for each odor input. The synaptic weights between the mitral and cortical cells are modified according to a spike-timing-dependent plasticity learning rule. During the experiment, the odor data are sampled by a commercial electronic nose (Cyranose 320) and are normalized before training and testing to ensure that the classification result is only caused by learning. Measurement results show that the circuit only consumed an average power of approximately 3.6 μW with a 1-V power supply to discriminate odor data. The SNN has either a high or low output response for a given input odor, making it easy to determine whether the circuit has made the correct decision. The measurement results of the SNN chip are compared with some well-known algorithms (support vector machine and the K-nearest neighbor program) to demonstrate the classification performance of the proposed SNN chip. The mean testing accuracy is 87.59% for the data used in this paper.

  6. Temporal sequence learning in winner-take-all networks of spiking neurons demonstrated in a brain-based device.

    Science.gov (United States)

    McKinstry, Jeffrey L; Edelman, Gerald M

    2013-01-01

    Animal behavior often involves a temporally ordered sequence of actions learned from experience. Here we describe simulations of interconnected networks of spiking neurons that learn to generate patterns of activity in correct temporal order. The simulation consists of large-scale networks of thousands of excitatory and inhibitory neurons that exhibit short-term synaptic plasticity and spike-timing dependent synaptic plasticity. The neural architecture within each area is arranged to evoke winner-take-all (WTA) patterns of neural activity that persist for tens of milliseconds. In order to generate and switch between consecutive firing patterns in correct temporal order, a reentrant exchange of signals between these areas was necessary. To demonstrate the capacity of this arrangement, we used the simulation to train a brain-based device responding to visual input by autonomously generating temporal sequences of motor actions.

  7. Embedded systems handbook networked embedded systems

    CERN Document Server

    Zurawski, Richard

    2009-01-01

    Considered a standard industry resource, the Embedded Systems Handbook provided researchers and technicians with the authoritative information needed to launch a wealth of diverse applications, including those in automotive electronics, industrial automated systems, and building automation and control. Now a new resource is required to report on current developments and provide a technical reference for those looking to move the field forward yet again. Divided into two volumes to accommodate this growth, the Embedded Systems Handbook, Second Edition presents a comprehensive view on this area

  8. How well do mean field theories of spiking quadratic-integrate-and-fire networks work in realistic parameter regimes?

    Science.gov (United States)

    Grabska-Barwińska, Agnieszka; Latham, Peter E

    2014-06-01

    We use mean field techniques to compute the distribution of excitatory and inhibitory firing rates in large networks of randomly connected spiking quadratic integrate and fire neurons. These techniques are based on the assumption that activity is asynchronous and Poisson. For most parameter settings these assumptions are strongly violated; nevertheless, so long as the networks are not too synchronous, we find good agreement between mean field prediction and network simulations. Thus, much of the intuition developed for randomly connected networks in the asynchronous regime applies to mildly synchronous networks.

  9. Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems

    Directory of Open Access Journals (Sweden)

    Gabriel A. Fonseca Guerra

    2017-12-01

    Full Text Available Constraint satisfaction problems (CSP) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration effectively causing a restart.

  10. Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems.

    Science.gov (United States)

    Fonseca Guerra, Gabriel A; Furber, Steve B

    2017-01-01

    Constraint satisfaction problems (CSP) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration effectively causing a restart.

  11. Overcoming barriers to scheduling embedded generation to support distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Wright, A.J.; Formby, J.R.

    2000-07-01

    Current scheduling of embedded generation for distribution in the UK is limited and patchy. Some DNOs actively schedule while others do none. The literature on the subject is mainly about accommodating volatile wind output, and optimising island systems, for both cost of supply and network stability. The forthcoming NETA will lower prices, expose unpredictable generation to imbalance markets and could introduce punitive constraint payments on DNOs, but at the same time create a dynamic market for both power and ancillary services from embedded generators. Most renewable generators either run as base load (e.g. waste) or according to the vagaries of the weather (e.g. wind, hydro), so offer little scope for scheduling other than 'off'. CHP plant is normally heat-led for industrial processes or building needs, but supplementary firing or thermal storage often allow considerable scope for scheduling. Micro-CHP with thermal storage could provide short-term scheduling, but tends to be running anyway during the evening peak. Standby generation appears to be ideal for scheduling, but in practice operators may be unwilling to run parallel with the network, and noise and pollution problems may preclude frequent operation. Statistical analysis can be applied to calculate the reliability of several generators compared to one; with a large number of generators such as micro-CHP reliability of a proportion of load is close to unity. The type of communication for generation used will depend on requirements for bandwidth, cost, reliability and whether it is bundled with other services. With high levels of deeply embedded, small-scale generation using induction machines, voltage control and black start capability will become important concerns on 11 kV and LV networks. This will require increased generation monitoring and remote control of switchgear. Examples of cost benefits from scheduling are given, including deferred reinforcement, increased exports on non

  12. SPICODYN: A Toolbox for the Analysis of Neuronal Network Dynamics and Connectivity from Multi-Site Spike Signal Recordings.

    Science.gov (United States)

    Pastore, Vito Paolo; Godjoski, Aleksandar; Martinoia, Sergio; Massobrio, Paolo

    2018-01-01

    We implemented an automated and efficient open-source software for the analysis of multi-site neuronal spike signals. The software package, named SPICODYN, has been developed as a standalone Windows GUI application, using the C# programming language with Microsoft Visual Studio, based on the .NET Framework 4.5 development environment. Accepted input data formats are HDF5, level 5 MAT and text files, containing recorded or generated time series spike signals data. SPICODYN processes such electrophysiological signals focusing on: spiking and bursting dynamics and functional-effective connectivity analysis. In particular, for inferring network connectivity, a new implementation of the transfer entropy method is presented dealing with multiple time delays (temporal extension) and with multiple binary patterns (high order extension). SPICODYN is specifically tailored to process data coming from different Multi-Electrode Arrays setups, guaranteeing, in those specific cases, automated processing. The optimized implementation of the Delayed Transfer Entropy and the High-Order Transfer Entropy algorithms allows performing accurate and rapid analysis on multiple spike trains from thousands of electrodes.
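
    For orientation, a plug-in estimator of delayed transfer entropy between two binary spike trains can be written in a few lines. The sketch below uses order-1 histories and toy data; it is an illustrative re-implementation under those assumptions, not the optimized SPICODYN code.

        import numpy as np

        def delayed_transfer_entropy(x, y, delay):
            """Plug-in delayed transfer entropy (bits) from binary train x to y:
            TE = sum p(y_t1, y_t, x_s) * log2[ p(y_t1 | y_t, x_s) / p(y_t1 | y_t) ],
            with y_t1 = y[t+1], y_t = y[t], x_s = x[t+1-delay]. Order-1 histories;
            an illustrative re-implementation, not the optimized SPICODYN code."""
            x, y = np.asarray(x, int), np.asarray(y, int)
            t = np.arange(delay, len(y) - 1)
            triples = np.stack([y[t + 1], y[t], x[t + 1 - delay]], axis=1)
            te = 0.0
            for y1, y0, xs in {tuple(row) for row in triples}:
                p_joint = np.mean((triples == (y1, y0, xs)).all(axis=1))
                p_full = p_joint / np.mean((triples[:, 1:] == (y0, xs)).all(axis=1))
                p_red = (np.mean((triples[:, :2] == (y1, y0)).all(axis=1))
                         / np.mean(triples[:, 1] == y0))
                te += p_joint * np.log2(p_full / p_red)
            return te

        # toy check: y is a noisy copy of x delayed by 3 bins, so TE peaks at delay 3
        rng = np.random.default_rng(0)
        x = (rng.random(20000) < 0.1).astype(int)
        y = np.roll(x, 3) & (rng.random(20000) < 0.8)
        print({d: round(delayed_transfer_entropy(x, y, d), 4) for d in (1, 3, 5)})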

  13. A Data Gathering Scheme in Wireless Sensor Networks Based on Synchronization of Chaotic Spiking Oscillator Networks

    International Nuclear Information System (INIS)

    Nakano, Hidehiro; Utani, Akihide; Miyauchi, Arata; Yamamoto, Hisao

    2011-01-01

    This paper studies a chaos-based data gathering scheme in multiple-sink wireless sensor networks. In the proposed scheme, each wireless sensor node has a simple chaotic oscillator. The oscillators generate spike signals with chaotic interspike intervals, and are impulsively coupled by the signals via wireless communication. Each wireless sensor node transmits and receives sensor information only at the timing of the couplings. The proposed scheme can exhibit various chaos synchronous phenomena and their breakdown phenomena, and can effectively gather sensor information with a significantly smaller number of transmissions and receptions than the conventional scheme. Also, the proposed scheme can flexibly adapt to various wireless sensor networks, not only with a single sink node but also with multiple sink nodes. This paper introduces our previous works. Through simulation experiments, we show the effectiveness of the proposed scheme and discuss its development potential.

  14. Recovery of Dynamics and Function in Spiking Neural Networks with Closed-Loop Control.

    Science.gov (United States)

    Vlachos, Ioannis; Deniz, Taşkin; Aertsen, Ad; Kumar, Arvind

    2016-02-01

    There is a growing interest in developing novel brain stimulation methods to control disease-related aberrant neural activity and to address basic neuroscience questions. Conventional methods for manipulating brain activity rely on open-loop approaches that usually lead to excessive stimulation and, crucially, do not restore the original computations performed by the network. Thus, they are often accompanied by undesired side-effects. Here, we introduce delayed feedback control (DFC), a conceptually simple but effective method, to control pathological oscillations in spiking neural networks (SNNs). Using mathematical analysis and numerical simulations we show that DFC can restore a wide range of aberrant network dynamics either by suppressing or enhancing synchronous irregular activity. Importantly, DFC, besides steering the system back to a healthy state, also recovers the computations performed by the underlying network. Finally, using our theory we identify the role of single neuron and synapse properties in determining the stability of the closed-loop system.
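
    The construction of the control signal can be illustrated with a single delayed-feedback rate model in which the stimulation is proportional to the difference between delayed and current activity (a Pyragas-type term). The rate equation, its parameters and the half-period control delay below are illustrative assumptions, not the spiking network model of the record.

        import numpy as np

        def simulate(K=0.0, tau_c_steps=0, T=2000.0, dt=0.1,
                     tau_m=10.0, d=20.0, w=8.0, I0=4.0):
            """Toy delayed-feedback rate model (not the spiking network of the record).
            DFC stimulation S(t) = K * (r(t - tau_c) - r(t)) is added to the input."""
            f = lambda x: 1.0 / (1.0 + np.exp(-x))          # population gain function
            steps, d_steps = int(T / dt), int(d / dt)
            r = np.full(steps, 0.55)                        # history slightly off the fixed point
            for t in range(max(d_steps, tau_c_steps), steps - 1):
                S = K * (r[t - tau_c_steps] - r[t]) if K else 0.0
                r[t + 1] = r[t] + dt / tau_m * (-r[t] + f(I0 - w * r[t - d_steps] + S))
            return r[steps // 2:]                           # discard the transient

        dt = 0.1
        free = simulate(dt=dt)                              # uncontrolled oscillation
        up = np.where(np.diff(np.sign(free - free.mean())) > 0)[0]
        period = np.mean(np.diff(up)) * dt                  # estimated oscillation period
        controlled = simulate(K=4.0, tau_c_steps=int(0.5 * period / dt), dt=dt)
        print(f"rate fluctuation (std): free={free.std():.3f}, with DFC={controlled.std():.3f}")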

  15. Image recovery using diffusion equation embedded neural network

    International Nuclear Information System (INIS)

    Torkamani-Azar, F.

    2001-01-01

    Artificial neural networks with their inherent parallelism have been shown to perform well in many processing applications. In this paper, a new self-organizing approach for the recovery of gray level images degraded by additive noise based on embedding the diffusion equation in a neural network (without using a priori knowledge about the image point spread function, noise or original image) is described, which enhances and restores the gray levels of degraded images and is intended for application in low-level processing. Two learning features have been proposed which would be effective in the practical implementation of such a network. The recovery procedure needs some parameter estimation such as different error goals. While the required computation is not excessive, the procedure does not require too many iterations and convergence is very fast. In addition, through simulation the new network showed a superior ability to give a better-quality result with a minimum of the sum of squared errors.
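
    As a point of reference, the underlying diffusion (heat) equation iteration that such a network embeds can be sketched directly. Plain isotropic diffusion on a toy image is shown here under stated assumptions (step size, iteration count, periodic boundaries); the record's self-organizing learning features are not reproduced.

        import numpy as np

        def diffuse_denoise(img, n_iter=50, lam=0.2):
            """Iteratively smooth a noisy gray-level image with the discrete heat
            (diffusion) equation: u <- u + lam * Laplacian(u). Plain isotropic
            diffusion is used here as a stand-in for the network-embedded scheme."""
            u = img.astype(float).copy()
            for _ in range(n_iter):
                lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
                u += lam * lap                      # lam <= 0.25 keeps the update stable
            return u

        # toy example: a flat gray patch corrupted by additive Gaussian noise
        rng = np.random.default_rng(0)
        clean = np.full((64, 64), 128.0)
        noisy = clean + rng.normal(0, 25, clean.shape)
        restored = diffuse_denoise(noisy)
        print(f"RMSE noisy={np.sqrt(np.mean((noisy - clean) ** 2)):.1f}, "
              f"restored={np.sqrt(np.mean((restored - clean) ** 2)):.1f}")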

  16. Development of an Embedded Networked Sensing System for Structural Health Monitoring

    OpenAIRE

    Whang, Daniel; Xu, Ning; Rangwala, Sumit; Chintalapudi, Krishna; Govindan, Ramesh; Wallace, J W

    2004-01-01

    An innovative networked embedded sensing system for structural health monitoring is currently being developed. This sensor network has been prototyped in the laboratory, and will be deployed in a series of forced-vibration tests involving a full-scale, four-story office building in the coming months. The low-power wireless seismic sensor system enables the acquisition of 15–30 channels of 16-bit accelerometer data at 128 Hz over a wireless network. The advantage of such a system is its t...

  17. Efficient transmission of subthreshold signals in complex networks of spiking neurons.

    Science.gov (United States)

    Torres, Joaquin J; Elices, Irene; Marro, J

    2015-01-01

    We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of the underlying noise. Assuming Hebbian weights for maximal synaptic conductances--that naturally balances the network with excitatory and inhibitory synapses--and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron and the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns and diverse network topologies, namely, diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in these realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it could also occur in actual neural systems, as recent psychophysical experiments suggest.

  18. Efficient transmission of subthreshold signals in complex networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Joaquin J Torres

    Full Text Available We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of the underlying noise. Assuming Hebbian weights for maximal synaptic conductances--that naturally balances the network with excitatory and inhibitory synapses--and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. This includes a memory phase where population of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appears and an asynchronous or noisy phase. When a weak stimulus input is applied to each neuron, increasing the level of noise in the medium we found an efficient transmission of such stimuli around the transition and critical points separating different phases for well-defined different levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different type and number of stored patterns and diverse network topologies, namely, diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex networks topologies, make very likely that it could also occur in actual neural systems as recent psycho-physical experiments suggest.

  19. An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks.

    Science.gov (United States)

    Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi

    2017-01-01

    In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become more and more urgent. The reason is essentially due to the design of innovative neuroprostheses where biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, obtaining good accuracy, real-time performance, and the possibility to create a hybrid system without any custom hardware, just programming the hardware to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network counting up to 1,440 neurons, in real-time, at a sampling rate of 10 kHz, which is reasonable for small to medium scale extra-cellular closed-loop experiments.
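
    The neuron model used by the platform is the Izhikevich model, whose update and reset rules are compact enough to sketch in software. The Euler step below uses the standard regular-spiking parameters; it does not reproduce the fixed-point arithmetic or the FPGA pipeline of the actual design.

        import numpy as np

        def izhikevich(I, T=1000.0, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
            """Single Izhikevich neuron (regular-spiking parameters), Euler integration.
            Illustrates the model named in the record; the digital FPGA implementation
            itself is not reproduced here."""
            v, u, spikes = -65.0, -65.0 * b, []
            for step in range(int(T / dt)):
                v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)   # membrane potential
                u += dt * a * (b * v - u)                             # recovery variable
                if v >= 30.0:                                         # spike and reset
                    spikes.append(step * dt)
                    v, u = c, u + d
            return spikes

        print(f"{len(izhikevich(I=10.0))} spikes in 1 s at constant input I=10")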

  20. Dynamic Control of Synchronous Activity in Networks of Spiking Neurons.

    Directory of Open Access Journals (Sweden)

    Axel Hutt

    Full Text Available Oscillatory brain activity is believed to play a central role in neural coding. Accumulating evidence shows that features of these oscillations are highly dynamic: power, frequency and phase fluctuate alongside changes in behavior and task demands. The role and mechanism supporting this variability is however poorly understood. We here analyze a network of recurrently connected spiking neurons with time delay displaying stable synchronous dynamics. Using mean-field and stability analyses, we investigate the influence of dynamic inputs on the frequency of firing rate oscillations. We show that afferent noise, mimicking inputs to the neurons, causes smoothing of the system's response function, displacing equilibria and altering the stability of oscillatory states. Our analysis further shows that these noise-induced changes cause a shift of the peak frequency of synchronous oscillations that scales with input intensity, leading the network towards critical states. We lastly discuss the extension of these principles to periodic stimulation, in which externally applied driving signals can trigger analogous phenomena. Our results reveal one possible mechanism involved in shaping oscillatory activity in the brain and associated control principles.

  1. Dynamic Control of Synchronous Activity in Networks of Spiking Neurons.

    Science.gov (United States)

    Hutt, Axel; Mierau, Andreas; Lefebvre, Jérémie

    Oscillatory brain activity is believed to play a central role in neural coding. Accumulating evidence shows that features of these oscillations are highly dynamic: power, frequency and phase fluctuate alongside changes in behavior and task demands. The role and mechanism supporting this variability is however poorly understood. We here analyze a network of recurrently connected spiking neurons with time delay displaying stable synchronous dynamics. Using mean-field and stability analyses, we investigate the influence of dynamic inputs on the frequency of firing rate oscillations. We show that afferent noise, mimicking inputs to the neurons, causes smoothing of the system's response function, displacing equilibria and altering the stability of oscillatory states. Our analysis further shows that these noise-induced changes cause a shift of the peak frequency of synchronous oscillations that scales with input intensity, leading the network towards critical states. We lastly discuss the extension of these principles to periodic stimulation, in which externally applied driving signals can trigger analogous phenomena. Our results reveal one possible mechanism involved in shaping oscillatory activity in the brain and associated control principles.

  2. Messaging Performance of FIPA Interaction Protocols in Networked Embedded Controllers

    Directory of Open Access Journals (Sweden)

    José A. Pérez García

    2008-01-01

    Full Text Available Agent-based technologies in production control systems could facilitate seamless reconfiguration and integration of mechatronic devices/modules into systems. Advances in embedded controllers which are continuously improving computational capabilities allow for software modularization and distribution of decisions. Agent platforms running on embedded controllers could hide the complexity of bootstrap and communication. Therefore, it is important to investigate the messaging performance of the agents whose main motivation is the resource allocation in manufacturing systems (i.e., conveyor system). The tests were implemented using the FIPA-compliant JADE-LEAP agent platform. Agent containers were distributed through networked embedded controllers, and agents were communicating using request and contract-net FIPA interaction protocols. The test scenarios are organized in intercontainer and intracontainer communications. The work shows the messaging performance for the different test scenarios using both interaction protocols.

  3. Messaging Performance of FIPA Interaction Protocols in Networked Embedded Controllers

    Directory of Open Access Journals (Sweden)

    Omar Jehovani López Orozco

    2007-12-01

    Full Text Available Agent-based technologies in production control systems could facilitate seamless reconfiguration and integration of mechatronic devices/modules into systems. Advances in embedded controllers which are continuously improving computational capabilities allow for software modularization and distribution of decisions. Agent platforms running on embedded controllers could hide the complexity of bootstrap and communication. Therefore, it is important to investigate the messaging performance of the agents whose main motivation is the resource allocation in manufacturing systems (i.e., conveyor system). The tests were implemented using the FIPA-compliant JADE-LEAP agent platform. Agent containers were distributed through networked embedded controllers, and agents were communicating using request and contract-net FIPA interaction protocols. The test scenarios are organized in intercontainer and intracontainer communications. The work shows the messaging performance for the different test scenarios using both interaction protocols.

  4. Low-complexity object detection with deep convolutional neural network for embedded systems

    Science.gov (United States)

    Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong

    2017-09-01

    We investigate low-complexity convolutional neural networks (CNNs) for object detection in embedded vision applications. It is well known that building an embedded system for CNN-based object detection is more challenging than for problems like image classification because of the computation and memory requirements. To meet these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully-convolutional deep neural network for the generic object detection task, inspired by one of the fastest frameworks, YOLO [1]. The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO. Hence, the network is able to detect objects without any limitation on their size. However, unlike YOLO, all the layers in the proposed network are fully convolutional, so it can take input images of any size. We pick face detection as a use case and evaluate the proposed model on the FDDB and Widerface datasets. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves accuracy comparable to state-of-the-art CNN-based object detection methods while reducing the model size by 3× and the memory bandwidth by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF model provides an additional 4× memory reduction while keeping the accuracy nearly as good as the floating-point model. Moreover, the fixed-point model achieves 20× faster inference than the floating-point model. Thus, the proposed method is promising for embedded implementations.
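
    The fully-convolutional idea can be sketched with a tiny tf.keras model in which every layer is convolutional, so an input of any size maps to a grid of per-cell box predictions. The layer widths, the 16x downsampling and the five-value output per cell are illustrative assumptions and do not reproduce the architecture of the record.

        import tensorflow as tf

        # Minimal fully-convolutional detection head: no dense layers, so arbitrary
        # input sizes yield a proportionally sized prediction grid of
        # [x, y, w, h, objectness] per cell (all sizes are assumptions).
        def tiny_fcn_detector(channels=(16, 32, 64, 128)):
            inp = tf.keras.Input(shape=(None, None, 3))        # arbitrary input size
            x = inp
            for ch in channels:
                x = tf.keras.layers.Conv2D(ch, 3, padding="same", activation="relu")(x)
                x = tf.keras.layers.MaxPooling2D(2)(x)         # downsample by 2 per stage
            out = tf.keras.layers.Conv2D(5, 1)(x)              # per-cell box + score
            return tf.keras.Model(inp, out)

        model = tiny_fcn_detector()
        print(model(tf.zeros((1, 256, 320, 3))).shape)         # -> (1, 16, 20, 5)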

  5. Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems

    Directory of Open Access Journals (Sweden)

    Emre eNeftci

    2014-01-01

    Full Text Available Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The reverberating activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.

  6. Event-driven contrastive divergence for spiking neuromorphic systems.

    Science.gov (United States)

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2013-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
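
    The online weight updates rely on spike-timing-dependent plasticity. A generic pair-based STDP rule with exponential traces is sketched below for illustration; the trace time constant, learning rates and weight bounds are assumptions, and the event-driven CD machinery itself is not reproduced.

        import numpy as np

        def stdp_update(pre_spikes, post_spikes, T=1000.0, dt=1.0,
                        a_plus=0.01, a_minus=0.012, tau=20.0, w0=0.5):
            """Online pair-based STDP with exponential traces: potentiate on a post
            spike in proportion to the presynaptic trace, depress on a pre spike in
            proportion to the postsynaptic trace. A generic STDP sketch, not the
            event-driven CD algorithm of the record itself."""
            w, x_pre, x_post = w0, 0.0, 0.0
            pre, post = set(pre_spikes), set(post_spikes)
            for t in np.arange(0.0, T, dt):
                x_pre *= np.exp(-dt / tau)
                x_post *= np.exp(-dt / tau)
                if t in pre:
                    x_pre += 1.0
                    w -= a_minus * x_post            # pre after post -> depression
                if t in post:
                    x_post += 1.0
                    w += a_plus * x_pre              # post after pre -> potentiation
                w = min(max(w, 0.0), 1.0)            # keep the weight bounded
            return w

        # causal pairing (pre 5 ms before post) strengthens the synapse
        print(stdp_update(pre_spikes=range(0, 1000, 50),
                          post_spikes=range(5, 1000, 50)))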

  7. Implementing 802.11 with microcontrollers wireless networking for embedded systems designers

    CERN Document Server

    Eady, Fred

    2005-01-01

    Wireless networking is poised to have a massive impact on communications, and the 802.11 standard is to wireless networking what Ethernet is to wired networking. There are already over 50 million devices using the dominant IEEE 802.11 (essentially wireless Ethernet) standard, with astronomical growth predicted over the next 10 years. New applications are emerging every day, with wireless capability being embedded in everything from electric meters to hospital patient tracking systems to security devices. This practical reference guides readers through the wireless technology forest, gi

  8. Wireless and embedded carbon nanotube networks for damage detection in concrete structures

    International Nuclear Information System (INIS)

    Saafi, Mohamed

    2009-01-01

    Concrete structures undergo an uncontrollable damage process manifesting in the form of cracks due to the coupling of fatigue loading and environmental effects. In order to achieve long-term durability and performance, continuous health monitoring systems are needed to make critical decisions regarding operation, maintenance and repairs. Recent advances in nanostructured materials such as carbon nanotubes have opened the door for new smart and advanced sensing materials that could effectively be used in health monitoring of structures where wireless and real time sensing could provide information on damage development. In this paper, carbon nanotube networks were embedded into a cement matrix to develop an in situ wireless and embedded sensor for damage detection in concrete structures. By wirelessly measuring the change in the electrical resistance of the carbon nanotube networks, the progress of damage can be detected and monitored. As a proof of concept, wireless cement-carbon nanotube sensors were embedded into concrete beams and subjected to monotonic and cyclic loading to evaluate the effect of damage on their response. Experimental results showed that the wireless response of the embedded nanotube sensors changes due to the formation of cracks during loading. In addition, the nanotube sensors were able to detect the initiation of damage at an early stage of loading.

  9. Wireless and embedded carbon nanotube networks for damage detection in concrete structures

    Science.gov (United States)

    Saafi, Mohamed

    2009-09-01

    Concrete structures undergo an uncontrollable damage process manifesting in the form of cracks due to the coupling of fatigue loading and environmental effects. In order to achieve long-term durability and performance, continuous health monitoring systems are needed to make critical decisions regarding operation, maintenance and repairs. Recent advances in nanostructured materials such as carbon nanotubes have opened the door for new smart and advanced sensing materials that could effectively be used in health monitoring of structures where wireless and real time sensing could provide information on damage development. In this paper, carbon nanotube networks were embedded into a cement matrix to develop an in situ wireless and embedded sensor for damage detection in concrete structures. By wirelessly measuring the change in the electrical resistance of the carbon nanotube networks, the progress of damage can be detected and monitored. As a proof of concept, wireless cement-carbon nanotube sensors were embedded into concrete beams and subjected to monotonic and cyclic loading to evaluate the effect of damage on their response. Experimental results showed that the wireless response of the embedded nanotube sensors changes due to the formation of cracks during loading. In addition, the nanotube sensors were able to detect the initiation of damage at an early stage of loading.

  10. Quantitative Analysis of Signaling Networks across Differentially Embedded Tumors Highlights Interpatient Heterogeneity in Human Glioblastoma

    Science.gov (United States)

    2015-01-01

    Glioblastoma multiforme (GBM) is the most aggressive malignant primary brain tumor, with a dismal mean survival even with the current standard of care. Although in vitro cell systems can provide mechanistic insight into the regulatory networks governing GBM cell proliferation and migration, clinical samples provide a more physiologically relevant view of oncogenic signaling networks. However, clinical samples are not widely available and may be embedded for histopathologic analysis. With the goal of accurately identifying activated signaling networks in GBM tumor samples, we investigated the impact of embedding in optimal cutting temperature (OCT) compound followed by flash freezing in LN2 vs immediate flash freezing (iFF) in LN2 on protein expression and phosphorylation-mediated signaling networks. Quantitative proteomic and phosphoproteomic analysis of 8 pairs of tumor specimens revealed minimal impact of the different sample processing strategies and highlighted the large interpatient heterogeneity present in these tumors. Correlation analyses of the differentially processed tumor sections identified activated signaling networks present in selected tumors and revealed the differential expression of transcription, translation, and degradation associated proteins. This study demonstrates the capability of quantitative mass spectrometry for identification of in vivo oncogenic signaling networks from human tumor specimens that were either OCT-embedded or immediately flash-frozen. PMID:24927040

  11. Visual language recognition with a feed-forward network of spiking neurons

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, Craig E [Los Alamos National Laboratory]; Garrett, Kenyan [Los Alamos National Laboratory]; Sottile, Matthew [GALOIS]; Shreyas, Ns [INDIANA UNIV.]

    2010-01-01

    An analogy is made and exploited between the recognition of visual objects and language parsing. A subset of regular languages is used to define a one-dimensional 'visual' language, in which the words are translational and scale invariant. This allows an exploration of the viewpoint invariant languages that can be solved by a network of concurrent, hierarchically connected processors. A language family is defined that is hierarchically tiling system recognizable (HREC). As inspired by nature, an algorithm is presented that constructs a cellular automaton that recognizes strings from a language in the HREC family. It is demonstrated how a language recognizer can be implemented from the cellular automaton using a feed-forward network of spiking neurons. This parser recognizes fixed-length strings from the language in parallel and as the computation is pipelined, a new string can be parsed in each new interval of time. The analogy with formal language theory allows inferences to be drawn regarding what class of objects can be recognized by visual cortex operating in purely feed-forward fashion and what class of objects requires a more complicated network architecture.

  12. The effect of an exogenous magnetic field on neural coding in deep spiking neural networks.

    Science.gov (United States)

    Guo, Lei; Zhang, Wei; Zhang, Jialei

    2018-01-01

    A ten-layer feed-forward network is constructed in the presence of an exogenous alternating magnetic field. Specifically, our results indicate that for rate coding, the firing rate is significantly increased in the presence of the exogenous alternating magnetic field, and it increases further as the amplitude of the field is enhanced. For temporal coding, the interspike intervals of the spiking sequence are decreased and their distribution tends to become uniform in the presence of the alternating magnetic field.
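
    The two readouts mentioned above, the firing rate for rate coding and the inter-spike-interval distribution for temporal coding, can be computed from a spike train as sketched below. The toy trains only qualitatively mimic the reported trend; they are assumptions, not data from the record.

        import numpy as np

        def coding_summary(spike_times, T):
            """Firing rate (rate code) and inter-spike-interval statistics (temporal
            code) for one spike train; an illustrative companion to the record."""
            spike_times = np.sort(np.asarray(spike_times, float))
            isis = np.diff(spike_times)
            return {"rate_hz": len(spike_times) / T,
                    "mean_isi_ms": 1e3 * isis.mean(),
                    "isi_cv": isis.std() / isis.mean()}   # low CV = near-uniform ISIs

        # toy comparison: irregular baseline vs a faster, more regular train,
        # qualitatively mimicking the reported effect of the magnetic field
        rng = np.random.default_rng(0)
        baseline = np.cumsum(rng.exponential(0.05, 100))          # ~20 Hz, irregular
        with_field = np.cumsum(rng.normal(0.02, 0.002, 250))      # ~50 Hz, near-regular
        for name, train in [("baseline", baseline), ("with field", with_field)]:
            print(name, coding_summary(train, T=train[-1]))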

  13. Computing with networks of spiking neurons on a biophysically motivated floating-gate based neuromorphic integrated circuit.

    Science.gov (United States)

    Brink, S; Nease, S; Hasler, P

    2013-09-01

    Results are presented from several spiking network experiments performed on a novel neuromorphic integrated circuit. The networks are discussed in terms of their computational significance, which includes applications such as arbitrary spatiotemporal pattern generation and recognition, winner-take-all competition, stable generation of rhythmic outputs, and volatile memory. Analogies to the behavior of real biological neural systems are also noted. The alternatives for implementing the same computations are discussed and compared from a computational efficiency standpoint, with the conclusion that implementing neural networks on neuromorphic hardware is significantly more power efficient than numerical integration of model equations on traditional digital hardware. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. The behaviour of basic autocatalytic signalling modules in isolation and embedded in networks

    Energy Technology Data Exchange (ETDEWEB)

    Krishnan, J. [Department of Chemical Engineering, Centre for Process Systems Engineering, Institute for Systems and Synthetic Biology, Imperial College London, London SW7 2AZ (United Kingdom)]; Mois, Kristina; Suwanmajo, Thapanar [Department of Chemical Engineering, Centre for Process Systems Engineering, Imperial College London, London SW7 2AZ (United Kingdom)]

    2014-11-07

    In this paper, we examine the behaviour of basic autocatalytic feedback modules involving a species catalyzing its own production, either directly or indirectly. We first perform a systematic study of the autocatalytic feedback module in isolation, examining the effect of different factors, showing how this module is capable of exhibiting monostable threshold and bistable switch-like behaviour. We then study the behaviour of this module embedded in different kinds of basic networks including (essentially) irreversible cycles, open and closed reversible chains, and networks with additional feedback. We study the behaviour of the networks deterministically and also stochastically, using simulations, analytical work, and bifurcation analysis. We find that (i) there are significant differences between the behaviour of this module in isolation and in a network: thresholds may be altered or destroyed and bistability may be destroyed or even induced, even when the ambient network is simple. The global characteristics and topology of this network and the position of the module in the ambient network can play important and unexpected roles. (ii) There can be important differences between the deterministic and stochastic dynamics of the module embedded in networks, which may be accentuated by the ambient network. This provides new insights into the functioning of such enzymatic modules individually and as part of networks, with relevance to other enzymatic signalling modules as well.

  15. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models, also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.
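
    For reference, the two canonical forms involved in this chain of transformations can be written, in generic notation (the symbols below are ours, not taken from the paper), as

        \dot{x}_i = x_i \sum_{j=1}^{m} B_{ij} \prod_{k=1}^{n} x_k^{C_{jk}}            (quasi-monomial form)

        \dot{y}_i = y_i \Big( \lambda_i + \sum_{j=1}^{N} A_{ij}\, y_j \Big)            (Lotka-Volterra form)

    where the neural-network equations are first brought into the quasi-monomial form and then expanded into the higher-dimensional Lotka-Volterra form.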

  16. A FPGA Embedded Web Server for Remote Monitoring and Control of Smart Sensors Networks

    Science.gov (United States)

    Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique

    2014-01-01

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general purpose and configurable RISC processor which is embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server of dynamic pages using Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a totally functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and another that controls the network. This protocol is widely used for connecting smart sensors, actuators and microsystems in embedded real-time systems in different application domains, e.g., industrial, automotive, domotic, etc., although it can easily be replaced by any other protocol because of the inherent characteristics of the FPGA-based technology. PMID:24379047

  17. A FPGA embedded web server for remote monitoring and control of smart sensors networks.

    Science.gov (United States)

    Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique

    2013-12-27

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose, configurable RISC processor embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server of dynamic pages using the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as the Internet interface, and a second interface to control the network. This protocol is widely used to connect smart sensors, actuators and microsystems in embedded real-time systems in different application domains, e.g., industrial, automotive, domotic, etc., although it can easily be replaced by any other because of the inherent characteristics of FPGA-based technology.

  18. A FPGA Embedded Web Server for Remote Monitoring and Control of Smart Sensors Networks

    Directory of Open Access Journals (Sweden)

    Eduardo Magdaleno

    2013-12-01

    Full Text Available This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose, configurable RISC processor embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server of dynamic pages using the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as the Internet interface, and a second interface to control the network. This protocol is widely used to connect smart sensors, actuators and microsystems in embedded real-time systems in different application domains, e.g., industrial, automotive, domotic, etc., although it can easily be replaced by any other because of the inherent characteristics of FPGA-based technology.

  19. Poisson-Like Spiking in Circuits with Probabilistic Synapses

    Science.gov (United States)

    Moreno-Bote, Rubén

    2014-01-01

    Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over a broad range of firing rates, from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705
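
    A rough, self-contained illustration of the mechanism (not the authors' network model) is sketched below: a leaky integrate-and-fire neuron is driven through synapses that transmit each presynaptic spike only with probability p_release, and the variability of its output inter-spike intervals is reported. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy parameters (arbitrary, chosen only for illustration) ---
dt, T = 1e-3, 20.0             # time step (s), simulated time (s)
n_syn, rate = 200, 10.0        # number of synapses, presynaptic rate (Hz)
p_release = 0.3                # probability that a presynaptic spike is transmitted
tau_m, v_th, v_reset = 0.02, 1.0, 0.0
w = 0.08                       # synaptic weight (dimensionless)

v, spike_times = 0.0, []
for step in range(int(T / dt)):
    pre = rng.random(n_syn) < rate * dt                 # presynaptic spikes this step
    released = pre & (rng.random(n_syn) < p_release)    # probabilistic transmission
    v += dt * (-v / tau_m) + w * released.sum()         # leaky integration + input
    if v >= v_th:
        spike_times.append(step * dt)
        v = v_reset

isi = np.diff(spike_times)
print(f"{len(spike_times)} output spikes, ISI CV = {isi.std() / isi.mean():.2f}")
```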

  20. Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights.

    Science.gov (United States)

    Samadi, Arash; Lillicrap, Timothy P; Tweed, Douglas B

    2017-03-01

    Recent work in computer science has shown the power of deep learning driven by the backpropagation algorithm in networks of artificial neurons. But real neurons in the brain are different from most of these artificial ones in at least three crucial ways: they emit spikes rather than graded outputs, their inputs and outputs are related dynamically rather than by piecewise-smooth functions, and they have no known way to coordinate arrays of synapses in separate forward and feedback pathways so that they change simultaneously and identically, as they do in backpropagation. Given these differences, it is unlikely that current deep learning algorithms can operate in the brain, but we show that these problems can be solved by two simple devices: learning rules can approximate dynamic input-output relations with piecewise-smooth functions, and a variation on the feedback alignment algorithm can train deep networks without having to coordinate forward and feedback synapses. Our results also show that deep spiking networks learn much better if each neuron computes an intracellular teaching signal that reflects that cell's nonlinearity. With this mechanism, networks of spiking neurons show useful learning in synapses at least nine layers upstream from the output cells and perform well compared to other spiking networks in the literature on the MNIST digit recognition task.
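
    The second device mentioned above, replacing the transposed forward weights with fixed random feedback matrices, can be sketched for a rate-based toy network in a few lines; the code below is illustrative only (the paper's version operates on spiking, dynamic neurons) and uses made-up data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data (made up for illustration).
X = rng.standard_normal((256, 20))
Y = np.sin(X @ rng.standard_normal((20, 5)))

# Two-layer network: forward weights W1, W2; fixed random feedback matrix B2.
W1 = 0.1 * rng.standard_normal((20, 64))
W2 = 0.1 * rng.standard_normal((64, 5))
B2 = 0.1 * rng.standard_normal((5, 64))   # replaces W2.T in the backward pass

lr = 0.05
for epoch in range(500):
    # forward pass
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    e = y_hat - Y                          # output error

    # backward pass: the hidden error is routed through the FIXED matrix B2,
    # not through W2.T as exact backpropagation would require.
    dh = (e @ B2) * (1.0 - h**2)

    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ dh / len(X)

print("final MSE:", float(np.mean(e**2)))
```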

  1. A study of epileptogenic network structures in rat hippocampal cultures using first spike latencies during synchronization events

    International Nuclear Information System (INIS)

    Raghavan, Mohan; Amrutur, Bharadwaj; Srinivas, Kalyan V; Sikdar, Sujit K

    2012-01-01

    Study of hypersynchronous activity is of prime importance for combating epilepsy. Studies on network structure typically reconstruct the network by measuring various aspects of the interaction between neurons and subsequently measure the properties of the reconstructed network. In sub-sampled networks such methods lead to significant errors in reconstruction. Using rat hippocampal neurons cultured on a multi-electrode array dish and a glutamate injury model of epilepsy in vitro, we studied synchronous activity in neuronal networks. Using the first spike latencies in various neurons during a network burst, we extract various recurring spatio-temporal onset patterns in the networks. Comparing the patterns seen in control and injured networks, we observe that injured networks express a wide diversity in their foci (origin) and activation pattern, while control networks show limited diversity. Furthermore, we note that onset patterns in glutamate injured networks show a positive correlation between synchronization delay and physical distance between neurons, while control networks do not. (paper)

  2. Distribution of orientation selectivity in recurrent networks of spiking neurons with different random topologies.

    Science.gov (United States)

    Sadeh, Sadra; Rotter, Stefan

    2014-01-01

    Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity.

  3. SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.

    Science.gov (United States)

    Jimenez-Romero, Cristian; Johnson, Jeffrey

    2017-01-01

    The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-and-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time-consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent and (4) programming the appropriate interface in the robot or agent to use the neural controller. The accomplishment of the above-mentioned tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using the multi-agent simulation and programming environment Netlogo (educational software that simplifies the study and experimentation of complex systems). The engine proposed and implemented in Netlogo for the simulation of a functional SNN model is a simplification of integrate-and-fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.

  4. Spike Neural Models Part II: Abstract Neural Models

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2018-02-01

    Full Text Available Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNN), though, not all that complexity is required. Therefore simple, abstract models are often used. These models save time, use fewer computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate and Fire (LIF) model, which is not biologically realistic but does quickly and easily integrate input to produce spikes. Izhikevich's model is based on the Hodgkin-Huxley model but simplified such that it uses only two differential equations and four parameters to produce various realistic spike patterns. LIF is based on a standard electrical circuit and contains one equation. Either of these two models, or any of the many other models in the literature, can be used in an SNN. Choosing a neural model is an important task that depends on the goal of the research and the resources available. Once a model is chosen, network decisions such as connectivity, delay, and sparseness need to be made. Understanding neural models and how they are incorporated into the network is the first step in creating an SNN.
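
    For concreteness, a minimal forward-Euler simulation of Izhikevich's two-equation model with the standard regular-spiking parameter set (a, b, c, d) = (0.02, 0.2, -65, 8) is sketched below; this is a generic textbook implementation, not code from the tutorial.

```python
# Izhikevich (2003) model, regular-spiking parameters, forward Euler.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt = 0.5                      # ms
T = 1000.0                    # total simulated time (ms)
I = 10.0                      # constant input current (arbitrary units)

v, u = -65.0, b * -65.0
spike_times = []
for step in range(int(T / dt)):
    # membrane potential and recovery variable
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:             # spike: record, then reset v and bump u
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes; mean rate = {len(spike_times) / (T / 1000.0):.1f} Hz")
```

    Swapping in other parameter quadruples (fast spiking, chattering, etc.) changes the firing pattern without touching the equations, which is the main appeal of this abstract model.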

  5. Spiking and bursting patterns of fractional-order Izhikevich model

    Science.gov (United States)

    Teka, Wondimu W.; Upadhyay, Ranjit Kumar; Mondal, Argha

    2018-03-01

    Bursting and spiking oscillations play major roles in processing and transmitting information in the brain through cortical neurons that respond differently to the same signal. These oscillations display complex dynamics that might be produced by using neuronal models and varying many model parameters. Recent studies have shown that models with fractional order can produce several types of history-dependent neuronal activities without the adjustment of several parameters. We studied the fractional-order Izhikevich model and analyzed different kinds of oscillations that emerge from the fractional dynamics. The model produces a wide range of neuronal spike responses, including regular spiking, fast spiking, intrinsic bursting, mixed-mode oscillations, regular bursting and chattering, by adjusting only the fractional order. Both the active and silent phases of the burst increase when the fractional-order model further deviates from the classical model. For smaller fractional orders, the model produces memory-dependent spiking activity after the pulse signal is turned off. This special spiking activity and other properties of the fractional-order model are caused by the memory trace that emerges from the fractional-order dynamics and integrates all the past activities of the neuron. On the network level, the response of the neuronal network shifts from random to scale-free spiking. Our results suggest that the complex dynamics of spiking and bursting can be the result of the long-term dependence and interaction of intracellular and extracellular ionic currents.

  6. Emergent properties of interacting populations of spiking neurons

    Directory of Open Access Journals (Sweden)

    Stefano eCardanobile

    2011-12-01

    Full Text Available Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks on the population level is faithfully reflected by a set of non-linear rate equations, describing all interactions on this level. These equations, in turn, are similar in structure to the Lotka-Volterra equations, well known by their use in modeling predator-prey relationships in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of neural populations.

  7. Emergent properties of interacting populations of spiking neurons.

    Science.gov (United States)

    Cardanobile, Stefano; Rotter, Stefan

    2011-01-01

    Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known by their use in modeling predator-prey relations in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.

  8. A Spike Neural Controller for Traffic Load Parameter with Priority-Based Rate in Wireless Multimedia Sensor Networks

    Directory of Open Access Journals (Sweden)

    Nadia Adnan Shiltagh

    2015-11-01

    Full Text Available Wireless Multimedia Sensor Networks (WMSNs) are a type of sensor network containing sensor nodes equipped with cameras and microphones; WMSNs are therefore able to produce multimedia data such as video and audio streams, still images, and scalar data from the surrounding environment. Most multimedia applications typically produce huge volumes of data, which leads to congestion. To address this challenge, this paper proposes a Modified Spike Neural Network control for the Traffic Load Parameter with an Exponential Weight of Priority Based Rate Control algorithm (MSNTLP with EWBPRC). The Modified Spike Neural Network controller (MSNC) calculates the appropriate traffic load parameter μ for each parent node, which is then used in the EWBPRC algorithm to estimate the transmission rate of parent nodes and assign a suitable transmission rate to each child node. A comparative study between MSNTLP with EWBPRC and a fuzzy logic controller for the traffic load parameter with the Exponential Weight of Priority Based Rate Control algorithm (FTLP with EWBPRC) shows that MSNTLP with EWBPRC is more efficient than FTLP with EWBPRC in terms of packet loss, queue delay and throughput. Another comparative study between MSNTLP with EWBPRC and EWBPRC with a fixed traffic load parameter (μ) shows that MSNTLP with EWBPRC is more efficient in terms of packet loss ratio and queue delay. A simulation process is developed and tested using the network simulator 2 (NS2) on a computer with the following properties: Windows 7 (64-bit), Core i7, 8 GB RAM, 1 TB hard disk.

  9. Extraction of Inter-Aural Time Differences Using a Spiking Neuron Network Model of the Medial Superior Olive

    Directory of Open Access Journals (Sweden)

    Jörg Encke

    2018-03-01

    Full Text Available The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. One important cue for the localization of low-frequency sound sources in the horizontal plane is the inter-aural time difference (ITD), which is first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference in the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically-plausible, spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neuronal network to predict ITDs directly from the spiking output of the MSO and ANF model. Using this predictor, we show that the MSO-network is able to reliably encode static and time-dependent ITDs over a large frequency range, also for complex signals like speech.
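
    The linear opponent-channel readout can be illustrated schematically: the ITD estimate is obtained from the difference between the summed rates of the two hemispheric populations through a fitted linear map. The sketch below substitutes synthetic sigmoidal ITD-rate functions for the network output and is not the authors' decoder; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "MSO" rates: left and right populations with opposite, roughly
# sigmoidal ITD-rate functions plus Poisson noise (stand-ins for the model output).
itds = np.linspace(-500e-6, 500e-6, 41)               # ITDs in seconds
rate_left = 50.0 / (1.0 + np.exp(-itds / 150e-6))
rate_right = 50.0 / (1.0 + np.exp(+itds / 150e-6))
trials = 200
obs_l = rng.poisson(rate_left, size=(trials, itds.size))
obs_r = rng.poisson(rate_right, size=(trials, itds.size))

# Opponent channel: the decoded cue is the rate difference; fit a linear map
# from the difference to ITD on half the trials, evaluate on the other half.
diff = (obs_l - obs_r).astype(float)
train, test = diff[:100], diff[100:]
slope, intercept = np.polyfit(train.ravel(), np.tile(itds, 100), 1)
est = slope * test.ravel() + intercept
err = est - np.tile(itds, 100)
print(f"decoder RMSE ~ {np.sqrt(np.mean(err**2)) * 1e6:.0f} microseconds")
```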

  10. Integrated workflows for spiking neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Ján eAntolík

    2013-12-01

    Full Text Available The increasing availability of computational resources is enabling more detailed, realistic modelling in computational neuroscience, resulting in a shift towards more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeller's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modellers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualisation into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organised configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualisation stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modelling studies by relieving the user from manual handling of the flow of metadata between the individual workflow components.

  11. Crosstalk-aware virtual network embedding over inter-datacenter optical networks with few-mode fibers

    Science.gov (United States)

    Huang, Haibin; Guo, Bingli; Li, Xin; Yin, Shan; Zhou, Yu; Huang, Shanguo

    2017-12-01

    Virtualization of datacenter (DC) infrastructures enables infrastructure providers (InPs) to provide novel services like virtual networks (VNs). Furthermore, optical networks have been employed to connect the metro-scale geographically distributed DCs. The synergistic virtualization of the DC infrastructures and optical networks enables efficient VN service over inter-DC optical networks (inter-DCONs). However, the capacity of the standard single-mode fibers (SSMFs) used is limited by their nonlinear characteristics, so mode-division multiplexing (MDM) technology based on few-mode fibers (FMFs) can be employed to increase the capacity of optical networks. At the same time, modal crosstalk (XT) introduced by the optical fibers and components deployed in MDM optical networks impacts the performance of VN embedding (VNE) over inter-DCONs with FMFs. In this paper, we propose an XT-aware VNE mechanism over inter-DCONs with FMFs. The impact of XT is considered throughout the VNE procedures. The simulation results show that the proposed XT-aware VNE achieves better blocking probability and spectrum utilization than conventional VNE mechanisms.

  12. Persistence and storage of activity patterns in spiking recurrent cortical networks: modulation of sigmoid signals by after-hyperpolarization currents and acetylcholine.

    Science.gov (United States)

    Palma, Jesse; Grossberg, Stephen; Versace, Massimiliano

    2012-01-01

    Many cortical networks contain recurrent architectures that transform input patterns before storing them in short-term memory (STM). Theorems in the 1970s showed how feedback signal functions in rate-based recurrent on-center off-surround networks control this process. A sigmoid signal function induces a quenching threshold below which inputs are suppressed as noise and above which they are contrast-enhanced before pattern storage. This article describes how changes in feedback signaling, neuromodulation, and recurrent connectivity may alter pattern processing in recurrent on-center off-surround networks of spiking neurons. In spiking neurons, fast, medium, and slow after-hyperpolarization (AHP) currents control sigmoid signal threshold and slope. Modulation of AHP currents by acetylcholine (ACh) can change sigmoid shape and, with it, network dynamics. For example, decreasing signal function threshold and increasing slope can lengthen the persistence of a partially contrast-enhanced pattern, increase the number of active cells stored in STM, or, if connectivity is distance-dependent, cause cell activities to cluster. These results clarify how cholinergic modulation by the basal forebrain may alter the vigilance of category learning circuits, and thus their sensitivity to predictive mismatches, thereby controlling whether learned categories code concrete or abstract features, as predicted by Adaptive Resonance Theory. The analysis includes global, distance-dependent, and interneuron-mediated circuits. With an appropriate degree of recurrent excitation and inhibition, spiking networks maintain a partially contrast-enhanced pattern for 800 ms or longer after stimuli offset, then resolve to no stored pattern, or to winner-take-all (WTA) stored patterns with one or multiple winners. Strengthening inhibition prolongs a partially contrast-enhanced pattern by slowing the transition to stability, while strengthening excitation causes more winners.

  13. Persistence and storage of activity patterns in spiking recurrent cortical networks:Modulation of sigmoid signals by after-hyperpolarization currents and acetylcholine

    Directory of Open Access Journals (Sweden)

    Jesse ePalma

    2012-06-01

    Full Text Available Many cortical networks contain recurrent architectures that transform input patterns before storing them in short-term memory (STM). Theorems in the 1970s showed how feedback signal functions in rate-based recurrent on-center off-surround networks control this process. A sigmoid signal function induces a quenching threshold below which inputs are suppressed as noise and above which they are contrast-enhanced before pattern storage. This article describes how changes in feedback signaling, neuromodulation, and recurrent connectivity may alter pattern processing in recurrent on-center off-surround networks of spiking neurons. In spiking neurons, fast, medium, and slow after-hyperpolarization (AHP) currents control sigmoid signal threshold and slope. Modulation of AHP currents by acetylcholine (ACh) can change sigmoid shape and, with it, network dynamics. For example, decreasing signal function threshold and increasing slope can lengthen the persistence of a partially contrast-enhanced pattern, increase the number of active cells stored in STM, or, if connectivity is distance-dependent, cause cell activities to cluster. These results clarify how cholinergic modulation by the basal forebrain may alter the vigilance of category learning circuits, and thus their sensitivity to predictive mismatches, thereby controlling whether learned categories code concrete or abstract features, as predicted by Adaptive Resonance Theory. The analysis includes global, distance-dependent, and interneuron-mediated circuits. With an appropriate degree of recurrent excitation and inhibition, spiking networks maintain a partially contrast-enhanced pattern for 800 milliseconds or longer after stimuli offset, then resolve to no stored pattern, or to winner-take-all stored patterns with one or multiple winners. Strengthening inhibition prolongs a partially contrast-enhanced pattern by slowing the transition to stability, while strengthening excitation causes more winners.

  14. Spike Bursts from an Excitable Optical System

    Science.gov (United States)

    Rios Leite, Jose R.; Rosero, Edison J.; Barbosa, Wendson A. S.; Tredicce, Jorge R.

    Diode lasers with double optical feedback are shown to present power-drop spikes with a statistical distribution controllable by the ratio of the two feedback times. The average time between spikes and the variance within long time series are studied. The system is shown to be excitable and to present bursting of spikes created with specific feedback time ratios and strengths. A rate equation model, extending the Lang-Kobayashi single-feedback model for semiconductor lasers, matches the experimental observations. Potential applications to constructing networks that mimic neural systems with controlled bursting properties in each unit are discussed. Supported by the Brazilian agency CNPq.

  15. Just-in-time connectivity for large spiking networks.

    Science.gov (United States)

    Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L

    2008-11-01

    The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
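
    The space-for-time trade explored in the paper can be illustrated by regenerating a presynaptic cell's outgoing connections from a deterministic per-cell seed whenever that cell fires, instead of storing them; the sketch below is only a schematic of that idea, not the NEURON JitCon/JitEvent implementation, and all sizes and distributions are invented.

```python
import numpy as np

N_CELLS = 100_000          # network size (illustrative)
FAN_OUT = 1_000            # connections per presynaptic cell (illustrative)
MASTER_SEED = 42

def outgoing_connections(pre_id: int):
    """Regenerate (targets, delays, weights) for one cell, on demand.

    Because the RNG is seeded from (MASTER_SEED, pre_id), the same connectivity
    is reproduced every time the cell spikes, so nothing is stored between spikes.
    """
    rng = np.random.default_rng((MASTER_SEED, pre_id))
    targets = rng.integers(0, N_CELLS, size=FAN_OUT)
    delays = rng.uniform(0.5, 5.0, size=FAN_OUT)      # ms
    weights = rng.normal(0.5, 0.1, size=FAN_OUT)
    return targets, delays, weights

# When cell 17 fires at time t, its synaptic events are generated just in time:
t = 12.5                                               # ms, example spike time
targets, delays, weights = outgoing_connections(17)
events = sorted(zip(t + delays, targets, weights))     # (delivery time, target, weight)
print("first event:", events[0])
```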

  16. Detecting dependencies between spike trains of pairs of neurons through copulas

    DEFF Research Database (Denmark)

    Sacerdote, Laura; Tamborrino, Massimiliano; Zucca, Cristina

    2011-01-01

    The dynamics of a neuron are influenced by the connections with the network where it lies. Recorded spike trains exhibit patterns due to the interactions between neurons. However, the structure of the network is not known. A challenging task is to investigate it from the analysis of simultaneously...... the two neurons. Furthermore, the method recognizes the presence of delays in the spike propagation....

  17. A low complexity method for the optimization of network path length in spatially embedded networks

    International Nuclear Information System (INIS)

    Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li; Ming, Yong; Chen, Sheng-Yong; Wang, Wan-Liang

    2014-01-01

    The average path length of a network is an important index reflecting the network transmission efficiency. In this paper, we propose a new method of decreasing the average path length by adding edges. A new indicator is presented, incorporating traffic flow demand, to assess the decrease in the average path length when a new edge is added during the optimization process. With the help of the indicator, edges are selected and added into the network one by one. The new method has a relatively low computational time complexity in comparison with some traditional methods. In numerical simulations, the new method is applied to some synthetic spatially embedded networks. The results show that the method performs competitively in decreasing the average path length. Then, as an example of an application of this new method, it is applied to the road network of Hangzhou, China. (paper)
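
    A brute-force greedy baseline for the same task, without the paper's traffic-flow-aware indicator, can be written directly with networkx as sketched below: at each step the candidate edge that most reduces the average shortest path length is added. It is intended only as a reference point for the approach described above.

```python
import itertools
import networkx as nx

def greedy_add_edges(G: nx.Graph, k: int) -> nx.Graph:
    """Add k edges, each time choosing the one that most reduces the average
    shortest path length (brute-force baseline, O(n^2) candidates per step)."""
    G = G.copy()
    for _ in range(k):
        best_edge, best_len = None, nx.average_shortest_path_length(G)
        for u, v in itertools.combinations(G.nodes, 2):
            if G.has_edge(u, v):
                continue
            G.add_edge(u, v)
            length = nx.average_shortest_path_length(G)
            if length < best_len:
                best_edge, best_len = (u, v), length
            G.remove_edge(u, v)
        if best_edge is None:
            break
        G.add_edge(*best_edge)
    return G

# Example on a small spatially embedded surrogate (random geometric graph).
G = nx.random_geometric_graph(60, radius=0.25, seed=3)
if nx.is_connected(G):
    print("before:", nx.average_shortest_path_length(G))
    G2 = greedy_add_edges(G, k=3)
    print("after :", nx.average_shortest_path_length(G2))
```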

  18. Modeling and Design of Fault-Tolerant and Self-Adaptive Reconfigurable Networked Embedded Systems

    Directory of Open Access Journals (Sweden)

    Jürgen Teich

    2006-06-01

    Full Text Available Automotive, avionic, or body-area networks are systems that consist of several communicating control units specialized for certain purposes. Typically, different constraints regarding fault tolerance, availability and also flexibility are imposed on these systems. In this article, we will present a novel framework for increasing fault tolerance and flexibility by solving the problem of hardware/software codesign online. Based on field-programmable gate arrays (FPGAs) in combination with CPUs, we allow migrating tasks implemented in hardware or software from one node to another. Moreover, if not enough hardware/software resources are available, the migration of functionality from hardware to software or vice versa is provided. Supporting such flexibility through services integrated in a distributed operating system for networked embedded systems is a substantial step towards self-adaptive systems. Besides the formal definition of methods and concepts, we describe in detail a first implementation of a reconfigurable networked embedded system running automotive applications.

  19. Coincidence Detection Using Spiking Neurons with Application to Face Recognition

    Directory of Open Access Journals (Sweden)

    Fadhlan Kamaruzaman

    2015-01-01

    Full Text Available We elucidate the practical implementation of a Spiking Neural Network (SNN) as local ensembles of classifiers. The synaptic time constant τs is used as a learning parameter representing the variations learned from a set of training data at the classifier level. This classifier uses a coincidence detection (CD) strategy trained in a supervised manner using a novel supervised learning method called τs Prediction, which adjusts the precise timing of output spikes towards the desired spike timing through iterative adaptation of τs. This paper also discusses the approximation of spike timing in the Spike Response Model (SRM) for the purpose of coincidence detection. This approximation significantly speeds up the whole process of learning and classification. Performance evaluations with face datasets such as the AR, FERET, JAFFE, and CK+ datasets show that the proposed method delivers better face classification performance than the network trained with Supervised Synaptic-Time Dependent Plasticity (STDP). We also found that the proposed method delivers better classification accuracy than k-nearest neighbor, ensembles of kNN, and Support Vector Machines. Evaluation of several types of spike coding also reveals that latency coding delivers the best result for face classification as well as for classification of other multivariate datasets.

  20. Spike Pattern Structure Influences Synaptic Efficacy Variability Under STDP and Synaptic Homeostasis. I: Spike Generating Models on Converging Motifs

    Directory of Open Access Journals (Sweden)

    Zedong eBi

    2016-02-01

    Full Text Available In neural systems, synaptic plasticity is usually driven by spike trains. Due to the inherent noise of neurons and synapses as well as the randomness of connection details, spike trains typically exhibit variability such as spatial randomness and temporal stochasticity, resulting in variability of synaptic changes under plasticity, which we call efficacy variability. How the variability of spike trains influences the efficacy variability of synapses remains unclear. In this paper, we try to understand this influence under pair-wise additive spike-timing dependent plasticity (STDP) when the mean strength of plastic synapses into a neuron is bounded (synaptic homeostasis). Specifically, we systematically study, analytically and numerically, how four aspects of statistical features, i.e., synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations, as well as their interactions, influence the efficacy variability in converging motifs (simple networks in which one neuron receives from many other neurons). Neurons (including the post-synaptic neuron) in a converging motif generate spikes according to statistical models with tunable parameters. In this way, we can explicitly control the statistics of the spike patterns and investigate their influence on the efficacy variability, without worrying about the feedback from synaptic changes onto the dynamics of the post-synaptic neuron. We separate efficacy variability into two parts: the drift part (DriftV), induced by the heterogeneity of change rates of different synapses, and the diffusion part (DiffV), induced by weight diffusion caused by stochasticity of spike trains. Our main findings are: (1) synchronous firing and burstiness tend to increase DiffV, (2) heterogeneity of rates induces DriftV when potentiation and depression in STDP are not balanced, and (3) heterogeneity of cross-correlations induces DriftV together with heterogeneity of rates.
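
    To fix notation, a pair-wise additive STDP update combined with one simple form of synaptic homeostasis (holding the mean weight of the converging synapses at a target value) is sketched below; the window parameters and the normalization scheme are generic choices, not necessarily those of the paper.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes (generic)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants (ms)
W_MEAN_TARGET = 0.5                # bound on the mean weight (homeostasis)

def stdp_dw(dt_pre_post: float) -> float:
    """Additive pair-wise STDP: dt = t_post - t_pre (ms)."""
    if dt_pre_post > 0:            # pre before post -> potentiation
        return A_PLUS * np.exp(-dt_pre_post / TAU_PLUS)
    else:                          # post before pre -> depression
        return -A_MINUS * np.exp(dt_pre_post / TAU_MINUS)

def apply_homeostasis(w: np.ndarray) -> np.ndarray:
    """Subtractively shift weights so their mean stays at the target, then clip
    to keep them non-negative (one simple normalization among several in use)."""
    w = w - (w.mean() - W_MEAN_TARGET)
    return np.clip(w, 0.0, None)

# Example: 100 converging synapses, each updated from one (pre, post) spike pair.
rng = np.random.default_rng(4)
w = np.full(100, 0.5)
dts = rng.normal(0.0, 15.0, size=100)          # made-up timing differences (ms)
w += np.array([stdp_dw(dt) for dt in dts])
w = apply_homeostasis(w)
print("mean weight:", w.mean(), " spread (std):", w.std())
```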

  1. In-reactor creep of zirconium alloys by thermal spikes

    International Nuclear Information System (INIS)

    Ibrahim, E.F.

    1975-01-01

    The size and duration of thermal spikes from fast neutrons have been calculated for zirconium alloys, showing that spikes of up to 1.8 nm radius may exist for 2 x 10^-11 s at temperatures above the melting point, with an ambient temperature of 570 K. Creep rates have been calculated assuming that the elastic strain from the applied stress relaxes in the volume of the spikes (by preferential loop alignment or modification of an existing dislocation network). The calculated rates are consistent with strain rates observed in long-term in-reactor tests, if spike lifetimes are 2 to 2.5 x 10^-11 s. (Auth.)

  2. Stochastic Spiking Neural Networks Enabled by Magnetic Tunnel Junctions: From Nontelegraphic to Telegraphic Switching Regimes

    Science.gov (United States)

    Liyanagedera, Chamika M.; Sengupta, Abhronil; Jaiswal, Akhilesh; Roy, Kaushik

    2017-12-01

    Stochastic spiking neural networks based on nanoelectronic spin devices can be a possible pathway to achieving "brainlike" compact and energy-efficient cognitive intelligence. Such computational models attempt to exploit the intrinsic device stochasticity of nanoelectronic synaptic or neural components to perform learning or inference. However, there has been limited analysis of the scaling effect of stochastic spin devices and its impact on the operation of such stochastic networks at the system level. This work attempts to explore the design space and analyze the performance of nanomagnet-based stochastic neuromorphic computing architectures for magnets with different barrier heights. We illustrate how the underlying network architecture must be modified to account for the random telegraphic switching behavior displayed by magnets with low barrier heights as they are scaled into the superparamagnetic regime. We perform a device-to-system-level analysis on a deep neural-network architecture for a digit-recognition problem on the MNIST data set.

  3. Spike morphology in blast-wave-driven instability experiments

    International Nuclear Information System (INIS)

    Kuranz, C. C.; Drake, R. P.; Grosskopf, M. J.; Fryxell, B.; Budde, A.; Hansen, J. F.; Miles, A. R.; Plewa, T.; Hearn, N.; Knauer, J.

    2010-01-01

    The laboratory experiments described in the present paper observe the blast-wave-driven Rayleigh-Taylor instability with three-dimensional (3D) initial conditions. About 5 kJ of energy from the Omega laser creates conditions similar to those of the He-H interface during the explosion phase of a supernova. The experimental target is a 150 μm thick plastic disk followed by a low-density foam. The plastic piece has an embedded, 3D perturbation. The basic structure of the pattern is two orthogonal sine waves where each sine wave has an amplitude of 2.5 μm and a wavelength of 71 μm. In some experiments, an additional wavelength is added to explore the interaction of modes. In experiments with 3D initial conditions the spike morphology differs from what has been observed in other Rayleigh-Taylor experiments and simulations. Under certain conditions, experimental radiographs show some mass extending from the interface to the shock front. Current simulations show neither the spike morphology nor the spike penetration observed in the experiments. The amount of mass reaching the shock front is analyzed and potential causes for the spike morphology and the spikes reaching the shock are discussed. One such hypothesis is that these phenomena may be caused by magnetic pressure, generated by an azimuthal magnetic field produced by the plasma dynamics.

  4. Anticipating Activity in Social Media Spikes

    OpenAIRE

    Higham, Desmond J.; Grindrod, Peter; Mantzaris, Alexander V.; Otley, Amanda; Laflin, Peter

    2014-01-01

    We propose a novel mathematical model for the activity of microbloggers during an external, event-driven spike. The model leads to a testable prediction of who would become most active if a spike were to take place. This type of information is of great interest to commercial organisations, governments and charities, as it identifies key players who can be targeted with information in real time when the network is most receptive. The model takes account of the fact that dynamic interactions ev...

  5. Effects of spike-time-dependent plasticity on the stochastic resonance of small-world neuronal networks

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Haitao; Guo, Xinmeng; Wang, Jiang, E-mail: jiangwang@tju.edu.cn; Deng, Bin; Wei, Xile [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2014-09-01

    The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that, irrespective of whether the synaptic connectivity is fixed or adaptive, the phenomenon of stochastic resonance occurs. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. Particularly, the resonance for adaptive coupling can reach a much larger value than that for the fixed one when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of weak external signals in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of the weak periodic signal peaks.

  6. Effects of spike-time-dependent plasticity on the stochastic resonance of small-world neuronal networks

    International Nuclear Information System (INIS)

    Yu, Haitao; Guo, Xinmeng; Wang, Jiang; Deng, Bin; Wei, Xile

    2014-01-01

    The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that, irrespective of whether the synaptic connectivity is fixed or adaptive, the phenomenon of stochastic resonance occurs. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. Particularly, the resonance for adaptive coupling can reach a much larger value than that for the fixed one when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of weak external signals in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of the weak periodic signal peaks.

  7. Fuzzy Logic Based Anomaly Detection for Embedded Network Security Cyber Sensor

    Energy Technology Data Exchange (ETDEWEB)

    Ondrej Linda; Todd Vollmer; Jason Wright; Milos Manic

    2011-04-01

    Resiliency and security in critical infrastructure control systems in the modern world of cyber terrorism constitute a relevant concern. Developing a network security system specifically tailored to the requirements of such critical assets is of primary importance. This paper proposes a novel learning algorithm for an anomaly-based network security cyber sensor, together with its hardware implementation. The presented learning algorithm constructs a fuzzy-logic rule-based model of normal network behavior. Individual fuzzy rules are extracted directly from the stream of incoming packets using an online clustering algorithm. This learning algorithm was specifically developed to comply with the constrained computational requirements of low-cost embedded network security cyber sensors. The performance of the system was evaluated on a set of network data recorded from an experimental test-bed mimicking the environment of a critical infrastructure control system.

  8. CLAM - CoLlAborative eMbedded networks for submarine surveillance: An overview

    NARCIS (Netherlands)

    Meratnia, Nirvana; Havinga, Paul J.M.; Casari, Paolo; Petrioli, Chiara; Grythe, Knut; Husoy, Thor; Zorzi, Michele

    2011-01-01

    This paper provides an overview of the CLAM project, which aims at developing a collaborative embedded monitoring and control platform for submarine surveillance by combining cutting-edge acoustic vector sensor technology, 1D, 2D and 3D sensor arrays, and underwater wireless sensor network protocols.

  9. The design of nuclear radiation measuring instrument of embedded network

    International Nuclear Information System (INIS)

    Zhang Huaiqiang; Ge Liangquan; Xiong Shengqing

    2009-01-01

    The design and realization of a nuclear radiation measuring instrument with an embedded network interface is introduced. Current nuclear instruments often use a serial interface to communicate with a PC, an approach widely adopted for its simple design and easy operation. However, to meet the demand for remote data acquisition and resource sharing, a TCP/IP protocol stack is embedded in the MCU so that nuclear signals can be sent over the Internet, and devices attached to the network can be networked together. The design not only realizes remote data acquisition and resource sharing, but also reduces cost and improves the maintainability of the system. (authors)

  10. Fitting neuron models to spike trains

    Directory of Open Access Journals (Sweden)

    Cyrille eRossant

    2011-02-01

    Full Text Available Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
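
    A toy version of the fitting problem, far simpler than the Brian-based toolbox described above, is sketched below: two parameters of a leaky integrate-and-fire neuron are recovered by grid search, scoring candidates with a crude spike-coincidence measure. All signals and parameters are synthetic.

```python
import numpy as np

dt, T = 0.1, 1000.0                                  # ms
t = np.arange(0.0, T, dt)
I = 1.5 + 0.5 * np.sin(2 * np.pi * t / 100.0)        # injected current (arbitrary units)

def lif_spikes(R, tau, v_th=1.0, v_reset=0.0):
    """Simulate a LIF neuron driven by I and return its spike times (ms)."""
    v, out = 0.0, []
    for k, i_k in enumerate(I):
        v += dt * (-v + R * i_k) / tau
        if v >= v_th:
            out.append(t[k])
            v = v_reset
    return np.asarray(out)

def score(model, target, window=4.0):
    """Target spikes matched within +/- window (ms), minus a penalty for a
    mismatch in total spike count (a crude stand-in for the gamma factor)."""
    if len(model) == 0:
        return -len(target)
    hits = sum(np.any(np.abs(model - s) < window) for s in target)
    return hits - abs(len(model) - len(target))

# "Recorded" target train, generated from hidden parameters (R=0.9, tau=12 ms).
target = lif_spikes(R=0.9, tau=12.0)

# Brute-force grid search over (R, tau).
best = max(
    ((R, tau) for R in np.linspace(0.5, 1.5, 11) for tau in np.linspace(6.0, 30.0, 13)),
    key=lambda p: score(lif_spikes(*p), target),
)
print("recovered (R, tau):", best)
```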

  11. Enabling Dynamic Security Management of Networked Systems via Device-Embedded Security (Self-Securing Devices)

    National Research Council Canada - National Science Library

    Ganger, Gregory R

    2007-01-01

    This report summarizes the results of the work on the AFOSR's Critical Infrastructure Protection Program project, entitled Enabling Dynamic Security Management of Networked Systems via Device-Embedded Security...

  12. A Simple Deep Learning Method for Neuronal Spike Sorting

    Science.gov (United States)

    Yang, Kai; Wu, Haifeng; Zeng, Yu

    2017-10-01

    Spike sorting is one of the key techniques for understanding brain activity. With the development of modern electrophysiology technology, recent multi-electrode technologies are able to record the spiking activity of thousands of neurons simultaneously. Spike sorting at this scale increases the computational complexity of conventional sorting algorithms. In this paper, we focus on how to reduce this complexity and introduce a deep learning algorithm, the principal component analysis network (PCANet), for spike sorting. The introduced method starts from a conventional model and establishes a Toeplitz matrix. From the column vectors of this matrix, we train a PCANet, from which eigenvalue vectors of the spikes can be extracted. Finally, a support vector machine (SVM) is used to sort the spikes. In experiments, we choose two groups of simulated data from publicly available databases and compare the introduced method with conventional methods. The results indicate that the introduced method indeed has lower complexity with the same sorting errors as the conventional methods.
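
    The conventional pipeline that such methods are compared against, PCA features followed by a classifier, is easy to sketch with scikit-learn; the example below sorts synthetic spike waveforms and is not an implementation of PCANet.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic data: 3 "units", each a template waveform (48 samples) plus noise.
n_per_unit, n_samples = 300, 48
x = np.linspace(0, 1, n_samples)
templates = [np.exp(-((x - c) ** 2) / 0.005) * a
             for c, a in ((0.3, 1.0), (0.5, -0.8), (0.7, 1.2))]
waveforms = np.vstack([tmpl + 0.2 * rng.standard_normal((n_per_unit, n_samples))
                       for tmpl in templates])
labels = np.repeat(np.arange(3), n_per_unit)

# Conventional sorting: project onto a few principal components, then classify.
X_train, X_test, y_train, y_test = train_test_split(
    waveforms, labels, test_size=0.3, random_state=0)
pca = PCA(n_components=3).fit(X_train)
clf = SVC(kernel="rbf").fit(pca.transform(X_train), y_train)
print("sorting accuracy:", clf.score(pca.transform(X_test), y_test))
```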

  13. Quantum gravity vacuum and invariants of embedded spin networks

    International Nuclear Information System (INIS)

    Mikovic, A

    2003-01-01

    We show that the path integral for the three-dimensional SU(2) BF theory with a Wilson loop or a spin network function inserted can be understood as the Rovelli-Smolin loop transform of a wavefunction in the Ashtekar connection representation, where the wavefunction satisfies the constraints of quantum general relativity with zero cosmological constant. This wavefunction is given as a product of the delta functions of the SU(2) field strength and therefore it can be naturally associated with a flat connection spacetime. The loop transform can be defined rigorously via the quantum SU(2) group, as a spin foam state sum model, so that one obtains invariants of spin networks embedded in a three-manifold. These invariants define a flat connection vacuum state in the q-deformed spin network basis. We then propose a modification of this construction in order to obtain a vacuum state corresponding to the flat metric spacetime

  14. Statistical properties of superimposed stationary spike trains.

    Science.gov (United States)

    Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan

    2012-06-01

    The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
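
    Generating a Poisson process with dead-time and pooling several such trains takes only a few lines; the sketch below does this and reports the pooled rate, the ISI coefficient of variation and the lag-1 ISI correlation. Parameters are illustrative and this is not the authors' generator.

```python
import numpy as np

rng = np.random.default_rng(6)

def ppd_train(rate, dead_time, T):
    """Poisson process with dead-time: exponential waits shifted by the dead-time."""
    # Effective exponential rate so that the resulting mean rate equals `rate`.
    lam = rate / (1.0 - rate * dead_time)
    spikes, t = [], 0.0
    while True:
        t += dead_time + rng.exponential(1.0 / lam)
        if t > T:
            return np.asarray(spikes)
        spikes.append(t)

# Superimpose several PPD trains and look at the pooled ISI statistics.
T, n_trains = 100.0, 8                      # seconds, number of trains
trains = [ppd_train(rate=5.0, dead_time=0.01, T=T) for _ in range(n_trains)]
pooled = np.sort(np.concatenate(trains))
isi = np.diff(pooled)
cv = isi.std() / isi.mean()
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]
print(f"pooled rate ~ {len(pooled) / T:.1f} Hz, ISI CV = {cv:.2f}, lag-1 ISI corr = {rho1:.3f}")
```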

  15. Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states

    Science.gov (United States)

    Liu, Jianbo; Khalil, Hassan K.; Oweiss, Karim G.

    2011-08-01

    Controlling the spatiotemporal firing pattern of an intricately connected network of neurons through microstimulation is highly desirable in many applications. We investigated in this paper the feasibility of using a model-based approach to the analysis and control of a basal ganglia (BG) network model of Hodgkin-Huxley (HH) spiking neurons through microstimulation. Detailed analysis of this network model suggests that it can reproduce the experimentally observed characteristics of BG neurons under a normal and a pathological Parkinsonian state. A simplified neuronal firing rate model, identified from the detailed HH network model, is shown to capture the essential network dynamics. Mathematical analysis of the simplified model reveals the presence of a systematic relationship between the network's structure and its dynamic response to spatiotemporally patterned microstimulation. We show that both the network synaptic organization and the local mechanism of microstimulation can impose tight constraints on the possible spatiotemporal firing patterns that can be generated by the microstimulated network, which may hinder the effectiveness of microstimulation to achieve a desired objective under certain conditions. Finally, we demonstrate that the feedback control design aided by the mathematical analysis of the simplified model is indeed effective in driving the BG network in the normal and Parkinsonian states to follow a prescribed spatiotemporal firing pattern. We further show that the rhythmic/oscillatory patterns that characterize a dopamine-depleted BG network can be suppressed as a direct consequence of controlling the spatiotemporal pattern of a subpopulation of the output Globus Pallidus internalis (GPi) neurons in the network. This work may provide plausible explanations for the mechanisms underlying the therapeutic effects of deep brain stimulation (DBS) in Parkinson's disease and pave the way towards model-based, network-level analysis and closed-loop control of DBS.

  16. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.

    Science.gov (United States)

    Merolla, Paul A; Arthur, John V; Alvarez-Icaza, Rodrigo; Cassidy, Andrew S; Sawada, Jun; Akopyan, Filipp; Jackson, Bryan L; Imam, Nabil; Guo, Chen; Nakamura, Yutaka; Brezzo, Bernard; Vo, Ivan; Esser, Steven K; Appuswamy, Rathinakumar; Taba, Brian; Amir, Arnon; Flickner, Myron D; Risk, William P; Manohar, Rajit; Modha, Dharmendra S

    2014-08-08

    Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.

  17. Acceleration of spiking neural network based pattern recognition on NVIDIA graphics processors.

    Science.gov (United States)

    Han, Bing; Taha, Tarek M

    2010-04-01

    There is currently a strong push in the research community to develop biological-scale implementations of neuron-based vision models. Systems at this scale are computationally demanding and generally utilize more accurate neuron models, such as the Izhikevich and the Hodgkin-Huxley models, in preference to the more popular integrate-and-fire model. We examine the feasibility of using graphics processing units (GPUs) to accelerate a spiking neural network based character recognition network to enable such large scale systems. Two versions of the network utilizing the Izhikevich and Hodgkin-Huxley models are implemented. Three NVIDIA general-purpose (GP) GPU platforms are examined, including the GeForce 9800 GX2, the Tesla C1060, and the Tesla S1070. Our results show that the GPGPUs can provide significant speedup over conventional processors. In particular, the fastest GPGPU utilized, the Tesla S1070, provided a speedup of 5.6 and 84.4 over highly optimized implementations on the fastest central processing unit (CPU) tested, a quadcore 2.67 GHz Xeon processor, for the Izhikevich and the Hodgkin-Huxley models, respectively. The CPU implementation utilized all four cores and the vector data parallelism offered by the processor. The results indicate that GPUs are well suited for this application domain.
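
    For readers unfamiliar with the neuron models being accelerated, the Izhikevich model reduces to two coupled equations plus a reset rule, which is what makes it attractive for large GPU simulations. The sketch below uses the standard published regular-spiking parameters and a simple Euler update; it is an illustration, not the paper's GPU implementation.

      import numpy as np

      # Izhikevich model, regular-spiking parameters (Izhikevich 2003):
      #   v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u)
      #   if v >= 30 mV: emit a spike, then v <- c, u <- u + d
      a, b, c, d = 0.02, 0.2, -65.0, 8.0
      dt, T, I = 0.5, 1000.0, 10.0          # ms, ms, constant input current
      v, u = -65.0, b * -65.0
      spike_times = []

      for step in range(int(T / dt)):
          v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
          u += dt * a * (b * v - u)
          if v >= 30.0:                      # spike threshold and reset
              spike_times.append(step * dt)
              v, u = c, u + d

      print(f"{len(spike_times)} spikes in {T:.0f} ms")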

  18. Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.

    Science.gov (United States)

    Xiang, Lei; Wang, Qian; Nie, Dong; Zhang, Lichi; Jin, Xiyao; Qiao, Yu; Shen, Dinggang

    2018-07-01

    Recently, more and more attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, the synthesis of computed tomography (CT) images from T1-weighted magnetic resonance (MR) images is of great importance, although the mapping between them is highly complex due to the large gap in appearance between the two modalities. In this work, we aim to tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate the feature maps from MR images, and then transform these feature maps forward through convolutional layers in the network. We can further compute a tentative CT synthesis from the midway of the flow of feature maps, and then embed this tentative CT synthesis result back into the feature maps. This embedding operation results in better feature maps, which are further transformed forward in DECNN. After repeating this embedding procedure several times in the network, we can eventually synthesize a final CT image at the end of the DECNN. We have validated our proposed method on both brain and prostate imaging datasets, also comparing with the state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) demonstrates superior performance, in terms of both the perceptive quality of the synthesized CT image and the run-time cost for synthesizing a CT image. Copyright © 2018. Published by Elsevier B.V.
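
    The embedding operation described above, computing a tentative CT from midway feature maps and concatenating it back, can be sketched roughly as a small PyTorch module. The channel counts, the 1x1 convolution and the three repetitions below are assumptions for illustration, not the authors' exact architecture.

      import torch
      import torch.nn as nn

      class EmbeddingBlock(nn.Module):
          """Rough sketch of one DECNN-style embedding step (sizes are assumed)."""
          def __init__(self, channels=64):
              super().__init__()
              self.to_ct = nn.Conv2d(channels, 1, kernel_size=1)        # tentative CT
              self.fuse = nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1)

          def forward(self, feats):
              ct_tentative = self.to_ct(feats)                          # midway synthesis
              feats = torch.cat([feats, ct_tentative], dim=1)           # embed it back
              return torch.relu(self.fuse(feats)), ct_tentative

      # Toy usage: repeat the embedding step a few times on dummy MR-derived features.
      feats = torch.randn(1, 64, 32, 32)
      block = EmbeddingBlock(64)
      for _ in range(3):
          feats, ct = block(feats)
      print(ct.shape)  # torch.Size([1, 1, 32, 32])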

  19. Efficient spiking neural network model of pattern motion selectivity in visual cortex.

    Science.gov (United States)

    Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L

    2014-07-01

    Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.

  20. Network evolution induced by asynchronous stimuli through spike-timing-dependent plasticity.

    Directory of Open Access Journals (Sweden)

    Wu-Jie Yuan

    Full Text Available In sensory neural systems, external asynchronous stimuli play an important role in perceptual learning, associative memory and map development. However, the organization of the structure and dynamics of neural networks induced by external asynchronous stimuli is not well understood. Spike-timing-dependent plasticity (STDP) is a typical synaptic plasticity that has been extensively found in sensory systems and that has received much theoretical attention. This synaptic plasticity is highly sensitive to correlations between pre- and postsynaptic firings. Thus, STDP is expected to play an important role in response to external asynchronous stimuli, which can induce segregative pre- and postsynaptic firings. In this paper, we study the impact of external asynchronous stimuli on the organization of the structure and dynamics of neural networks through STDP. We construct a two-dimensional spatial neural network model with local connectivity and sparseness, and use external currents to alternately stimulate different spatial layers. The external currents imposed alternately on spatial layers can here be regarded as external asynchronous stimuli. Through extensive numerical simulations, we focus on the effects of stimulus number and inter-stimulus timing on synaptic connection weights and on the propagation dynamics in the resulting network structure. Interestingly, the resulting feedforward structure induced by stimulus-dependent asynchronous firings and its propagation dynamics both reflect the underlying properties of STDP. The results imply a possible important role of STDP in generating the feedforward structure and collective propagation activity required for experience-dependent map plasticity in developing in vivo sensory pathways and cortices. The relevance of the results to cue-triggered recall of learned temporal sequences, an important cognitive function, is briefly discussed as well. Furthermore, this finding suggests a potential
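
    For reference, the pair-based form of STDP referred to above can be written as a single weight-update function; the amplitudes and time constants below are generic illustrative values, not those used in the paper's network model.

      import numpy as np

      def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                  tau_plus=20.0, tau_minus=20.0):
          """Pair-based STDP: potentiate when the presynaptic spike precedes the
          postsynaptic spike (dt > 0), depress otherwise. Times are in ms."""
          dt = t_post - t_pre
          if dt > 0:
              return a_plus * np.exp(-dt / tau_plus)
          return -a_minus * np.exp(dt / tau_minus)

      print(stdp_dw(10.0, 15.0))   # pre before post -> positive weight change
      print(stdp_dw(15.0, 10.0))   # post before pre -> negative weight change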

  1. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.

    Science.gov (United States)

    Burbank, Kendra S

    2015-12-01

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.

  2. Bumps, breathers, and waves in a neural network with spike frequency adaptation

    International Nuclear Information System (INIS)

    Coombes, S.; Owen, M.R.

    2005-01-01

    We introduce a continuum model of neural tissue that includes the effects of spike frequency adaptation (SFA). The basic model is an integral equation for synaptic activity that depends upon nonlocal network connectivity, synaptic response, and the firing rate of a single neuron. We consider a phenomenological model of SFA via a simple state-dependent threshold firing rate function. As in the case without SFA, Mexican-hat connectivity allows for the existence of spatially localized states (bumps). Importantly, recent Evans function techniques are used to show that bumps may destabilize, leading to the emergence of breathers and traveling waves. Moreover, a similar analysis for traveling pulses leads to the conditions necessary to observe a stable traveling breather. Simulations confirm our theoretical predictions and illustrate the rich behavior of this model.
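
    As a pointer to the class of model being analyzed, a neural field with a slowly adapting, state-dependent threshold is typically written along the following lines; this is a generic Amari-type form with connectivity kernel w, synaptic response eta, firing rate function f and adaptation variable h, not a transcription of the authors' exact equations.

      % Generic neural field with spike frequency adaptation (illustrative form)
      \begin{aligned}
      u(x,t) &= \int_{-\infty}^{\infty} w(x-y) \int_{0}^{\infty} \eta(s)\,
                f\!\bigl(u(y,t-s) - h(y,t-s)\bigr)\,\mathrm{d}s\,\mathrm{d}y,\\[4pt]
      \tau_a \,\frac{\partial h}{\partial t}(x,t) &= -h(x,t) + \kappa\, f\!\bigl(u(x,t) - h(x,t)\bigr).
      \end{aligned}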

  3. SPAN: spike pattern association neuron for learning spatio-temporal sequences

    OpenAIRE

    Mohemmed, A; Schliebs, S; Matsuda, S; Kasabov, N

    2012-01-01

    Spiking Neural Networks (SNN) were shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN — a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion allowing the processing of spatio-temporal information encoded in the prec...

  4. Mean Green operators of deformable fiber networks embedded in a compliant matrix and property estimates

    Science.gov (United States)

    Franciosi, Patrick; Spagnuolo, Mario; Salman, Oguz Umut

    2018-04-01

    Composites comprising included phases in a continuous matrix constitute a huge class of meta-materials, whose effective properties, whether they be mechanical, physical or coupled, can be selectively optimized by using appropriate phase arrangements and architectures. An important subclass is represented by "network-reinforced matrices," that is, materials in which one or more of the embedded phases are co-continuous with the matrix in one or more directions. In this article, we present a method to study the effective properties of simple such structures, from which more complex ones are accessible. Effective properties are shown, in the framework of linear elasticity, to be estimable by using the global mean Green operator for the entire embedded fiber network, which is by definition sample-spanning. This network operator is obtained from that of infinite planar alignments of infinite fibers, of which the network can be seen as an interpenetrated set, with the fiber interactions fully accounted for within the alignments. The mean operator of such alignments is given in exact closed form for isotropic elastic-like or dielectric-like matrices. We first exemplify how these operators relevantly provide, from classic homogenization frameworks, effective properties in the case of 1D fiber bundles embedded in an isotropic elastic-like medium. It is also shown that using infinite patterns with fully interacting elements over their whole influence range at any element concentration suppresses the dilute approximation limit of these frameworks. We finally present a construction method for a global operator of fiber networks described as interpenetrated sets of such bundles.

  5. Chromatin accessibility prediction via convolutional long short-term memory networks with k-mer embedding.

    Science.gov (United States)

    Min, Xu; Zeng, Wanwen; Chen, Ning; Chen, Ting; Jiang, Rui

    2017-07-15

    Experimental techniques for measuring chromatin accessibility are expensive and time consuming, appealing for the development of computational approaches to predict open chromatin regions from DNA sequences. Along this direction, existing methods fall into two classes: one based on handcrafted k-mer features and the other based on convolutional neural networks. Although both categories have shown good performance in specific applications thus far, there still lacks a comprehensive framework to integrate useful k-mer co-occurrence information with recent advances in deep learning. We fill this gap by addressing the problem of chromatin accessibility prediction with a convolutional Long Short-Term Memory (LSTM) network with k-mer embedding. We first split DNA sequences into k-mers and pre-train k-mer embedding vectors based on the co-occurrence matrix of k-mers by using an unsupervised representation learning approach. We then construct a supervised deep learning architecture comprised of an embedding layer, three convolutional layers and a Bidirectional LSTM (BLSTM) layer for feature learning and classification. We demonstrate that our method gains high-quality fixed-length features from variable-length sequences and consistently outperforms baseline methods. We show that k-mer embedding can effectively enhance model performance by exploring different embedding strategies. We also prove the efficacy of both the convolution and the BLSTM layers by comparing two variations of the network architecture. We confirm the robustness of our model to hyper-parameters by performing sensitivity analysis. We hope our method can eventually reinforce our understanding of employing deep learning in genomic studies and shed light on research regarding mechanisms of chromatin accessibility. The source code can be downloaded from https://github.com/minxueric/ismb2017_lstm. tingchen@tsinghua.edu.cn or ruijiang@tsinghua.edu.cn. Supplementary materials are available at
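
    The supervised architecture summarized above, an embedding layer followed by three convolutional layers, a bidirectional LSTM and a classifier, can be sketched in Keras roughly as follows. The vocabulary size, embedding dimension, filter counts, kernel sizes and sequence length are placeholders, not the values used in the paper.

      from tensorflow import keras
      from tensorflow.keras import layers

      # Rough sketch of an embedding + CNN + BLSTM classifier for fixed-length
      # k-mer index sequences (all sizes below are illustrative assumptions).
      vocab_size, embed_dim, seq_len = 4**6, 100, 995   # e.g. 6-mers over a ~1 kb window

      model = keras.Sequential([
          layers.Input(shape=(seq_len,)),
          layers.Embedding(vocab_size, embed_dim),      # pre-trained k-mer vectors could be loaded here
          layers.Conv1D(64, 8, activation="relu"),
          layers.MaxPooling1D(2),
          layers.Conv1D(64, 8, activation="relu"),
          layers.MaxPooling1D(2),
          layers.Conv1D(64, 8, activation="relu"),
          layers.Bidirectional(layers.LSTM(64)),
          layers.Dense(1, activation="sigmoid"),        # open vs. closed chromatin
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
      model.summary()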

  6. Interplay of intrinsic and synaptic conductances in the generation of high-frequency oscillations in interneuronal networks with irregular spiking.

    Directory of Open Access Journals (Sweden)

    Fabiano Baroni

    2014-05-01

    Full Text Available High-frequency oscillations (above 30 Hz) have been observed in sensory and higher-order brain areas, and are believed to constitute a general hallmark of functional neuronal activation. Fast inhibition in interneuronal networks has been suggested as a general mechanism for the generation of high-frequency oscillations. Certain classes of interneurons exhibit subthreshold oscillations, but the effect of this intrinsic neuronal property on the population rhythm is not completely understood. We study the influence of intrinsic damped subthreshold oscillations in the emergence of collective high-frequency oscillations, and elucidate the dynamical mechanisms that underlie this phenomenon. We simulate neuronal networks composed of either Integrate-and-Fire (IF) or Generalized Integrate-and-Fire (GIF) neurons. The IF model displays purely passive subthreshold dynamics, while the GIF model exhibits subthreshold damped oscillations. Individual neurons receive inhibitory synaptic currents mediated by spiking activity in their neighbors as well as noisy synaptic bombardment, and fire irregularly at a lower rate than population frequency. We identify three factors that affect the influence of single-neuron properties on synchronization mediated by inhibition: i) the firing rate response to the noisy background input, ii) the membrane potential distribution, and iii) the shape of Inhibitory Post-Synaptic Potentials (IPSPs). For hyperpolarizing inhibition, the GIF IPSP profile (factor iii) exhibits post-inhibitory rebound, which induces a coherent spike-mediated depolarization across cells that greatly facilitates synchronous oscillations. This effect dominates the network dynamics, hence GIF networks display stronger oscillations than IF networks. However, the restorative current in the GIF neuron lowers firing rates and narrows the membrane potential distribution (factors i and ii, respectively), which tend to decrease synchrony. If inhibition is shunting instead

  7. Interplay of intrinsic and synaptic conductances in the generation of high-frequency oscillations in interneuronal networks with irregular spiking.

    Science.gov (United States)

    Baroni, Fabiano; Burkitt, Anthony N; Grayden, David B

    2014-05-01

    High-frequency oscillations (above 30 Hz) have been observed in sensory and higher-order brain areas, and are believed to constitute a general hallmark of functional neuronal activation. Fast inhibition in interneuronal networks has been suggested as a general mechanism for the generation of high-frequency oscillations. Certain classes of interneurons exhibit subthreshold oscillations, but the effect of this intrinsic neuronal property on the population rhythm is not completely understood. We study the influence of intrinsic damped subthreshold oscillations in the emergence of collective high-frequency oscillations, and elucidate the dynamical mechanisms that underlie this phenomenon. We simulate neuronal networks composed of either Integrate-and-Fire (IF) or Generalized Integrate-and-Fire (GIF) neurons. The IF model displays purely passive subthreshold dynamics, while the GIF model exhibits subthreshold damped oscillations. Individual neurons receive inhibitory synaptic currents mediated by spiking activity in their neighbors as well as noisy synaptic bombardment, and fire irregularly at a lower rate than population frequency. We identify three factors that affect the influence of single-neuron properties on synchronization mediated by inhibition: i) the firing rate response to the noisy background input, ii) the membrane potential distribution, and iii) the shape of Inhibitory Post-Synaptic Potentials (IPSPs). For hyperpolarizing inhibition, the GIF IPSP profile (factor iii)) exhibits post-inhibitory rebound, which induces a coherent spike-mediated depolarization across cells that greatly facilitates synchronous oscillations. This effect dominates the network dynamics, hence GIF networks display stronger oscillations than IF networks. However, the restorative current in the GIF neuron lowers firing rates and narrows the membrane potential distribution (factors i) and ii), respectively), which tend to decrease synchrony. If inhibition is shunting instead of

  8. Boobs, Boxing, and Bombs: Problematizing the Entertainment of Spike TV

    OpenAIRE

    Walton, Gerald; Potvin, L.

    2009-01-01

    Spike is the only television network in North America “for men.” Its motto, “Get more action,” is suggestive of pursuits of various forms of violence. We conceptualize Spike not as trivial entertainment, but rather as a form of pop culture that erodes the gains of feminists who have challenged the prevalence of normalized hegemonic masculinity (HM). Our paper highlights themes of Spike content, and connects those themes to the literature on HM. Moreover, we validate the identities and lives ...

  9. Real-time computing platform for spiking neurons (RT-spike).

    Science.gov (United States)

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
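
    To make the "input-driven conductance with a synaptic time constant" concrete, a minimal software version of such a neuron update is sketched below; the parameter values are illustrative and this is not the RT-spike hardware pipeline itself.

      import numpy as np

      # Leaky integrate-and-fire neuron with an exponentially decaying synaptic
      # conductance: each input spike injects charge gradually rather than
      # instantaneously (illustrative parameters, not the hardware values).
      dt, T = 0.1, 200.0                      # ms
      tau_m, tau_syn = 20.0, 5.0              # membrane and synaptic time constants
      v_rest, v_th, v_reset = -70.0, -54.0, -70.0
      e_syn, w_in = 0.0, 0.05                 # excitatory reversal potential, input weight
      rng = np.random.default_rng(1)
      input_spikes = rng.random(int(T / dt)) < 0.02   # Poisson-like input train

      v, g = v_rest, 0.0
      out_spikes = []
      for i, spk in enumerate(input_spikes):
          g += w_in if spk else 0.0           # an input spike opens the conductance
          g -= dt * g / tau_syn               # conductance decays exponentially
          v += dt * (-(v - v_rest) - g * (v - e_syn)) / tau_m
          if v >= v_th:                       # threshold crossing -> output spike
              out_spikes.append(i * dt)
              v = v_reset

      print(f"{len(out_spikes)} output spikes")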

  10. Correlations decrease with propagation of spiking activity in the mouse barrel cortex

    Directory of Open Access Journals (Sweden)

    Gayathri Nattar Ranganathan

    2011-05-01

    Full Text Available Propagation of suprathreshold spiking activity through neuronal populations is important for the function of the central nervous system. Neural correlations have an impact on cortical function particularly on the signaling of information and propagation of spiking activity. Therefore we measured the change in correlations as suprathreshold spiking activity propagated between recurrent neuronal networks of the mammalian cerebral cortex. Using optical methods we recorded spiking activity from large samples of neurons from two neural populations simultaneously. The results indicate that correlations decreased as spiking activity propagated from layer 4 to layer 2/3 in the rodent barrel cortex.

  11. An Investigation on the Role of Spike Latency in an Artificial Olfactory System

    Directory of Open Access Journals (Sweden)

    Corrado eDi Natale

    2011-12-01

    Full Text Available Experimental studies have shown that the reactions to external stimuli may appear only a few hundred milliseconds after the physical interaction of the stimulus with the proper receptor. This behavior suggests that neurons transmit the largest meaningful part of their signal in the first spikes, and hence that the spike latency is a good descriptor of the information content in biological neural networks. In this paper this property has been investigated in an artificial sensorial system where a single layer of spiking neurons is trained with the data generated by an artificial olfactory platform based on a large array of chemical sensors. The capability to discriminate between distinct chemicals and mixtures of them was studied with spiking neural networks with and without lateral inhibition, considering as output features of the network both the spike latency and the average firing rate. Results show that the average firing rate of the output spike sequences provides the best separation among the tested vapors; however, the latency code is able to correctly discriminate all the tested volatile compounds in a shorter time. This behavior is qualitatively similar to that recently found in natural olfaction, and, notably, it provides practical suggestions for tailoring the measurement conditions of artificial olfactory systems, defining a proper measurement time for each specific case.
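
    The two readouts compared above, first-spike latency versus average firing rate, are straightforward to compute from an output spike train; a minimal sketch (the spike times and window below are arbitrary example values) is:

      import numpy as np

      def latency_and_rate(spike_times, window_ms):
          """Two simple readouts of an output spike train (times in ms):
          first-spike latency and mean firing rate over the window."""
          spike_times = np.sort(np.asarray(spike_times, dtype=float))
          latency = spike_times[0] if spike_times.size else np.inf
          rate = 1000.0 * spike_times.size / window_ms      # spikes/s
          return latency, rate

      # Toy example: the latency is available long before the rate estimate settles.
      print(latency_and_rate([12.5, 40.0, 71.0, 105.0], window_ms=200.0))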

  12. Trusted computing for embedded systems

    CERN Document Server

    Soudris, Dimitrios; Anagnostopoulos, Iraklis

    2015-01-01

    This book describes the state-of-the-art in trusted computing for embedded systems. It shows how a variety of security and trusted computing problems are addressed currently and what solutions are expected to emerge in the coming years. The discussion focuses on attacks aimed at hardware and software for embedded systems, and the authors describe specific solutions to create security features. Case studies are used to present new techniques designed as industrial security solutions. Coverage includes development of tamper resistant hardware and firmware mechanisms for lightweight embedded devices, as well as those serving as security anchors for embedded platforms required by applications such as smart power grids, smart networked and home appliances, environmental and infrastructure sensor networks, etc. · Enables readers to address a variety of security threats to embedded hardware and software; · Describes design of secure wireless sensor networks, to address secure authen...

  13. Identifying Typhoon Tracks based on Event Synchronization derived Spatially Embedded Climate Networks

    Science.gov (United States)

    Ozturk, Ugur; Marwan, Norbert; Kurths, Jürgen

    2017-04-01

    Complex networks are commonly used for investigating the spatiotemporal dynamics of complex systems, e.g. extreme rainfall. Directed networks in particular are very effective tools for identifying climatic patterns on spatially embedded networks. They can capture the network flux, and thus the principal dynamics of how significant phenomena spread. Network measures, such as network divergence, bear the source-receptor relation of the directed networks. However, it remains a challenge to capture fast-evolving atmospheric events, i.e. typhoons. In this study, we propose a new technique, namely Radial Ranks, to detect the general pattern of typhoons' forward direction based on the strength parameter of event synchronization over Japan. We suggest subsetting a circular zone of high correlation around the selected grid point based on the strength parameter. Radial sums of the strength parameter along vectors within this zone, the radial ranks, are measured for potential directions, which allows us to trace the network flux over long distances. We also employed the delay parameter of event synchronization to identify and separate the individual behaviors of frontal storms and typhoons.
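
    For orientation, the strength measure of event synchronization between two event series can be computed along the following lines. This is a simplified symmetric version with a fixed coincidence window tau, rather than the locally adaptive window usually used in climate-network studies, and the example event times are made up.

      import numpy as np

      def event_sync_strength(t_x, t_y, tau=3.0):
          """Simplified event synchronization strength in [0, 1] between two
          event-time series, using a fixed coincidence window tau (same units)."""
          t_x, t_y = np.asarray(t_x, float), np.asarray(t_y, float)
          def count(a, b):
              # events in a that are matched by some event in b within +/- tau
              return sum(np.any(np.abs(b - t) <= tau) for t in a)
          c_xy, c_yx = count(t_x, t_y), count(t_y, t_x)
          return (c_xy + c_yx) / (2.0 * np.sqrt(len(t_x) * len(t_y)))

      # Toy example with two nearly coincident extreme-event series (e.g. days).
      print(event_sync_strength([3, 10, 20, 31], [4, 11, 25, 30]))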

  14. Stress-Induced Impairment of a Working Memory Task: Role of Spiking Rate and Spiking History Predicted Discharge

    Science.gov (United States)

    Devilbiss, David M.; Jenison, Rick L.; Berridge, Craig W.

    2012-01-01

    Stress, pervasive in society, contributes to over half of all workplace accidents each year and over time can contribute to a variety of psychiatric disorders including depression, schizophrenia, and post-traumatic stress disorder. Stress impairs higher cognitive processes that depend on the prefrontal cortex (PFC) and involve the maintenance and integration of information over extended periods, including working memory and attention. Substantial evidence has demonstrated a relationship between patterns of PFC neuron spiking activity (action-potential discharge) and components of delayed-response tasks used to probe PFC-dependent cognitive function in rats and monkeys. During delay periods of these tasks, persistent spiking activity is posited to be essential for the maintenance of information for working memory and attention. However, the degree to which stress-induced impairment in PFC-dependent cognition involves changes in task-related spiking rates or the ability for PFC neurons to retain information over time remains unknown. In the current study, spiking activity was recorded from the medial PFC of rats performing a delayed-response task of working memory during acute noise stress (93 db). Spike history-predicted discharge (SHPD) for PFC neurons was quantified as a measure of the degree to which ongoing neuronal discharge can be predicted by past spiking activity and reflects the degree to which past information is retained by these neurons over time. We found that PFC neuron discharge is predicted by their past spiking patterns for nearly one second. Acute stress impaired SHPD, selectively during delay intervals of the task, and simultaneously impaired task performance. Despite the reduction in delay-related SHPD, stress increased delay-related spiking rates. These findings suggest that neural codes utilizing SHPD within PFC networks likely reflect an additional important neurophysiological mechanism for maintenance of past information over time. Stress

  15. Robust working memory in an asynchronously spiking neural network realized in neuromorphic VLSI

    Directory of Open Access Journals (Sweden)

    Massimiliano eGiulioni

    2012-02-01

    Full Text Available We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of ‘high’ and ‘low’-firing activity. Depending on the overall excitability, transitions to the ‘high’ state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the ‘high’ state retains a working memory of a stimulus until well after its release. In the latter case, ‘high’ states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated corrupted ‘high’ states comprising neurons of both excitatory populations. Within a basin of attraction, the network dynamics corrects such states and re-establishes the prototypical ‘high’ state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.

  16. Robust Working Memory in an Asynchronously Spiking Neural Network Realized with Neuromorphic VLSI.

    Science.gov (United States)

    Giulioni, Massimiliano; Camilleri, Patrick; Mattia, Maurizio; Dante, Vittorio; Braun, Jochen; Del Giudice, Paolo

    2011-01-01

    We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of "high" and "low"-firing activity. Depending on the overall excitability, transitions to the "high" state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the "high" state retains a "working memory" of a stimulus until well after its release. In the latter case, "high" states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated "corrupted" "high" states comprising neurons of both excitatory populations. Within a "basin of attraction," the network dynamics "corrects" such states and re-establishes the prototypical "high" state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.

  17. The influence of single bursts vs. single spikes at excitatory dendrodendritic synapses

    Science.gov (United States)

    Masurkar, Arjun V.; Chen, Wei R.

    2015-01-01

    The synchronization of neuronal activity is thought to enhance information processing. There is much evidence supporting rhythmically bursting external tufted cells (ETCs) of the rodent olfactory bulb glomeruli coordinating the activation of glomerular interneurons and mitral cells via dendrodendritic excitation. However, as bursting has variable significance at axodendritic cortical synapses, it is not clear if ETC bursting imparts a specific functional advantage over the preliminary spike in dendrodendritic synaptic networks. To answer this question, we investigated the influence of single ETC bursts and spikes with the in-vitro rat olfactory bulb preparation at different levels of processing, via calcium imaging of presynaptic ETC dendrites, dual electrical recording of ETC–interneuron synaptic pairs, and multicellular calcium imaging of ETC-induced population activity. Our findings supported single ETC bursts, vs. single spikes, driving robust presynaptic calcium signaling, which in turn was associated with profound extension of the initial monosynaptic spike-driven dendrodendritic excitatory postsynaptic potential. This extension could be driven by either the spike-dependent or spike-independent components of the burst. At the population level, burst-induced excitation was more widespread and reliable compared with single spikes. This further supports the ETC network, in part due to a functional advantage of bursting at excitatory dendrodendritic synapses, coordinating synchronous activity at behaviorally relevant frequencies related to odor processing in vivo. PMID:22277089

  18. Comparison of Classifier Architectures for Online Neural Spike Sorting.

    Science.gov (United States)

    Saeed, Maryam; Khan, Amir Ali; Kamboh, Awais Mehmood

    2017-04-01

    High-density, intracranial recordings from micro-electrode arrays need to undergo Spike Sorting in order to associate the recorded neuronal spikes with particular neurons. This involves spike detection, feature extraction, and classification. To reduce the data transmission and power requirements, on-chip real-time processing is becoming very popular. However, high computational resources are required for classifiers in on-chip spike-sorters, making scalability a great challenge. In this review paper, we analyze several popular classifiers to propose five new hardware architectures using the off-chip training with on-chip classification approach. These include support vector classification, fuzzy C-means classification, self-organizing maps classification, moving-centroid K-means classification, and Cosine distance classification. The performance of these architectures is analyzed in terms of accuracy and resource requirements. We establish that the neural-network-based Self-Organizing Maps classifier offers the most viable solution. A spike sorter based on the Self-Organizing Maps classifier requires only 7.83% of the computational resources of the best-reported spike sorter, hierarchical adaptive means, while offering 3% better accuracy at 7 dB SNR.
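
    The "train off-chip, classify on-chip" split discussed above reduces the on-chip work per spike to a nearest-prototype lookup against the trained map. A hedged sketch of that classification step, with a made-up two-dimensional codebook, is:

      import numpy as np

      def classify_spike(feature_vec, som_weights, unit_labels):
          """On-chip-style classification step: assign the spike's feature vector
          to the label of the best-matching SOM unit (codebook trained off-chip)."""
          dists = np.linalg.norm(som_weights - feature_vec, axis=1)
          return unit_labels[int(np.argmin(dists))]

      # Hypothetical 4-unit codebook in a 2-D feature space, labelled by neuron id.
      som_weights = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0], [1.0, -1.0]])
      unit_labels = np.array([0, 1, 2, 1])
      print(classify_spike(np.array([0.9, 0.8]), som_weights, unit_labels))  # -> 1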

  19. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Full Text Available Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
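
    Fitting criteria of the kind used in such competitions typically score a candidate model by counting coincidences between predicted and recorded spike times within a small window. The function below is a simplified, unnormalized stand-in for such a score (not the library's actual criterion); the 4 ms window and the example spike times are arbitrary.

      import numpy as np

      def coincidence_fraction(model_spikes, data_spikes, window=4.0):
          """Fraction of recorded spikes reproduced by the model within +/- window ms.
          A crude stand-in for coincidence-based fitting criteria."""
          model_spikes = np.asarray(model_spikes, float)
          data_spikes = np.asarray(data_spikes, float)
          if data_spikes.size == 0:
              return 0.0
          hits = sum(np.any(np.abs(model_spikes - t) <= window) for t in data_spikes)
          return hits / data_spikes.size

      print(coincidence_fraction([10.0, 52.0, 99.0], [11.0, 50.0, 120.0]))  # ~0.67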

  20. A review on the impact of embedded generation to network fault level

    Science.gov (United States)

    Yahaya, M. S.; Basar, M. F.; Ibrahim, Z.; Nasir, M. N. N.; Lada, M. Y.; Bukhari, W. M.

    2015-05-01

    The penetration of Embedded Generation (EG) in power systems, especially for renewable energy, has increased markedly in recent years. The interconnection of EG has technical impacts which need to be considered. One of the technical challenges faced by the Distribution Network Operator (DNO) is the network fault level. In this paper, the different methods of interconnection, with and without EG on the network, are analyzed by looking at their impact on the network fault level. This comparative study is made to determine the most effective method for reducing the fault level or fault current. The paper gives a basic understanding of the fault-level effect when a synchronous generator is connected to the network by different methods of interconnection. A three-phase fault is introduced at one network bus bar. By applying this to simple network configurations, namely normal interconnection and split-network connection, with and without EG, the fault level has been simulated and analyzed. With the network model developed in the PSS-Viper™ software package, the fault level for both networks is shown and the difference is identified. From the review, network splitting was found to be the best interconnection method, with the greatest potential for reducing the fault level in the network.

  1. A spiking neuron circuit based on a carbon nanotube transistor

    International Nuclear Information System (INIS)

    Chen, C-L; Kim, K; Truong, Q; Shen, A; Li, Z; Chen, Y

    2012-01-01

    A spiking neuron circuit based on a carbon nanotube (CNT) transistor is presented in this paper. The spiking neuron circuit has a crossbar architecture in which the transistor gates are connected to its row electrodes and the transistor sources are connected to its column electrodes. An electrochemical cell is incorporated in the gate of the transistor by sandwiching a hydrogen-doped poly(ethylene glycol)methyl ether (PEG) electrolyte between the CNT channel and the top gate electrode. An input spike applied to the gate triggers a dynamic drift of the hydrogen ions in the PEG electrolyte, resulting in a post-synaptic current (PSC) through the CNT channel. Spikes input into the rows trigger PSCs through multiple CNT transistors, and PSCs cumulate in the columns and integrate into a ‘soma’ circuit to trigger output spikes based on an integrate-and-fire mechanism. The spiking neuron circuit can potentially emulate biological neuron networks and their intelligent functions. (paper)

  2. A defined network of fast-spiking interneurons in orbitofrontal cortex: responses to behavioral contingencies and ketamine administration

    Directory of Open Access Journals (Sweden)

    Michael C Quirk

    2009-11-01

    Full Text Available Orbitofrontal cortex (OFC is a region of prefrontal cortex implicated in the motivational control of behavior and in related abnormalities seen in psychosis and depression. It has been hypothesized that a critical mechanism in these disorders is the dysfunction of GABAergic interneurons that normally regulate prefrontal information processing. Here, we studied a subclass of interneurons isolated in rat OFC using extracellular waveform and spike train analysis. During performance of a goal-directed behavioral task, the firing of this class of putative fast-spiking (FS interneurons showed robust temporal correlations indicative of a functionally coherent network. FS cell activity also co-varied with behavioral response latency, a key indicator of motivational state. Systemic administration of ketamine, a drug that can mimic psychosis, preferentially inhibited this cell class. Together, these results support the idea that OFC-FS interneurons form a critical link in the regulation of motivation by prefrontal circuits during normal and abnormal brain and behavioral states.

  3. Spike timing precision of neuronal circuits.

    Science.gov (United States)

    Kilinc, Deniz; Demir, Alper

    2018-04-17

    Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.

  4. Neuronal Networks in Children with Continuous Spikes and Waves during Slow Sleep

    Science.gov (United States)

    Siniatchkin, Michael; Groening, Kristina; Moehring, Jan; Moeller, Friederike; Boor, Rainer; Brodbeck, Verena; Michel, Christoph M.; Rodionov, Roman; Lemieux, Louis; Stephani, Ulrich

    2010-01-01

    Epileptic encephalopathy with continuous spikes and waves during slow sleep is an age-related disorder characterized by the presence of interictal epileptiform discharges during at least 85% of sleep and cognitive deficits associated with this electroencephalography pattern. The pathophysiological mechanisms of continuous spikes and…

  5. Technical solutions to enable embedded generation growth

    Energy Technology Data Exchange (ETDEWEB)

    Lynch, C.A.; Todd, S.; Millar, W.; Wood, H.S.

    2003-07-01

    This report describes the results of one of a series of studies commissioned by the UK Department of Trade and Industry into various aspects of embedded generation with the aim of supporting the development and deployment of electrical sources (particularly their ease of connection to the network) to deliver power to consumers. The first phase of the project involved a literature review and meetings with embedded generation developers and planning engineers from distribution network operators (DNOs). The second phase investigated embedded generation at different levels of the distribution network and included modelling a representative network. Technologies that could facilitate a significant increase in embedded generation were identified and estimates made of when and where significant development would be needed. Technical problems identified by DNOs were concerned with thermal loading, voltage regulation, fault levels, protection and network operation. A number of non-technical (commercial and regulatory) problems were also identified. The report describes the UK regulatory framework, the present situation, the British power system, the accommodation of embedded generation by established means, the representative model and technical innovations.

  6. Spike frequency adaptation is a possible mechanism for control of attractor preference in auto-associative neural networks

    Science.gov (United States)

    Roach, James; Sander, Leonard; Zochowski, Michal

    Auto-associative memory is the ability to retrieve a pattern from a small fraction of that pattern and is an important function of neural networks. Within this context, memories that are stored within the synaptic strengths of networks act as dynamical attractors for network firing patterns. In networks with many encoded memories, some attractors will be stronger than others. This presents the problem of how networks switch between attractors depending on the situation. We suggest that regulation of neuronal spike-frequency adaptation (SFA) provides a universal mechanism for network-wide attractor selectivity. Here we demonstrate, in a Hopfield-type attractor network, that neurons with minimal SFA will reliably activate in the pattern corresponding to a local attractor and that a moderate increase in SFA leads the network to converge to the strongest attractor state. Furthermore, we show that on long time scales SFA allows temporal sequences of activation to emerge. Finally, using a model of cholinergic modulation within the cortex, we argue that dynamic regulation of attractor preference by SFA could be critical for the role of acetylcholine in attention or for arousal states in general. This work was supported by: NSF Graduate Research Fellowship Program under Grant No. DGE 1256260 (JPR), NSF CMMI 1029388 (MRZ) and NSF PoLS 1058034 (MRZ & LMS).
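
    A toy version of the mechanism described above, a Hopfield-style network in which an adaptation variable subtracts from each unit's input, can be sketched as follows. The graded-rate formulation, the parameter values and the random patterns are illustrative, not the authors' model.

      import numpy as np

      # Toy Hopfield-style attractor network with an SFA-like fatigue variable:
      # each unit accumulates an adaptation term a_i that is subtracted from its
      # recurrent input, so persistently active units are progressively penalized.
      rng = np.random.default_rng(2)
      N, P = 100, 3
      patterns = rng.choice([-1.0, 1.0], size=(P, N))
      W = (patterns.T @ patterns) / N                # Hebbian storage of P patterns
      np.fill_diagonal(W, 0.0)

      g_adapt, tau_a, dt = 0.5, 20.0, 1.0            # adaptation strength / time scale
      s = patterns[0] + 0.3 * rng.standard_normal(N) # start near stored pattern 0
      a = np.zeros(N)
      for _ in range(200):
          h = W @ s - g_adapt * a                    # recurrent input minus adaptation
          s = np.tanh(2.0 * h)                       # graded unit activation
          a += dt * (s - a) / tau_a                  # adaptation tracks recent activity

      overlaps = patterns @ s / N                    # similarity to each stored pattern
      print("overlaps with stored patterns:", np.round(overlaps, 2))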

  7. Spike-Timing Dependent Plasticity in Unipolar Silicon Oxide RRAM Devices.

    Science.gov (United States)

    Zarudnyi, Konstantin; Mehonic, Adnan; Montesi, Luca; Buckwell, Mark; Hudziak, Stephen; Kenyon, Anthony J

    2018-01-01

    Resistance switching, or Resistive RAM (RRAM) devices show considerable potential for application in hardware spiking neural networks (neuro-inspired computing) by mimicking some of the behavior of biological synapses, and hence enabling non-von Neumann computer architectures. Spike-timing dependent plasticity (STDP) is one such behavior, and one example of several classes of plasticity that are being examined with the aim of finding suitable algorithms for application in many computing tasks such as coincidence detection, classification and image recognition. In previous work we have demonstrated that the neuromorphic capabilities of silicon-rich silicon oxide (SiOx) resistance switching devices extend beyond plasticity to include thresholding, spiking, and integration. We previously demonstrated such behaviors in devices operated in the unipolar mode, opening up the question of whether we could add plasticity to the list of features exhibited by our devices. Here we demonstrate clear STDP in unipolar devices. Significantly, we show that the response of our devices is broadly similar to that of biological synapses. This work further reinforces the potential of simple two-terminal RRAM devices to mimic neuronal functionality in hardware spiking neural networks.

  8. Learning of spiking networks with different forms of long-term synaptic plasticity

    International Nuclear Information System (INIS)

    Vlasov, D.S.; Sboev, A.G.; Serenko, A.V.; Rybka, R.B.; Moloshnikov, I.A.

    2016-01-01

    The possibility of modeling the learning process based on different forms of spike timing-dependent plasticity (STDP) has been studied. It has been shown that the learnability depends on the choice of the spike pairing scheme in the STDP rule and the type of the input signal used during learning.

  9. Evoking prescribed spike times in stochastic neurons

    Science.gov (United States)

    Doose, Jens; Lindner, Benjamin

    2017-09-01

    Single cell stimulation in vivo is a powerful tool to investigate the properties of single neurons and their functionality in neural networks. We present a method to determine a cell-specific stimulus that reliably evokes a prescribed spike train with high temporal precision of action potentials. We test the performance of this stimulus in simulations for two different stochastic neuron models. For a broad range of parameters and a neuron firing with intermediate firing rates (20-40 Hz) the reliability in evoking the prescribed spike train is close to its theoretical maximum that is mainly determined by the level of intrinsic noise.

  10. Unsupervised Learning of Digit Recognition Using Spike-Timing-Dependent Plasticity

    Directory of Open Access Journals (Sweden)

    Peter U. Diehl

    2015-08-01

    Full Text Available In order to understand how the mammalian neocortex performs computations, two things are necessary: we need a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there has been increasing interest in how spiking neural networks (SNN) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to an SNN. We present an SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e. conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks.
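
    One of the listed mechanisms, the adaptive spiking threshold, amounts to raising a neuron's effective threshold a little each time it fires and letting that offset decay slowly; a minimal sketch with illustrative constants (not the paper's values) is:

      import numpy as np

      # Adaptive threshold (homeostasis): theta grows by theta_plus at every output
      # spike and decays back toward 0, discouraging any one neuron from dominating.
      dt, tau_theta, theta_plus = 1.0, 1e4, 0.05     # ms, decay constant, increment
      v_th_base = 1.0
      theta = 0.0
      rng = np.random.default_rng(3)

      v = 0.0
      for step in range(1000):
          v += 0.1 * rng.random()                    # toy input drive
          v *= 0.95                                  # membrane leak
          theta *= np.exp(-dt / tau_theta)           # threshold offset decays slowly
          if v >= v_th_base + theta:                 # effective threshold is adaptive
              theta += theta_plus                    # firing raises the threshold
              v = 0.0
      print("final adaptive offset theta:", round(theta, 3))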

  11. The influence of single bursts versus single spikes at excitatory dendrodendritic synapses.

    Science.gov (United States)

    Masurkar, Arjun V; Chen, Wei R

    2012-02-01

    The synchronization of neuronal activity is thought to enhance information processing. There is much evidence supporting rhythmically bursting external tufted cells (ETCs) of the rodent olfactory bulb glomeruli coordinating the activation of glomerular interneurons and mitral cells via dendrodendritic excitation. However, as bursting has variable significance at axodendritic cortical synapses, it is not clear if ETC bursting imparts a specific functional advantage over the preliminary spike in dendrodendritic synaptic networks. To answer this question, we investigated the influence of single ETC bursts and spikes with the in vitro rat olfactory bulb preparation at different levels of processing, via calcium imaging of presynaptic ETC dendrites, dual electrical recording of ETC -interneuron synaptic pairs, and multicellular calcium imaging of ETC-induced population activity. Our findings supported single ETC bursts, versus single spikes, driving robust presynaptic calcium signaling, which in turn was associated with profound extension of the initial monosynaptic spike-driven dendrodendritic excitatory postsynaptic potential. This extension could be driven by either the spike-dependent or spike-independent components of the burst. At the population level, burst-induced excitation was more widespread and reliable compared with single spikes. This further supports the ETC network, in part due to a functional advantage of bursting at excitatory dendrodendritic synapses, coordinating synchronous activity at behaviorally relevant frequencies related to odor processing in vivo. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  12. FPGA IMPLEMENTATION OF ADAPTIVE INTEGRATED SPIKING NEURAL NETWORK FOR EFFICIENT IMAGE RECOGNITION SYSTEM

    Directory of Open Access Journals (Sweden)

    T. Pasupathi

    2014-05-01

    Full Text Available Image recognition is a technology which can be used in various applications such as medical image recognition systems, security, defense video tracking, and factory automation. In this paper we present a novel pipelined architecture of an adaptive integrated Artificial Neural Network for image recognition. In our proposed work we have combined the spiking neuron concept with an ANN to achieve an efficient architecture for image recognition. The set of training images is used to train the ANN and the target outputs are identified. Real-time video is captured and then converted into frames for testing, and the images are recognized. The machine can operate at up to 40 frames/sec using images acquired from the camera. The system has been implemented on an XC3S400 SPARTAN-3 Field Programmable Gate Array.

  13. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Science.gov (United States)

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-01-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining high classification correct rate and high speed computation. PMID:24189331
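
    The generalized Hebbian algorithm used above for feature extraction is Sanger's rule, which in software is a one-line matrix update (the hardware pipelines exactly this computation). The sketch below uses hypothetical 64-sample waveforms built from two templates purely for illustration.

      import numpy as np

      def gha_update(W, x, lr=0.01):
          """Generalized Hebbian algorithm (Sanger's rule) step:
          W <- W + lr * (y x^T - LT[y y^T] W), with y = W x.
          Rows of W converge to the leading principal components of x."""
          y = W @ x
          return W + lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

      # Toy usage: extract 2 principal components of 64-sample spike waveforms.
      rng = np.random.default_rng(4)
      templates = np.vstack([np.sin(np.linspace(0, np.pi, 64)),
                             np.sin(np.linspace(0, 2 * np.pi, 64))])
      templates /= np.linalg.norm(templates, axis=1, keepdims=True)
      W = 0.01 * rng.standard_normal((2, 64))
      for _ in range(5000):
          x = templates.T @ rng.standard_normal(2) + 0.1 * rng.standard_normal(64)
          W = gha_update(W, x)
      print(W.shape)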

  14. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Sheng-Ying Lai

    2013-11-01

    Full Text Available This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining high classification correct rate and high speed computation.

  15. Spikes matter for phase-locked bursting in inhibitory neurons

    Science.gov (United States)

    Jalil, Sajiya; Belykh, Igor; Shilnikov, Andrey

    2012-03-01

    We show that inhibitory networks composed of two endogenously bursting neurons can robustly display several coexistent phase-locked states in addition to stable antiphase and in-phase bursting. This work complements and enhances our recent result [Jalil, Belykh, and Shilnikov, Phys. Rev. E 81, 045201(R) (2010)] that fast reciprocal inhibition can synchronize bursting neurons due to spike interactions. We reveal the role of spikes in generating multiple phase-locked states and demonstrate that this multistability is generic by analyzing diverse models of bursting networks with various fast inhibitory synapses; the individual cell models include the reduced leech heart interneuron, the Sherman model for pancreatic beta cells, and the Purkinje neuron model.

  16. A theory of loop formation and elimination by spike timing-dependent plasticity

    Directory of Open Access Journals (Sweden)

    James Kozloski

    2010-03-01

    Full Text Available We show that the local Spike Timing-Dependent Plasticity (STDP) rule has the effect of regulating the trans-synaptic weights of loops of any length within a simulated network of neurons. We show that depending on STDP's polarity, functional loops are formed or eliminated in networks driven to normal spiking conditions by random, partially correlated inputs, where functional loops comprise synaptic weights that exceed a non-zero threshold. We further prove that STDP is a form of loop-regulating plasticity for the case of a linear network driven by noise. Thus a notable local synaptic learning rule makes a specific prediction about synapses in the brain in which standard STDP is present: that under normal spiking conditions, they should participate in predominantly feed-forward connections at all scales. Our model implies that any deviations from this prediction would require a substantial modification to the hypothesized role for standard STDP. Given its widespread occurrence in the brain, we predict that STDP could also regulate long-range functional loops among individual neurons across all brain scales, up to, and including, the scale of global brain network topology.

  17. Spike neural models (part I: The Hodgkin-Huxley model

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2017-05-01

    Full Text Available Artificial neural networks, or ANNs, have evolved considerably since their inception in the 1940s. But no matter the changes, one of the most important components of neural networks remains the node, which represents the neuron. Within spiking neural networks, the node is especially important because it contains the functions and properties of neurons that are necessary for their network. One important aspect of neurons is the ionic flow which produces action potentials, or spikes. Forces of diffusion and electrostatic pressure work together with the physical properties of the cell to move ions across the membrane, changing the membrane potential and ultimately producing the action potential. This tutorial reviews the Hodgkin-Huxley model and shows how it simulates the ionic flow of the giant squid axon via four differential equations. The model is implemented in Matlab using Euler's method to approximate the differential equations. Using Euler's method introduces an extra parameter, the time step, which must be chosen carefully or the simulated node's results may be impaired.
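
    The tutorial itself uses Matlab; purely for orientation, here is a minimal forward-Euler sketch of the same four-equation Hodgkin-Huxley model in Python/NumPy. The constants are the standard squid-axon values, and dt is the extra time-step parameter the abstract warns must be chosen carefully.

```python
import numpy as np

# standard Hodgkin-Huxley parameters (mV, ms, uF/cm^2, mS/cm^2)
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.387

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration; dt must stay small (~0.01 ms) or the solution degrades."""
    steps = int(T / dt)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting state
    trace = np.empty(steps)
    for i in range(steps):
        INa = gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I_ext - INa - IK - IL) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace[i] = V
    return trace

voltage = simulate()   # spiking membrane-potential trace for a 10 uA/cm^2 step current
```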

  18. Big Data Clustering via Community Detection and Hyperbolic Network Embedding in IoT Applications.

    Science.gov (United States)

    Karyotis, Vasileios; Tsitseklis, Konstantinos; Sotiropoulos, Konstantinos; Papavassiliou, Symeon

    2018-04-15

    In this paper, we present a novel data clustering framework for big sensory data produced by IoT applications. Based on a network representation of the relations among multi-dimensional data, data clustering is mapped to node clustering over the produced data graphs. To address the potential very large scale of such datasets/graphs that test the limits of state-of-the-art approaches, we map the problem of data clustering to a community detection one over the corresponding data graphs. Specifically, we propose a novel computational approach for enhancing the traditional Girvan-Newman (GN) community detection algorithm via hyperbolic network embedding. The data dependency graph is embedded in the hyperbolic space via Rigel embedding, allowing more efficient computation of edge-betweenness centrality needed in the GN algorithm. This allows for more efficient clustering of the nodes of the data graph in terms of modularity, without sacrificing considerable accuracy. In order to study the operation of our approach with respect to enhancing GN community detection, we employ various representative types of artificial complex networks, such as scale-free, small-world and random geometric topologies, and frequently-employed benchmark datasets for demonstrating its efficacy in terms of data clustering via community detection. Furthermore, we provide a proof-of-concept evaluation by applying the proposed framework over multi-dimensional datasets obtained from an operational smart-city/building IoT infrastructure provided by the Federated Interoperable Semantic IoT/cloud Testbeds and Applications (FIESTA-IoT) testbed federation. It is shown that the proposed framework can be indeed used for community detection/data clustering and exploited in various other IoT applications, such as performing more energy-efficient smart-city/building sensing.
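
    The speed-up described in the paper comes from computing edge betweenness in a hyperbolic (Rigel) embedding, which is not reproduced here. As a point of reference, the sketch below shows only the baseline Girvan-Newman loop with modularity tracking, using NetworkX on a synthetic graph; the graph and the stopping rule are illustrative assumptions.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman, modularity

# toy "data dependency graph"; in the paper the speed-up comes from computing
# edge betweenness in a hyperbolic (Rigel) embedding, which is omitted here
G = nx.connected_watts_strogatz_graph(200, 6, 0.1, seed=1)

best_partition, best_q = None, -1.0
for communities in girvan_newman(G):          # successive edge-betweenness removals
    partition = [set(c) for c in communities]
    q = modularity(G, partition)
    if q > best_q:
        best_partition, best_q = partition, q
    if len(partition) > 20:                   # stop after a reasonable number of splits
        break

print(len(best_partition), round(best_q, 3))
```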

  19. Multispectral embedding-based deep neural network for three-dimensional human pose recovery

    Science.gov (United States)

    Yu, Jialin; Sun, Jifeng

    2018-01-01

    Monocular image-based three-dimensional (3-D) human pose recovery aims to retrieve 3-D poses using the corresponding two-dimensional image features. Therefore, the pose recovery performance highly depends on the image representations. We propose a multispectral embedding-based deep neural network (MSEDNN) to automatically obtain the most discriminative features from multiple deep convolutional neural networks and then embed their penultimate fully connected layers into a low-dimensional manifold. This compact manifold can exploit not only the optimum outputs from multiple deep networks but also their complementary properties. Furthermore, the distribution of each hierarchical discriminative manifold is sufficiently smooth that the training process of our MSEDNN can be effectively implemented using only a few labeled data. Our proposed network contains a body joint detector and a human pose regressor that are jointly trained. Extensive experiments conducted on four databases show that our proposed MSEDNN achieves the best recovery performance compared with state-of-the-art methods.

  20. Spiking Neural Networks with Unsupervised Learning Based on STDP Using Resistive Synaptic Devices and Analog CMOS Neuron Circuit.

    Science.gov (United States)

    Kwon, Min-Woo; Baek, Myung-Hyun; Hwang, Sungmin; Kim, Sungjun; Park, Byung-Gook

    2018-09-01

    We designed a CMOS analog integrate-and-fire (I&F) neuron circuit that can drive resistive synaptic devices. The neuron circuit consists of a current mirror for spatial integration, a capacitor for temporal integration, an asymmetric negative and positive pulse generation part, a refractory part, and finally a back-propagation pulse generation part for learning of the synaptic devices. The resistive synaptic devices were fabricated using an HfOx switching layer deposited by atomic layer deposition (ALD). The resistive synaptic device shows gradual set and reset characteristics, and its conductance is adjusted by the spike-timing-dependent plasticity (STDP) learning rule. We carried out circuit simulations of the synaptic device and the CMOS neuron circuit, and developed an unsupervised spiking neural network (SNN) for 5 × 5 pattern recognition and classification using the neuron circuit and synaptic devices. The hardware-based SNN can autonomously and efficiently control the weight updates of the synapses between neurons, without the aid of software calculations.

  1. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    International Nuclear Information System (INIS)

    Nasser, Hassan; Cessac, Bruno; Marre, Olivier

    2013-01-01

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles. (paper)
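
    To make the maximum entropy idea concrete, the following toy sketch does no fitting but simply samples a pairwise (Ising-like) MaxEnt model of synchronous spike patterns with a Metropolis Monte Carlo chain, the kind of sampler that large-scale fitting procedures rely on. The spatio-temporal Markovian extension discussed in the paper is not reproduced, and the bias and coupling parameters h and J are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10                                     # neurons
h = rng.normal(-1.0, 0.3, N)               # biases (illustrative)
J = rng.normal(0.0, 0.1, (N, N)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)

def energy(s):
    # E(s) = -h.s - s.J.s/2 ;  P(s) ~ exp(-E(s))
    return -h @ s - 0.5 * s @ J @ s

def metropolis(n_samples=5000, burn=1000):
    s = rng.integers(0, 2, N).astype(float)
    samples = []
    for t in range(burn + n_samples):
        i = rng.integers(N)
        s_new = s.copy(); s_new[i] = 1 - s_new[i]       # flip one neuron's state
        if rng.random() < np.exp(energy(s) - energy(s_new)):
            s = s_new                                    # Metropolis acceptance
        if t >= burn:
            samples.append(s.copy())
    return np.array(samples)

patterns = metropolis()
print(patterns.mean(axis=0))   # model firing rates; compared against data when fitting h, J
```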

  2. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for evaluation of spike-sorting algorithms.

    Science.gov (United States)

    Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T

    2015-04-30

    New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Real-time classification and sensor fusion with a spiking deep belief network.

    Science.gov (United States)

    O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael

    2013-01-01

    Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input.

  4. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    Science.gov (United States)

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with the inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to both heavy tailed or narrow coincidence distribution. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.
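
    A minimal version of the kind of Monte Carlo experiment described can be sketched as follows: generate pairs of independent spike trains with non-Poisson (here gamma-distributed, i.e., renewal) inter-spike intervals, count near-coincident spikes, and build the null coincidence-count distribution from many repetitions. The paper's non-renewal, history-dependent ISI model is more elaborate; the rates, gamma shape and coincidence window below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_spike_train(rate, T, shape=4.0):
    """Renewal train with gamma ISIs of mean 1/rate; shape=1 recovers a Poisson process."""
    isis = rng.gamma(shape, 1.0 / (rate * shape), size=int(3 * rate * T))
    t = np.cumsum(isis)
    return t[t < T]

def coincidences(t1, t2, window=0.005):
    """Count spikes in t1 whose nearest spike in t2 lies within +/- window seconds."""
    idx = np.searchsorted(t2, t1)
    left = np.abs(t1 - t2[np.clip(idx - 1, 0, len(t2) - 1)])
    right = np.abs(t1 - t2[np.clip(idx, 0, len(t2) - 1)])
    return int(np.sum(np.minimum(left, right) <= window))

# Monte Carlo null distribution of joint spike counts for independent trains
counts = [coincidences(gamma_spike_train(20, 10), gamma_spike_train(20, 10))
          for _ in range(2000)]
print(np.mean(counts), np.std(counts))   # width of the coincidence-count distribution
```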

  5. Dynamics of directional coupling underlying spike-wave discharges

    NARCIS (Netherlands)

    Sysoeva, M.V.; Luttjohann, A.K.; Luijtelaar, E.L.J.M. van; Sysoev, I.V.

    2016-01-01

    Purpose: Spike and wave discharges (SWDs), generated within cortico-thalamo-cortical networks, are the electroencephalographic biomarker of absence epilepsy. The current work aims to identify mechanisms of SWD initiation, maintenance and termination by the analyses of dynamics and directionality of

  6. Complexity optimization and high-throughput low-latency hardware implementation of a multi-electrode spike-sorting algorithm.

    Science.gov (United States)

    Dragas, Jelena; Jackel, David; Hierlemann, Andreas; Franke, Felix

    2015-03-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction.

  7. Spatiotemporal mapping of interictal spike propagation: a novel methodology applied to pediatric intracranial EEG recordings.

    Directory of Open Access Journals (Sweden)

    Samuel Tomlinson

    2016-12-01

    Full Text Available Synchronized cortical activity is implicated in both normative cognitive functioning and many neurological disorders. For epilepsy patients with intractable seizures, irregular patterns of synchronization within the epileptogenic zone (EZ) are believed to provide the network substrate through which seizures initiate and propagate. Mapping the EZ prior to epilepsy surgery is critical for detecting seizure networks in order to achieve post-surgical seizure control. However, automated techniques for characterizing epileptic networks have yet to gain traction in the clinical setting. Recent advances in signal processing and spike detection have made it possible to examine the spatiotemporal propagation of interictal spike discharges across the epileptic cortex. In this study, we present a novel methodology for detecting, extracting, and visualizing spike propagation and demonstrate its potential utility as a biomarker for the epileptogenic zone. Eighteen pre-surgical intracranial EEG recordings were obtained from pediatric patients ultimately experiencing favorable (i.e., seizure-free, n = 9) or unfavorable (i.e., seizure-persistent, n = 9) surgical outcomes. Novel algorithms were applied to extract multi-channel spike discharges and visualize their spatiotemporal propagation. Quantitative analysis of spike propagation was performed using trajectory clustering and spatial autocorrelation techniques. Comparison of interictal propagation patterns revealed an increase in trajectory organization (i.e., spatial autocorrelation) among Sz-Free patients compared to Sz-Persist patients. The pathophysiological basis and clinical implications of these findings are considered.

  8. The role of degree distribution in shaping the dynamics in networks of sparsely connected spiking neurons

    Directory of Open Access Journals (Sweden)

    Alex eRoxin

    2011-03-01

    Full Text Available Neuronal network models often assume a fixed probability of connection between neurons. This assumption leads to random networks with binomial in-degree and out-degree distributions which are relatively narrow. Here I study the effect of broad degree distributions on network dynamics by interpolating between a binomial and a truncated power-law distribution for the in-degree and out-degree independently. This is done both for an inhibitory network (I network) as well as for the recurrent excitatory connections in a network of excitatory and inhibitory neurons (EI network). In both cases increasing the width of the in-degree distribution affects the global state of the network by driving transitions between asynchronous behavior and oscillations. This effect is reproduced in a simplified rate model which includes the heterogeneity in neuronal input due to the in-degree of cells. On the other hand, broadening the out-degree distribution is shown to increase the fraction of common inputs to pairs of neurons. This leads to increases in the amplitude of the cross-correlation (CC) of synaptic currents. In the case of the I network, despite strong oscillatory CCs in the currents, CCs of the membrane potential are low due to filtering and reset effects, leading to very weak CCs of the spike count. In the asynchronous regime of the EI network, broadening the out-degree increases the amplitude of CCs in the recurrent excitatory currents, while the CC of the total current is essentially unaffected, as are pairwise spiking correlations. This is due to a dynamic balance between excitatory and inhibitory synaptic currents. In the oscillatory regime, changes in the out-degree can have a large effect on spiking correlations and even on the qualitative dynamical state of the network.

  9. A Reinforcement Sensor Embedded Vertical Handoff Controller for Vehicular Heterogeneous Wireless Networks

    Directory of Open Access Journals (Sweden)

    Lin Ma

    2013-11-01

    Full Text Available Vehicular communication platforms that provide real-time access to wireless networks have drawn more and more attention in recent years. IEEE 802.11p is the main radio access technology that supports communication for high mobility terminals, however, due to its limited coverage, IEEE 802.11p is usually deployed by coupling with cellular networks to achieve seamless mobility. In a heterogeneous cellular/802.11p network, vehicular communication is characterized by its short time span in association with a wireless local area network (WLAN. Moreover, for the media access control (MAC scheme used for WLAN, the network throughput dramatically decreases with increasing user quantity. In response to these compelling problems, we propose a reinforcement sensor (RFS embedded vertical handoff control strategy to support mobility management. The RFS has online learning capability and can provide optimal handoff decisions in an adaptive fashion without prior knowledge. The algorithm integrates considerations including vehicular mobility, traffic load, handoff latency, and network status. Simulation results verify that the proposed algorithm can adaptively adjust the handoff strategy, allowing users to stay connected to the best network. Furthermore, the algorithm can ensure that RSUs are adequate, thereby guaranteeing a high quality user experience.

  10. Sketches of a hammer-impact, spiked-base, shear-wave source

    Science.gov (United States)

    Hasbrouck, W.P.

    1983-01-01

    Generation of shear waves in shallow seismic investigations (those to depths usually less than 100 m) can be accomplished by horizontally striking with a hammer either the end of a wood plank or metal structure embedded at the ground surface. The dimensioned sketches of this report are of a steel, hammer-impact, spiked-base, shear-wave source. It has been used on outcrops and in a desert environment and for conducting experiments on the effect of rotating source direction.

  11. A Fully Automated Approach to Spike Sorting.

    Science.gov (United States)

    Chung, Jason E; Magland, Jeremy F; Barnett, Alex H; Tolosa, Vanessa M; Tooker, Angela C; Lee, Kye Y; Shah, Kedar G; Felix, Sarah H; Frank, Loren M; Greengard, Leslie F

    2017-09-13

    Understanding the detailed dynamics of neuronal networks will require the simultaneous measurement of spike trains from hundreds of neurons (or more). Currently, approaches to extracting spike times and labels from raw data are time consuming, lack standardization, and involve manual intervention, making it difficult to maintain data provenance and assess the quality of scientific results. Here, we describe an automated clustering approach and associated software package that addresses these problems and provides novel cluster quality metrics. We show that our approach has accuracy comparable to or exceeding that achieved using manual or semi-manual techniques with desktop central processing unit (CPU) runtimes faster than acquisition time for up to hundreds of electrodes. Moreover, a single choice of parameters in the algorithm is effective for a variety of electrode geometries and across multiple brain regions. This algorithm has the potential to enable reproducible and automated spike sorting of larger scale recordings than is currently possible. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Big Data Clustering via Community Detection and Hyperbolic Network Embedding in IoT Applications

    Directory of Open Access Journals (Sweden)

    Vasileios Karyotis

    2018-04-01

    Full Text Available In this paper, we present a novel data clustering framework for big sensory data produced by IoT applications. Based on a network representation of the relations among multi-dimensional data, data clustering is mapped to node clustering over the produced data graphs. To address the potential very large scale of such datasets/graphs that test the limits of state-of-the-art approaches, we map the problem of data clustering to a community detection one over the corresponding data graphs. Specifically, we propose a novel computational approach for enhancing the traditional Girvan–Newman (GN) community detection algorithm via hyperbolic network embedding. The data dependency graph is embedded in the hyperbolic space via Rigel embedding, allowing more efficient computation of edge-betweenness centrality needed in the GN algorithm. This allows for more efficient clustering of the nodes of the data graph in terms of modularity, without sacrificing considerable accuracy. In order to study the operation of our approach with respect to enhancing GN community detection, we employ various representative types of artificial complex networks, such as scale-free, small-world and random geometric topologies, and frequently-employed benchmark datasets for demonstrating its efficacy in terms of data clustering via community detection. Furthermore, we provide a proof-of-concept evaluation by applying the proposed framework over multi-dimensional datasets obtained from an operational smart-city/building IoT infrastructure provided by the Federated Interoperable Semantic IoT/cloud Testbeds and Applications (FIESTA-IoT) testbed federation. It is shown that the proposed framework can be indeed used for community detection/data clustering and exploited in various other IoT applications, such as performing more energy-efficient smart-city/building sensing.

  13. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    Directory of Open Access Journals (Sweden)

    Andreas Stöckel

    2017-08-01

    Full Text Available Large-scale neuromorphic hardware platforms, specialized computer systems for energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method allows one to test the quality of the neuron model implementation and to explain significant deviations from the expected reference output.
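
    The benchmark is built on the binary (Willshaw-type) neural associative memory. For readers unfamiliar with it, a minimal non-spiking NumPy version of storage and recall is sketched below; the pattern sizes and sparsity are illustrative assumptions, and the actual benchmark maps this memory onto spiking neurons running on neuromorphic hardware.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, n_patterns = 256, 8, 50            # neurons, active bits per pattern, stored pairs

def sparse_pattern():
    p = np.zeros(n, dtype=np.uint8)
    p[rng.choice(n, k, replace=False)] = 1
    return p

inputs = [sparse_pattern() for _ in range(n_patterns)]
outputs = [sparse_pattern() for _ in range(n_patterns)]

# Willshaw storage: binary weight matrix, clipped Hebbian outer products
W = np.zeros((n, n), dtype=np.uint8)
for x, y in zip(inputs, outputs):
    W |= np.outer(y, x)

def recall(x, threshold=k):
    """Activate output units whose dendritic sum reaches the number of active input bits."""
    return (W @ x >= threshold).astype(np.uint8)

errors = sum(np.any(recall(x) != y) for x, y in zip(inputs, outputs))
print(f"{errors} of {n_patterns} associations recalled with errors")
```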

  14. Unsupervised Document Embedding With CNNs

    OpenAIRE

    Liu, Chundi; Zhao, Shunan; Volkovs, Maksims

    2017-01-01

    We propose a new model for unsupervised document embedding. Leading existing approaches either require complex inference or use recurrent neural networks (RNN) that are difficult to parallelize. We take a different route and develop a convolutional neural network (CNN) embedding model. Our CNN architecture is fully parallelizable resulting in over 10x speedup in inference time over RNN models. Parallelizable architecture enables to train deeper models where each successive layer has increasin...

  15. Brainmapping Neuronal Networks in Children with Continuous Spikes and Waves during Slow Sleep as revealed by DICS and RPDC

    OpenAIRE

    Dierck, Carina

    2018-01-01

    CSWS is an age-related epileptic encephalopathy consisting of the triad of seizures, neuropsychological impairment and a specific EEG-pattern. This EEG-pattern is characterized by spike-and-wave-discharges emphasized during non-REM sleep. Until now, little has been known about the pathophysiologic processes. So far research approaches on the underlying neuronal network have been based on techniques with a good spatial but poor temporal resolution like fMRI and FDG-PET. In this study the se...

  16. Spike-Based Bayesian-Hebbian Learning of Temporal Sequences.

    Directory of Open Access Journals (Sweden)

    Philip J Tully

    2016-05-01

    Full Text Available Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx. We show that the learning and speed of sequence replay depends on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.

  17. Embedded Web Technology: Applying World Wide Web Standards to Embedded Systems

    Science.gov (United States)

    Ponyik, Joseph G.; York, David W.

    2002-01-01

    Embedded Systems have traditionally been developed in a highly customized manner. The user interface hardware and software, along with the interface to the embedded system, are typically unique to the system for which they are built, resulting in extra cost to the system in terms of development time and maintenance effort. World Wide Web standards have been developed over the past ten years with the goal of allowing servers and clients to interoperate seamlessly. The client and server systems can consist of differing hardware and software platforms, but the World Wide Web standards allow them to interface without knowing the details of the system at the other end of the interface. Embedded Web Technology is the merging of Embedded Systems with the World Wide Web. Embedded Web Technology decreases the cost of developing and maintaining the user interface by allowing the user to interface to the embedded system through a web browser running on a standard personal computer. Embedded Web Technology can also be used to simplify an Embedded System's internal network.

  18. Supervised Learning Using Spike-Timing-Dependent Plasticity of Memristive Synapses.

    Science.gov (United States)

    Nishitani, Yu; Kaneko, Yukihiro; Ueda, Michihito

    2015-12-01

    We propose a supervised learning model that enables error backpropagation for spiking neural network hardware. The method is modeled by modifying an existing model to suit the hardware implementation. An example of a network circuit for the model is also presented. In this circuit, a three-terminal ferroelectric memristor (3T-FeMEM), which is a field-effect transistor with a gate insulator composed of ferroelectric materials, is used as an electric synapse device to store the analog synaptic weight. Our model can be implemented by reflecting the network error to the write voltage of the 3T-FeMEMs and introducing a spike-timing-dependent learning function to the device. An XOR problem was successfully demonstrated as a benchmark learning by numerical simulations using the circuit properties to estimate the learning performance. In principle, the learning time per step of this supervised learning model and the circuit is independent of the number of neurons in each layer, promising a high-speed and low-power calculation in large-scale neural networks.

  19. Conduction Delay Learning Model for Unsupervised and Supervised Classification of Spatio-Temporal Spike Patterns.

    Science.gov (United States)

    Matsubara, Takashi

    2017-01-01

    Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning.

  20. Diverse spike-timing-dependent plasticity based on multilevel HfOx memristor for neuromorphic computing

    Science.gov (United States)

    Lu, Ke; Li, Yi; He, Wei-Fan; Chen, Jia; Zhou, Ya-Xiong; Duan, Nian; Jin, Miao-Miao; Gu, Wei; Xue, Kan-Hao; Sun, Hua-Jun; Miao, Xiang-Shui

    2018-06-01

    Memristors have emerged as promising candidates for artificial synaptic devices, serving as the building block of brain-inspired neuromorphic computing. In this letter, we developed a Pt/HfOx/Ti memristor with nonvolatile multilevel resistive switching behaviors due to the evolution of the conductive filaments and the variation in the Schottky barrier. Diverse state-dependent spike-timing-dependent-plasticity (STDP) functions were implemented with different initial resistance states. The measured STDP forms were adopted as the learning rule for a three-layer spiking neural network which achieves a 75.74% recognition accuracy on the MNIST handwritten digit dataset. This work has shown the capability of memristive synapses in spiking neural networks for pattern recognition applications.
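
    For orientation, the measured state-dependent STDP curves play the role of the pair-based exponential STDP rule sketched below in software; the amplitudes A+, A- and time constants here are illustrative assumptions, not the device characteristics reported in the letter.

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012     # illustrative learning amplitudes
tau_plus, tau_minus = 20.0, 20.0  # STDP time constants (ms)

def stdp_dw(dt):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:    # pre before post -> potentiation
        return A_plus * np.exp(-dt / tau_plus)
    else:         # post before (or with) pre -> depression
        return -A_minus * np.exp(dt / tau_minus)

# apply the rule to one synapse given lists of pre- and post-synaptic spike times
w = 0.5
pre_spikes = [10.0, 45.0, 80.0]
post_spikes = [12.0, 60.0, 79.0]
for tp in pre_spikes:
    for tq in post_spikes:
        w = np.clip(w + stdp_dw(tq - tp), 0.0, 1.0)   # conductance bounded like a device
print(w)
```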

  1. Performance evaluation of multi-channel wireless mesh networks with embedded systems.

    Science.gov (United States)

    Lam, Jun Huy; Lee, Sang-Gon; Tan, Whye Kit

    2012-01-01

    Many commercial wireless mesh network (WMN) products are available in the marketplace with their own proprietary standards, but interoperability among the different vendors is not possible. Open source communities have their own WMN implementations in accordance with the IEEE 802.11s draft standard, the Linux open80211s project and the FreeBSD WMN implementation. While some studies have focused on test beds of WMNs based on the open80211s project, none are based on FreeBSD. In this paper, we built an embedded system using the FreeBSD WMN implementation that utilizes two channels and evaluated its performance. This implementation allows legacy systems to connect to the WMN independent of the type of platform and distributes the load between the two non-overlapping channels. One channel is used for the backhaul connection and the other is used to connect stations to the wireless mesh network. By using the power-efficient 802.11 technology, this device can also be used as a gateway for a wireless sensor network (WSN).

  2. Computational modeling of seizure dynamics using coupled neuronal networks: factors shaping epileptiform activity.

    Directory of Open Access Journals (Sweden)

    Sebastien Naze

    2015-05-01

    Full Text Available Epileptic seizure dynamics span multiple scales in space and time. Understanding seizure mechanisms requires identifying the relations between seizure components within and across these scales, together with the analysis of their dynamical repertoire. Mathematical models have been developed to reproduce seizure dynamics across scales ranging from the single neuron to the neural population. In this study, we develop a network model of spiking neurons and systematically investigate the conditions, under which the network displays the emergent dynamic behaviors known from the Epileptor, which is a well-investigated abstract model of epileptic neural activity. This approach allows us to study the biophysical parameters and variables leading to epileptiform discharges at cellular and network levels. Our network model is composed of two neuronal populations, characterized by fast excitatory bursting neurons and regular spiking inhibitory neurons, embedded in a common extracellular environment represented by a slow variable. By systematically analyzing the parameter landscape offered by the simulation framework, we reproduce typical sequences of neural activity observed during status epilepticus. We find that exogenous fluctuations from extracellular environment and electro-tonic couplings play a major role in the progression of the seizure, which supports previous studies and further validates our model. We also investigate the influence of chemical synaptic coupling in the generation of spontaneous seizure-like events. Our results argue towards a temporal shift of typical spike waves with fast discharges as synaptic strengths are varied. We demonstrate that spike waves, including interictal spikes, are generated primarily by inhibitory neurons, whereas fast discharges during the wave part are due to excitatory neurons. Simulated traces are compared with in vivo experimental data from rodents at different stages of the disorder. We draw the conclusion

  3. Spike persistence and normalization in benign epilepsy with centrotemporal spikes - Implications for management.

    Science.gov (United States)

    Kim, Hunmin; Kim, Soo Yeon; Lim, Byung Chan; Hwang, Hee; Chae, Jong-Hee; Choi, Jieun; Kim, Ki Joong; Dlugos, Dennis J

    2018-05-10

    This study was performed 1) to determine the timing of spike normalization in patients with benign epilepsy with centrotemporal spikes (BECTS); 2) to identify relationships between age of seizure onset, age of spike normalization, years of spike persistence and treatment; and 3) to assess final outcomes between groups of patients with or without spikes at the time of medication tapering. Retrospective analysis of BECTS patients confirmed by clinical data, including age of onset, seizure semiology and serial electroencephalography (EEG) from diagnosis to remission. Age at spike normalization, years of spike persistence, and time of treatment onset to spike normalization were assessed. Final seizure and EEG outcome were compared between the groups with or without spikes at the time of AED tapering. One hundred and thirty-four patients were included. Mean age at seizure onset was 7.52 ± 2.11 years. Mean age at spike normalization was 11.89 ± 2.11 (range: 6.3-16.8) years. Mean time of treatment onset to spike normalization was 4.11 ± 2.13 (range: 0.24-10.08) years. Younger age of seizure onset was correlated with longer duration of spike persistence (r = -0.41, p < 0.001). In treated patients, spikes persisted for 4.1 ± 1.95 years, compared with 2.9 ± 1.97 years in untreated patients. No patients had recurrent seizures after AED was discontinued, regardless of the presence/absence of spikes at time of AED tapering. Years of spike persistence was longer in early onset BECTS patients. Treatment with AEDs did not shorten years of spike persistence. Persistence of spikes at time of treatment withdrawal was not associated with seizure recurrence. Copyright © 2018 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  4. Simulating large-scale spiking neuronal networks with NEST

    OpenAIRE

    Schücker, Jannis; Eppler, Jochen Martin

    2014-01-01

    The Neural Simulation Tool NEST [1, www.nest-simulator.org] is the simulator for spiking neural network models of the HBP that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. Its simulation kernel is written in C++ and it runs on computing hardware ranging from simple laptops to clusters and supercomputers with thousands of processor cores. The development of NEST is coordinated by the NEST Initiative [www.nest-initiative.or...

  5. Inferior Olive HCN1 Channels Coordinate Synaptic Integration and Complex Spike Timing

    Directory of Open Access Journals (Sweden)

    Derek L.F. Garden

    2018-02-01

    Full Text Available Cerebellar climbing-fiber-mediated complex spikes originate from neurons in the inferior olive (IO, are critical for motor coordination, and are central to theories of cerebellar learning. Hyperpolarization-activated cyclic-nucleotide-gated (HCN channels expressed by IO neurons have been considered as pacemaker currents important for oscillatory and resonant dynamics. Here, we demonstrate that in vitro, network actions of HCN1 channels enable bidirectional glutamatergic synaptic responses, while local actions of HCN1 channels determine the timing and waveform of synaptically driven action potentials. These roles are distinct from, and may complement, proposed pacemaker functions of HCN channels. We find that in behaving animals HCN1 channels reduce variability in the timing of cerebellar complex spikes, which serve as a readout of IO spiking. Our results suggest that spatially distributed actions of HCN1 channels enable the IO to implement network-wide rules for synaptic integration that modulate the timing of cerebellar climbing fiber signals.

  6. Galactic center gamma-ray excess from dark matter annihilation: is there a black hole spike?

    Science.gov (United States)

    Fields, Brian D; Shapiro, Stuart L; Shelton, Jessie

    2014-10-10

    If the supermassive black hole Sgr A* at the center of the Milky Way grew adiabatically from an initial seed embedded in a Navarro-Frenk-White dark matter (DM) halo, then the DM profile near the hole has steepened into a spike. We calculate the dramatic enhancement to the gamma-ray flux from the Galactic center (GC) from such a spike if the 1-3 GeV excess observed in Fermi data is due to DM annihilations. We find that for the parameter values favored in recent fits, the point-source-like flux from the spike is 35 times greater than the flux from the inner 1° of the halo, far exceeding all Fermi point source detections near the GC. We consider the dependence of the spike signal on astrophysical and particle parameters and conclude that if the GC excess is due to DM, then a canonical adiabatic spike is disfavored by the data. We discuss alternative Galactic histories that predict different spike signals, including (i) the nonadiabatic growth of the black hole, possibly associated with halo and/or black hole mergers, (ii) gravitational interaction of DM with baryons in the dense core, such as heating by stars, or (iii) DM self-interactions. We emphasize that the spike signal is sensitive to a different combination of particle parameters than the halo signal and that the inclusion of a spike component to any DM signal in future analyses would provide novel information about both the history of the GC and the particle physics of DM annihilations.

  7. Electricity price forecasting using Enhanced Probability Neural Network

    International Nuclear Information System (INIS)

    Lin, Whei-Min; Gow, Hong-Jey; Tsai, Ming-Tang

    2010-01-01

    This paper proposes a price forecasting system for electric market participants to reduce the risk of price volatility. Combining the Probability Neural Network (PNN) and Orthogonal Experimental Design (OED), an Enhanced Probability Neural Network (EPNN) is proposed in the solving process. In this paper, the Locational Marginal Price (LMP), system load and temperature of the PJM system were collected and the data clusters were embedded in the Excel Database according to the year, season, workday, and weekend. With the OED used to smooth parameters in the EPNN, the forecasting error can be reduced during the training process to promote accuracy and reliability, such that even price "spikes" can be tracked closely. Simulation results show the effectiveness of the proposed EPNN to provide quality information in a price-volatile environment. (author)
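
    The PNN at the core of the EPNN is a Parzen-window classifier; a minimal sketch is given below, with the smoothing parameter sigma simply hand-set rather than tuned by orthogonal experimental design as in the paper, and with synthetic stand-in features instead of the PJM load, temperature and LMP data.

```python
import numpy as np

class PNN:
    """Probabilistic neural network: one Gaussian Parzen density estimate per class."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma           # smoothing parameter (tuned via OED in the paper)

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        scores = []
        for c in self.classes:
            Xc = self.X[self.y == c]
            d2 = ((X[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
            scores.append(np.exp(-d2 / (2 * self.sigma ** 2)).mean(axis=1))
        return self.classes[np.argmax(np.stack(scores), axis=0)]

# toy use: features could stand in for (load, temperature, hour); labels for price bands
X_train = np.random.rand(200, 3)
y_train = (X_train[:, 0] + X_train[:, 1] > 1).astype(int)   # synthetic "spike / no spike"
print(PNN(sigma=0.3).fit(X_train, y_train).predict(np.random.rand(5, 3)))
```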

  8. Recurrent neural networks with specialized word embeddings for health-domain named-entity recognition.

    Science.gov (United States)

    Jauregi Unanue, Iñigo; Zare Borzeshi, Ehsan; Piccardi, Massimo

    2017-12-01

    Previous state-of-the-art systems on Drug Name Recognition (DNR) and Clinical Concept Extraction (CCE) have focused on a combination of text "feature engineering" and conventional machine learning algorithms such as conditional random fields and support vector machines. However, developing good features is inherently very time-consuming. Conversely, more modern machine learning approaches such as recurrent neural networks (RNNs) have proved capable of automatically learning effective features from either random assignments or automated word "embeddings". (i) To create a highly accurate DNR and CCE system that avoids conventional, time-consuming feature engineering. (ii) To create richer, more specialized word embeddings by using health-domain datasets such as MIMIC-III. (iii) To evaluate our systems over three contemporary datasets. Two deep learning methods, namely the Bidirectional LSTM and the Bidirectional LSTM-CRF, are evaluated. A CRF model is set as the baseline to compare the deep learning systems to a traditional machine learning approach. The same features are used for all the models. We have obtained the best results with the Bidirectional LSTM-CRF model, which has outperformed all previously proposed systems. The specialized embeddings have helped to cover unusual words in DrugBank and MedLine, but not in the i2b2/VA dataset. We present a state-of-the-art system for DNR and CCE. Automated word embeddings have allowed us to avoid costly feature engineering and achieve higher accuracy. Nevertheless, the embeddings need to be retrained over datasets that are adequate for the domain, in order to adequately cover the domain-specific vocabulary. Copyright © 2017 Elsevier Inc. All rights reserved.
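
    A minimal PyTorch sketch of the Bidirectional LSTM tagger at the core of such a system is shown below; the CRF output layer and the specialized MIMIC-III embeddings are omitted, and the vocabulary size, tag set and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Token-level tagger; a CRF layer would normally be stacked on the emission scores."""
    def __init__(self, vocab_size=5000, emb_dim=100, hidden=128, n_tags=9):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)   # emission scores per tag

    def forward(self, token_ids):
        x = self.emb(token_ids)          # (batch, seq, emb_dim)
        h, _ = self.lstm(x)              # (batch, seq, 2*hidden)
        return self.out(h)               # (batch, seq, n_tags)

model = BiLSTMTagger()
tokens = torch.randint(1, 5000, (2, 12))        # two sentences of 12 token ids each
emissions = model(tokens)                        # tag scores; argmax or CRF decoding follows
print(emissions.shape)                           # torch.Size([2, 12, 9])
```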

  9. RESLanjut: The learning media for improve students understanding in embedded systems

    Science.gov (United States)

    Indrianto, Susanti, Meilia Nur Indah; Karina, Djunaidi

    2017-08-01

    Networking in embedded systems can be implemented through many kinds of interfaces, such as mobile phones, Bluetooth, modems, Ethernet cards and wireless technology, and enables remote control applications. In previous research, the researchers found that many students can comprehend the basic concepts of embedded systems and build embedded system tools, but without network integration. A further development of the embedded system practicum module is therefore needed. The design of the practicum module follows a prototyping method, which is widely used in practice; a prototype comprises part of a product, such as its logic or its external physical interface. The embedded system practicum module is intended to increase student comprehension of the embedded systems course, to encourage students to innovate on technology-based tools, and to help teachers convey the embedded system concepts in the course. Student comprehension is expected to improve through the use of the practicum.

  10. A novel automated spike sorting algorithm with adaptable feature extraction.

    Science.gov (United States)

    Bestel, Robert; Daus, Andreas W; Thielemann, Christiane

    2012-10-15

    To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable tool for analysis. Although in constant progress, a challenging task still remains in this area: the development of an efficient spike sorting algorithm that allows an accurate signal analysis at the single-cell level. Most sorting algorithms currently available only extract a specific feature type, such as the principal components or Wavelet coefficients of the measured spike signals, in order to separate different spike shapes generated by different neurons. However, due to the great variety in the obtained spike shapes, the derivation of an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, Wavelet and principal component-based features and (ii) automatically derives a feature subset, most suitable for sorting an individual set of spike signals. The new approach evaluates the probability distribution of the obtained spike features and consequently determines the candidates most suitable for the actual spike sorting. These candidates can be formed into an individually adjusted set of spike features, allowing a separation of the various shapes present in the obtained neuronal signal by a subsequent expectation maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed an excellent classification result, indicating the superior performance of the described algorithm. Copyright © 2012 Elsevier B.V. All rights reserved.
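
    A rough software analogue of the described pipeline is sketched below: build a mixed pool of PCA, wavelet and geometric features, select a subset, and cluster with a Gaussian mixture fitted by expectation maximisation. The paper's own probability-distribution-based feature scoring is replaced here by a simple kurtosis ranking, so this is an assumption-laden stand-in rather than the authors' algorithm.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

spikes = np.random.randn(400, 64)                 # aligned spike waveforms (toy data)

# 1) build a mixed feature pool: PCA scores, Haar wavelet coefficients, simple geometry
pca_feats = PCA(n_components=5).fit_transform(spikes)
wav_feats = np.hstack(pywt.wavedec(spikes, "haar", level=2, axis=-1))
geo_feats = np.column_stack([spikes.min(axis=1), spikes.max(axis=1),
                             spikes.argmin(axis=1)])
pool = np.hstack([pca_feats, wav_feats, geo_feats])
pool = (pool - pool.mean(0)) / (pool.std(0) + 1e-9)

# 2) crude adaptive selection (stand-in for the paper's probability-based scoring):
#    prefer features whose distribution departs most from a single Gaussian (low kurtosis)
best = np.argsort(kurtosis(pool, axis=0))[:8]
features = pool[:, best]

# 3) EM clustering of the selected features
labels = GaussianMixture(n_components=3, covariance_type="full",
                         random_state=0).fit_predict(features)
print(np.bincount(labels))
```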

  11. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Directory of Open Access Journals (Sweden)

    Rodrigo Cofré

    2018-01-01

    Full Text Available The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; the notion of pre-synaptic and post-synaptic neurons, stimulus correlations and noise correlations have a clear time order. Therefore, a biologically realistic statistical model for the spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework in the context of maximum entropy models to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling spike train statistics.
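
    For a stationary Markov chain with transition matrix P and stationary distribution π, the information entropy production takes the closed form EP = Σ_ij π_i P_ij log(π_i P_ij / (π_j P_ji)), which vanishes exactly when detailed balance (time reversibility) holds. The short NumPy check below evaluates this formula for an arbitrary three-state chain; it is only a numerical illustration, not a chain inferred from spike data.

```python
import numpy as np

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalized to a probability distribution."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

def entropy_production(P):
    pi = stationary(P)
    flux = pi[:, None] * P                        # stationary probability flux pi_i P_ij
    mask = (flux > 0) & (flux.T > 0)
    # EP = sum_ij pi_i P_ij log( pi_i P_ij / (pi_j P_ji) ); zero iff detailed balance holds
    return float(np.sum(flux[mask] * np.log(flux[mask] / flux.T[mask])))

P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.3, 0.1, 0.6]])
print(entropy_production(P))    # > 0 for this time-irreversible chain
```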

  12. Real-time radionuclide identification in γ-emitter mixtures based on spiking neural network

    International Nuclear Information System (INIS)

    Bobin, C.; Bichler, O.; Lourenço, V.; Thiam, C.; Thévenin, M.

    2016-01-01

    Portal radiation monitors dedicated to the prevention of illegal traffic of nuclear materials at international borders need to deliver as fast as possible a radionuclide identification of a potential radiological threat. Spectrometry techniques applied to identify the radionuclides contributing to γ-emitter mixtures are usually performed using off-line spectrum analysis. As an alternative to these usual methods, a real-time processing based on an artificial neural network and Bayes' rule is proposed for fast radionuclide identification. The validation of this real-time approach was carried out using γ-emitter spectra (241Am, 133Ba, 207Bi, 60Co, 137Cs) obtained with a high-efficiency well-type NaI(Tl) detector. The first tests showed that the proposed algorithm enables a fast identification of each γ-emitting radionuclide using the information given by the whole spectrum. Based on an iterative process, the on-line analysis only needs low-statistics spectra without energy calibration to identify the nature of a radiological threat. - Highlights: • A fast radionuclide identification algorithm applicable in spectroscopic portal monitors is presented. • The proposed algorithm combines a Bayesian sequential approach and a spiking neural network. • The algorithm was validated using the mixture of γ-emitter spectra provided by a well-type NaI(Tl) detector. • The radionuclide identification process is implemented using the whole γ-spectrum without energy calibration.
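
    The Bayes-rule part of such an approach is easy to sketch: maintain a posterior over candidate radionuclides and update it event by event from per-nuclide channel likelihoods. In the sketch below the response templates are random placeholders and the spiking-neural-network front end that supplies the likelihoods in the paper is not reproduced; the nuclide list and channel count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_channels, nuclides = 64, ["Am-241", "Ba-133", "Co-60", "Cs-137"]

# placeholder response templates: P(channel | nuclide), one normalized spectrum each
templates = rng.random((len(nuclides), n_channels))
templates /= templates.sum(axis=1, keepdims=True)

def identify(event_channels, prior=None):
    """Sequential Bayes update of P(nuclide | events) from detected channel indices."""
    post = np.full(len(nuclides), 1.0 / len(nuclides)) if prior is None else prior
    for ch in event_channels:
        post = post * templates[:, ch]       # likelihood of this event per hypothesis
        post /= post.sum()                   # renormalize -> posterior becomes next prior
    return post

true_source = 2                               # pretend the threat is Co-60
events = rng.choice(n_channels, size=200, p=templates[true_source])
posterior = identify(events)
print(dict(zip(nuclides, np.round(posterior, 3))))
```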

  13. Gould's Belt, Interstellar Clouds, and the Eocene-Oligocene Helium-3 Spike

    Science.gov (United States)

    Rubincam, David Parry

    2015-01-01

    Drag from hydrogen in the interstellar cloud which formed Gould's Belt may have sent small meteoroids with embedded helium to the Earth, perhaps explaining part or all of the 3He spike seen in the sedimentary record at the Eocene-Oligocene transition. Assuming the Solar System passed through part of the cloud, meteoroids in the asteroid belt up to centimeter size may have been dragged to the resonances, where their orbital eccentricities were pumped up into Earth-crossing orbits.

  14. Successful reconstruction of a physiological circuit with known connectivity from spiking activity alone.

    Directory of Open Access Journals (Sweden)

    Felipe Gerhard

    Full Text Available Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities.
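
    The general recipe, stripped of the biophysical detail, can be sketched as a discrete-time point-process (Poisson) regression: model each neuron's binned spike count from the lagged spike counts of the other neurons and read relative coupling estimates off the coefficients. The sketch below uses scikit-learn's PoissonRegressor on synthetic data with a single planted excitatory connection; it is a schematic stand-in, not the authors' model of the stomatogastric circuit or their pharmacological analyses.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(4)
T, n = 5000, 4                                  # time bins, neurons

# synthetic network: neuron 0 excites neuron 1 with a one-bin delay
spikes = (rng.random((T, n)) < 0.05).astype(float)
spikes[1:, 1] = (rng.random(T - 1) < 0.03 + 0.4 * spikes[:-1, 0]).astype(float)

couplings = np.zeros((n, n))
for post in range(n):
    X = spikes[:-1, :]                          # lag-1 spike history of all neurons
    y = spikes[1:, post]                        # target neuron's next-bin spike count
    model = PoissonRegressor(alpha=1e-3, max_iter=300).fit(X, y)
    couplings[post, :] = model.coef_            # row = estimated inputs to 'post'

np.fill_diagonal(couplings, 0)                  # ignore self-history for display
print(np.round(couplings, 2))                   # the (1, 0) entry should stand out
```
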

  15. Spike-Based Bayesian-Hebbian Learning of Temporal Sequences

    DEFF Research Database (Denmark)

    Tully, Philip J; Lindén, Henrik; Hennig, Matthias H

    2016-01-01

    Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed … in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods …

  16. Embedded Systems Design with FPGAs

    CERN Document Server

    Pnevmatikatos, Dionisios; Sklavos, Nicolas

    2013-01-01

    This book presents methodologies for modern applications of embedded systems design, using field programmable gate array (FPGA) devices. Coverage includes state-of-the-art research from academia and industry on a wide range of topics, including advanced electronic design automation (EDA), novel system architectures, embedded processors, arithmetic, dynamic reconfiguration and applications. The book describes a variety of methodologies for modern embedded systems design, implements the presented methodologies on FPGAs, and covers a wide variety of applications for reconfigurable embedded systems, including bioinformatics, communications and networking, application acceleration, medical solutions, experiments for high energy physics, astronomy, aerospace, biologically inspired systems and computational fluid dynamics (CFD).

  17. Convolutional networks for fast, energy-efficient neuromorphic computing.

    Science.gov (United States)

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  18. Fluctuating inhibitory inputs promote reliable spiking at theta frequencies in hippocampal interneurons

    Directory of Open Access Journals (Sweden)

    Duluxan eSritharan

    2012-05-01

    Full Text Available Theta frequency (4-12 Hz) rhythms in the hippocampus play important roles in learning and memory. CA1 interneurons located at the stratum lacunosum-moleculare and radiatum junction (LM/RAD) are thought to contribute to hippocampal theta population activities by rhythmically pacing pyramidal cells with inhibitory postsynaptic potentials. This implies that LM/RAD cells need to fire reliably at theta frequencies in vivo. To determine whether this could occur, we use biophysically-based LM/RAD model cells and apply different cholinergic and synaptic inputs to simulate in vivo-like network environments. We assess spike reliabilities and spiking frequencies, identifying biophysical properties and network conditions that best promote reliable theta spiking. We find that synaptic background activities that feature large inhibitory, but not excitatory, fluctuations are essential. This suggests that strong inhibitory input to these cells is vital for them to be able to contribute to population theta activities. Furthermore, we find that Type I-like oscillator models produced by augmented persistent sodium currents (INap) or diminished A-type potassium currents (IA) enhance reliable spiking at lower theta frequencies. These Type I-like models are also the most responsive to large inhibitory fluctuations and can fire more reliably under such conditions. In previous work, we showed that INap and IA are largely responsible for establishing LM/RAD cells’ subthreshold activities. Taken together with this study, we see that while both these currents are important for subthreshold theta fluctuations and reliable theta spiking, they contribute in different ways – INap to reliable theta spiking and subthreshold activity generation, and IA to subthreshold activities at theta frequencies. This suggests that linking subthreshold and suprathreshold activities should be done with consideration of both in vivo contexts and biophysical specifics.
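
    As a toy illustration of how fluctuating inhibitory input shapes spiking, the sketch below drives a plain conductance-based leaky integrate-and-fire neuron with Ornstein-Uhlenbeck excitatory and inhibitory conductances. All parameters are illustrative assumptions; the model is far simpler than the biophysical LM/RAD cells used in the study.

```python
import numpy as np

# Sketch: leaky integrate-and-fire neuron with fluctuating (Ornstein-Uhlenbeck)
# excitatory and inhibitory conductances. Parameters are illustrative and the
# model is much simpler than the biophysical LM/RAD cells of the study.
rng = np.random.default_rng(2)
dt, T = 0.1e-3, 2.0                          # 0.1 ms steps, 2 s of activity
n = int(T / dt)
C, gL, EL = 100e-12, 10e-9, -65e-3           # capacitance, leak, resting potential
Ee, Ei, Vth, Vreset = 0e-3, -80e-3, -50e-3, -65e-3

def ou(mean, sigma, tau):
    """Ornstein-Uhlenbeck conductance trace, clipped at zero."""
    g = np.empty(n)
    g[0] = mean
    for t in range(1, n):
        g[t] = (g[t-1] + (mean - g[t-1]) * dt / tau
                + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal())
    return np.clip(g, 0.0, None)

ge = ou(mean=8e-9, sigma=2e-9, tau=3e-3)     # modest excitatory fluctuations
gi = ou(mean=20e-9, sigma=8e-9, tau=8e-3)    # large inhibitory fluctuations

V, spikes = EL, []
for t in range(n):
    dV = (-gL * (V - EL) - ge[t] * (V - Ee) - gi[t] * (V - Ei)) / C
    V += dV * dt
    if V >= Vth:                             # threshold crossing -> spike, reset
        spikes.append(t * dt)
        V = Vreset

print(f"{len(spikes)} spikes, mean rate {len(spikes) / T:.1f} Hz")
```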

  19. A neuro-inspired spike-based PID motor controller for multi-motor robots with low cost FPGAs.

    Science.gov (United States)

    Jimenez-Fernandez, Angel; Jimenez-Moreno, Gabriel; Linares-Barranco, Alejandro; Dominguez-Morales, Manuel J; Paz-Vicente, Rafael; Civit-Balcells, Anton

    2012-01-01

    In this paper we present a neuro-inspired spike-based closed-loop controller written in VHDL and implemented on FPGAs. This controller has been focused on controlling a DC motor speed, but only using spikes for information representation, processing and DC motor driving. It could be applied to other motors with proper driver adaptation. This controller architecture represents one of the latest layers in a Spiking Neural Network (SNN), which implements a bridge between robotics actuators and spike-based processing layers and sensors. The presented control system fuses actuation and sensor information as spike streams, processing these spikes in hard real-time and implementing a massively parallel information processing system through specialized spike-based circuits. This spike-based closed-loop controller has been implemented on an AER platform, designed in our labs, that allows direct control of DC motors: the AER-Robot. Experimental results demonstrate the viability of implementing spike-based controllers, and hardware synthesis shows low hardware requirements that allow replicating this controller in a high number of parallel controllers working together for real-time robot control.
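
    The rate-coded control idea can be sketched in plain software: a PID loop acting on spike-count estimates from a Poisson "encoder", driving a first-order motor model. This is only a conceptual analogue of the VHDL/AER design described above; the gains, encoder and plant model are assumptions.

```python
import random

# Sketch: PID speed control with a rate-coded feedback path. The encoder is a
# Poisson spike source whose rate tracks motor speed, and the PID acts on
# spike-count estimates per control period. The gains, the encoder model and
# the first-order motor model are assumptions, not taken from the VHDL design.
random.seed(3)
dt, tau, K = 0.05, 0.2, 1.0          # control period (s), motor time constant, gain
kp, ki, kd = 0.05, 0.4, 0.001
target_rate = 120.0                  # desired encoder rate (spikes/s)

def spike_count(rate, dt):
    """Number of Poisson spikes emitted in one control period."""
    count, t = 0, 0.0
    while True:
        t += random.expovariate(max(rate, 1e-9))
        if t > dt:
            return count
        count += 1

speed, integral, prev_err = 0.0, 0.0, 0.0
for step in range(200):
    sensed_rate = spike_count(speed, dt) / dt         # spikes/s estimate
    err = target_rate - sensed_rate
    integral += err * dt
    derivative = (err - prev_err) / dt
    prev_err = err
    u = kp * err + ki * integral + kd * derivative    # PID command
    speed += dt / tau * (K * u - speed)               # first-order motor model
    if step % 40 == 0:
        print(f"t={step * dt:5.2f}s  speed={speed:7.2f}  sensed={sensed_rate:6.1f}")
```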

  20. The dynamic relationship between cerebellar Purkinje cell simple spikes and the spikelet number of complex spikes.

    Science.gov (United States)

    Burroughs, Amelia; Wise, Andrew K; Xiao, Jianqiang; Houghton, Conor; Tang, Tianyu; Suh, Colleen Y; Lang, Eric J; Apps, Richard; Cerminara, Nadia L

    2017-01-01

    Purkinje cells are the sole output of the cerebellar cortex and fire two distinct types of action potential: simple spikes and complex spikes. Previous studies have mainly considered complex spikes as unitary events, even though the waveform is composed of varying numbers of spikelets. The extent to which differences in spikelet number affect simple spike activity (and vice versa) remains unclear. We found that complex spikes with greater numbers of spikelets are preceded by higher simple spike firing rates but, following the complex spike, simple spikes are reduced in a manner that is graded with spikelet number. This dynamic interaction has important implications for cerebellar information processing, and suggests that complex spike spikelet number may maintain Purkinje cells within their operational range. Purkinje cells are central to cerebellar function because they form the sole output of the cerebellar cortex. They exhibit two distinct types of action potential: simple spikes and complex spikes. It is widely accepted that interaction between these two types of impulse is central to cerebellar cortical information processing. Previous investigations of the interactions between simple spikes and complex spikes have mainly considered complex spikes as unitary events. However, complex spikes are composed of an initial large spike followed by a number of secondary components, termed spikelets. The number of spikelets within individual complex spikes is highly variable and the extent to which differences in complex spike spikelet number affects simple spike activity (and vice versa) remains poorly understood. In anaesthetized adult rats, we have found that Purkinje cells recorded from the posterior lobe vermis and hemisphere have high simple spike firing frequencies that precede complex spikes with greater numbers of spikelets. This finding was also evident in a small sample of Purkinje cells recorded from the posterior lobe hemisphere in awake cats. In addition

  1. TINY TCP/IP PROTOCOL SUITE FOR EMBEDDED SYSTEMS WITH 32 BIT MICROCONTROLLER

    OpenAIRE

    Mr. Praful M. Godhankar; Mr. Maske Vishnu Dattatraya; Prof. Shahzia Sayyad

    2015-01-01

    The scope of embedded devices is increasing day by day and the demand will be even greater when networking technology is incorporated into these devices. Many embedded systems not only communicate with each other, but also with computers using a network. All systems connected to the Internet, wireless networks such as WLAN and GPRS, and many local area networks communicate using the standard TCP/IP protocol suite. An embedded system may have very little memory, and these memory constraints make progr...

  2. Event management for large scale event-driven digital hardware spiking neural networks.

    Science.gov (United States)

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms. Copyright © 2013 Elsevier Ltd. All rights reserved.
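
    The role the structured heap queue plays can be illustrated with an ordinary binary heap used as the event queue of a tiny event-driven SNN. The network, weights and delay below are invented for the example, and the sketch says nothing about the pipelined hardware design itself.

```python
import heapq

# Sketch: event-driven simulation of a tiny feedforward SNN with a binary heap
# as the event queue. The paper's structured heap queue is a pipelined hardware
# data structure; this only illustrates the priority-queue role it plays. The
# network, weights and delay are invented for the example.
events = []                                  # entries: (time, target neuron, weight)
potentials = {0: 0.0, 1: 0.0, 2: 0.0}
threshold, delay = 1.0, 0.002
synapses = {0: [(1, 0.6), (2, 0.6)],         # neuron -> [(target, weight)]
            1: [(2, 0.7)],
            2: []}

for t in (0.001, 0.002, 0.003):              # external input spikes to neuron 0
    heapq.heappush(events, (t, 0, 1.1))

while events:
    t, nid, w = heapq.heappop(events)        # always process the earliest event
    potentials[nid] += w
    if potentials[nid] >= threshold:
        potentials[nid] = 0.0                # fire and reset
        print(f"neuron {nid} spiked at t = {t * 1000:.1f} ms")
        for target, weight in synapses[nid]:
            heapq.heappush(events, (t + delay, target, weight))
```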

  3. The chronotron: a neuron that learns to fire temporally precise spike patterns.

    Directory of Open Access Journals (Sweden)

    Răzvan V Florian

    Full Text Available In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.

  4. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Directory of Open Access Journals (Sweden)

    Laureline Logiaco

    2015-08-01

    Full Text Available The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. All together, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.

  5. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Science.gov (United States)

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-08-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. All together, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.

  6. Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity.

    Directory of Open Access Journals (Sweden)

    Christian Albers

    Full Text Available Precise spatio-temporal patterns of neuronal action potentials underlie, e.g., sensory representations and control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike timing dependent plasticity for inhibitory synapses, as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of the MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns.

  7. Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity.

    Science.gov (United States)

    Albers, Christian; Westkott, Maren; Pawelzik, Klaus

    2016-01-01

    Precise spatio-temporal patterns of neuronal action potentials underlie, e.g., sensory representations and control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike timing dependent plasticity for inhibitory synapses as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of the MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns.

  8. Structural Covariance Network of Cortical Gyrification in Benign Childhood Epilepsy with Centrotemporal Spikes

    Directory of Open Access Journals (Sweden)

    Lin Jiang

    2018-02-01

    Full Text Available Benign childhood epilepsy with centrotemporal spikes (BECTS) is associated with cognitive and language problems. According to recent studies, disruptions in brain structure and function in children with BECTS extend beyond the Rolandic focus, suggesting atypical cortical development. However, previous studies utilizing surface-based metrics (e.g., cortical gyrification) and their structural covariance networks at high resolution in children with BECTS are limited. Twenty-six children with BECTS (15 males/11 females; 10.35 ± 2.91 years) and 26 demographically matched controls (15 males/11 females; 11.35 ± 2.51 years) were included in this study and subjected to high-resolution structural brain MRI scans. The gyrification index was calculated, and structural brain networks were reconstructed based on the covariance of the cortical folding. In the BECTS group, significantly increased gyrification was observed in the bilateral Sylvian fissures and the left pars triangularis, temporal, rostral middle frontal, lateral orbitofrontal, and supramarginal areas (cluster-corrected p < 0.05). Global brain network measures were not significantly different between the groups; however, the nodal alterations were most pronounced in the insular, frontal, temporal, and occipital lobes (FDR corrected, p < 0.05). In children with BECTS, brain hubs increased in number and tended to shift to sensorimotor and temporal areas. Furthermore, we observed significantly positive relationships between the gyrification index and age (vertex p < 0.001, cluster-level correction) as well as duration of epilepsy (vertex p < 0.001, cluster-level correction). Our results suggest that BECTS may be a condition that features abnormal over-folding of the Sylvian fissures and uncoordinated development of structural wiring, disrupted nodal profiles of centrality, and shifted hub distribution, which potentially represents a neuroanatomical hallmark of BECTS in the

  9. Roll-to-roll embedded conductive structures integrated into organic photovoltaic devices

    International Nuclear Information System (INIS)

    Van de Wiel, H J; Galagan, Y; Van Lammeren, T J; De Riet, J F J; Gilot, J; Nagelkerke, M G M; Lelieveld, R H C A T; Shanmugam, S; Pagudala, A; Groen, W A; Hui, D

    2013-01-01

    Highly conductive screen printed metallic (silver) structures (current collecting grids) combined with poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) are a viable replacement for indium tin oxide (ITO) and inkjet printed silver as transparent electrode materials. To provide successful integration into organic photovoltaic (OPV) devices, screen printed silver current collecting grids should be embedded into a substrate to avoid topology issues. In this study micron-thick conductive structures are embedded and integrated into OPV devices. The embedded structures are produced roll-to-roll with optimized process settings and materials. Topology measurements show that the embedded grids are well suited for integration into OPV devices since the surface is almost without spikes and has low surface roughness. JV measurements of OPV devices with embedded structures on a polyethylene terephthalate/silicon nitride (PET/SiN) substrate show an efficiency of 2.15%, which is significantly higher than identical flexible devices with ITO (1.02%) and inkjet printed silver (1.48%). The use of embedded screen printed silver instead of ITO and inkjet printed silver in OPV devices will allow for higher efficiency devices which can be produced with larger design and process freedom. (paper)

  10. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    Science.gov (United States)

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    The Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor the SNN's activity. Our contribution intends to provide a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures but significantly more cheaply than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Emergence of Slow Collective Oscillations in Neural Networks with Spike-Timing Dependent Plasticity

    Science.gov (United States)

    Mikkelsen, Kaare; Imparato, Alberto; Torcini, Alessandro

    2013-05-01

    The collective dynamics of excitatory pulse coupled neurons with spike-timing dependent plasticity is studied. The introduction of spike-timing dependent plasticity induces persistent irregular oscillations between strongly and weakly synchronized states, reminiscent of brain activity during slow-wave sleep. We explain the oscillations by a mechanism, the Sisyphus Effect, caused by a continuous feedback between the synaptic adjustments and the coherence in the neural firing. Due to this effect, the synaptic weights have oscillating equilibrium values, and this prevents the system from relaxing into a stationary macroscopic state.

  12. Space-Time Dynamics of Membrane Currents Evolve to Shape Excitation, Spiking, and Inhibition in the Cortex at Small and Large Scales

    DEFF Research Database (Denmark)

    Roland, Per E.

    2017-01-01

    … positions. After transition to active spiking states, larger structured zones with active spiking neurons appear, propagating through the cortical network, driving it into various forms of widespread excitation, and engaging the network from microscopic scales to whole cortical areas. At each engaged … cortical site, the amount of excitation in the network, after a delay, becomes matched by an equal amount of space-time fine-tuned inhibition that might be instrumental in driving the dynamics toward perception and action.

  13. Brainlab: A Python Toolkit to Aid in the Design, Simulation, and Analysis of Spiking Neural Networks with the NeoCortical Simulator.

    Science.gov (United States)

    Drewes, Rich; Zou, Quan; Goodman, Philip H

    2009-01-01

    Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS.

  14. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition.

    Directory of Open Access Journals (Sweden)

    Johannes Bill

    Full Text Available During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.

  15. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition

    Science.gov (United States)

    Bill, Johannes; Buesing, Lars; Habenschuss, Stefan; Nessler, Bernhard; Maass, Wolfgang; Legenstein, Robert

    2015-01-01

    During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input. PMID:26284370

  16. System-Level Design of a 64-Channel Low Power Neural Spike Recording Sensor.

    Science.gov (United States)

    Delgado-Restituto, Manuel; Rodriguez-Perez, Alberto; Darie, Angela; Soto-Sanchez, Cristina; Fernandez-Jover, Eduardo; Rodriguez-Vazquez, Angel

    2017-04-01

    This paper reports an integrated 64-channel neural spike recording sensor, together with all the circuitry to process and configure the channels, process the neural data, transmit via a wireless link the information and receive the required instructions. Neural signals are acquired, filtered, digitized and compressed in the channels. Additionally, each channel implements an auto-calibration algorithm which individually configures the transfer characteristics of the recording site. The system has two transmission modes; in one case the information captured by the channels is sent as uncompressed raw data; in the other, feature vectors extracted from the detected neural spikes are released. Data streams coming from the channels are serialized by the embedded digital processor. Experimental results, including in vivo measurements, show that the power consumption of the complete system is lower than 330 μW.

  17. Amplitude-aware permutation entropy: Illustration in spike detection and signal segmentation.

    Science.gov (United States)

    Azami, Hamed; Escudero, Javier

    2016-05-01

    Signal segmentation and spike detection are two important biomedical signal processing applications. Often, non-stationary signals must be segmented into piece-wise stationary epochs or spikes need to be found among a background of noise before being further analyzed. Permutation entropy (PE) has been proposed to evaluate the irregularity of a time series. PE is conceptually simple, structurally robust to artifacts, and computationally fast. It has been extensively used in many applications, but it has two key shortcomings. First, when a signal is symbolized using the Bandt-Pompe procedure, only the order of the amplitude values is considered and information regarding the amplitudes is discarded. Second, in the PE, the effect of equal amplitude values in each embedded vector is not addressed. To address these issues, we propose a new entropy measure based on PE: the amplitude-aware permutation entropy (AAPE). AAPE is sensitive to the changes in the amplitude, in addition to the frequency, of the signals thanks to it being more flexible than the classical PE in the quantification of the signal motifs. To demonstrate how the AAPE method can enhance the quality of the signal segmentation and spike detection, a set of synthetic and realistic synthetic neuronal signals, electroencephalograms and neuronal data are processed. We compare the performance of AAPE in these problems against state-of-the-art approaches and evaluate the significance of the differences with a repeated ANOVA with post hoc Tukey's test. In signal segmentation, the accuracy of AAPE-based method is higher than conventional segmentation methods. AAPE also leads to more robust results in the presence of noise. The spike detection results show that AAPE can detect spikes well, even when presented with single-sample spikes, unlike PE. For multi-sample spikes, the changes in AAPE are larger than in PE. We introduce a new entropy metric, AAPE, that enables us to consider amplitude information in the
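
    For readers unfamiliar with permutation entropy, the sketch below computes classical PE and an amplitude-weighted variant in the spirit of AAPE, in which each ordinal pattern is weighted by the mean absolute amplitude and the mean absolute consecutive difference of its embedding vector, mixed by a parameter A. The exact AAPE definition should be taken from the paper; this reimplementation is illustrative only.

```python
import itertools
from math import factorial
import numpy as np

# Sketch: classical permutation entropy (PE) and an amplitude-weighted variant
# in the spirit of AAPE, where each ordinal pattern is weighted by the mean
# absolute amplitude and the mean absolute consecutive difference of its
# embedding vector (mixing parameter A). The exact AAPE definition should be
# taken from the paper; this reimplementation is illustrative only.
def ordinal_patterns(x, m, tau=1):
    n = len(x) - (m - 1) * tau
    return [tuple(np.argsort(x[i:i + m * tau:tau])) for i in range(n)]

def permutation_entropy(x, m=3, tau=1):
    patterns = ordinal_patterns(x, m, tau)
    counts = np.array([patterns.count(p)
                       for p in itertools.permutations(range(m))], float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(m)))

def amplitude_aware_pe(x, m=3, tau=1, A=0.5):
    x = np.asarray(x, float)
    weights = {}
    for i in range(len(x) - (m - 1) * tau):
        v = x[i:i + m * tau:tau]
        pat = tuple(np.argsort(v))
        w = A * np.mean(np.abs(v)) + (1 - A) * np.mean(np.abs(np.diff(v)))
        weights[pat] = weights.get(pat, 0.0) + w
    p = np.array(list(weights.values()))
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(m)))

rng = np.random.default_rng(4)
noise = rng.standard_normal(1000)
spiky = noise.copy()
spiky[::100] += 10.0                        # inject large, sparse spikes
print("PE   noise / spiky:", permutation_entropy(noise), permutation_entropy(spiky))
print("AAPE noise / spiky:", amplitude_aware_pe(noise), amplitude_aware_pe(spiky))
```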

  18. Doubly stochastic coherence in complex neuronal networks

    Science.gov (United States)

    Gao, Yang; Wang, Jianjun

    2012-11-01

    A system composed of coupled FitzHugh-Nagumo neurons with various topological structures is investigated under the co-presence of two independent additive and multiplicative Gaussian white noises, with particular attention paid to the spiking regularity of the neuronal networks. As the additive noise intensity and the multiplicative noise intensity are simultaneously adjusted to optimal values, the temporal periodicity of the output of the system reaches a maximum, indicating the occurrence of doubly stochastic coherence. The randomness of the network topology exerts different influences on the temporal coherence of the spiking oscillation for different coupling strength regimes. At a small coupling strength, the spiking regularity shows nearly no difference between the regular, small-world, and completely random networks. At an intermediate coupling strength, the temporal periodicity in a small-world neuronal network can be improved slightly by adding a small fraction of long-range connections. At a large coupling strength, the dynamical behavior of the neurons completely loses the resonance property with regard to the additive or multiplicative noise intensity, and the spiking regularity decreases considerably as the randomness of the network topology increases. Overall, the network topology randomness plays more of a suppressive than a favorable role in improving the temporal coherence of the spiking oscillations in the neuronal networks studied.
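
    The co-presence of the two noise sources can be sketched on a single FitzHugh-Nagumo unit integrated with Euler-Maruyama, with multiplicative noise entering the fast variable and additive noise the slow one. The parameters, noise placement and spike criterion below are assumptions, and the study itself concerns coupled networks rather than one unit.

```python
import numpy as np

# Sketch: a single FitzHugh-Nagumo unit driven by additive and multiplicative
# Gaussian white noise, integrated with Euler-Maruyama. The noise placement
# (multiplicative on the fast variable, additive on the slow one), parameters
# and spike criterion are assumptions; the study analyzes coupled networks.
rng = np.random.default_rng(5)
eps, a = 0.01, 1.05                   # time-scale separation, excitability
D_add, D_mul = 0.02, 0.05             # additive / multiplicative noise strengths
dt, n = 1e-4, 200000

v, w, spikes = -1.0, -0.6, 0
for _ in range(n):
    dv = (v - v ** 3 / 3.0 - w) / eps
    dw = v + a
    v_new = v + dv * dt + np.sqrt(2 * D_mul * dt) * v * rng.standard_normal()
    w_new = w + dw * dt + np.sqrt(2 * D_add * dt) * rng.standard_normal()
    if v < 1.0 <= v_new:              # upward crossing counts as an excitation
        spikes += 1
    v, w = v_new, w_new

print(f"{spikes} excitations in {n * dt:.0f} time units")
```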

  19. Liquid-Embedded Elastomer Electronics

    Science.gov (United States)

    Kramer, Rebecca; Majidi, Carmel; Park, Yong-Lae; Paik, Jamie; Wood, Robert

    2012-02-01

    Hyperelastic sensors are fabricated by embedding a silicone rubber film with microchannels of conductive liquid. In the case of soft tactile sensors, pressing the surface of the elastomer will deform the cross-section of underlying channels and change their electrical resistance. Soft pressure sensors may be employed in a variety of applications. For example, a network of pressure sensors can serve as artificial skin by yielding detailed information about contact pressures. This concept was demonstrated in a hyperelastic keypad, where perpendicular conductive channels form a quasi-planar network within an elastomeric matrix that registers the location, intensity and duration of applied pressure. In a second demonstration, soft curvature sensors were used for joint angle proprioception. Because the sensors are soft and stretchable, they conform to the host without interfering with the natural mechanics of motion. This marked the first use of liquid-embedded elastomer electronics to monitor human or robotic motion. Finally, liquid-embedded elastomers may be implemented as conductors in applications that call for flexible or stretchable circuitry, such as robotic origami.

  20. A Neuro-Inspired Spike-Based PID Motor Controller for Multi-Motor Robots with Low Cost FPGAs

    Directory of Open Access Journals (Sweden)

    Anton Civit-Balcells

    2012-03-01

    Full Text Available In this paper we present a neuro-inspired spike-based closed-loop controller written in VHDL and implemented on FPGAs. This controller has been focused on controlling a DC motor speed, but only using spikes for information representation, processing and DC motor driving. It could be applied to other motors with proper driver adaptation. This controller architecture represents one of the latest layers in a Spiking Neural Network (SNN), which implements a bridge between robotics actuators and spike-based processing layers and sensors. The presented control system fuses actuation and sensor information as spike streams, processing these spikes in hard real-time and implementing a massively parallel information processing system through specialized spike-based circuits. This spike-based closed-loop controller has been implemented on an AER platform, designed in our labs, that allows direct control of DC motors: the AER-Robot. Experimental results demonstrate the viability of implementing spike-based controllers, and hardware synthesis shows low hardware requirements that allow replicating this controller in a high number of parallel controllers working together for real-time robot control.

  1. Predicting Spike Occurrence and Neuronal Responsiveness from LFPs in Primary Somatosensory Cortex

    Science.gov (United States)

    Storchi, Riccardo; Zippo, Antonio G.; Caramenti, Gian Carlo; Valente, Maurizio; Biella, Gabriele E. M.

    2012-01-01

    Local Field Potentials (LFPs) integrate multiple neuronal events like synaptic inputs and intracellular potentials. LFP spatiotemporal features are particularly relevant in view of their applications both in research (e.g. for understanding brain rhythms, inter-areal neural communication and neuronal coding) and in the clinics (e.g. for improving invasive Brain-Machine Interface devices). However the relation between LFPs and spikes is complex and not fully understood. As spikes represent the fundamental currency of neuronal communication, this gap in knowledge strongly limits our comprehension of neuronal phenomena underlying LFPs. We investigated the LFP-spike relation during tactile stimulation in primary somatosensory (S-I) cortex in the rat. First we quantified how reliably LFPs and spikes code for a stimulus occurrence. Then we used the information obtained from our analyses to design a predictive model for spike occurrence based on LFP inputs. The model was endowed with a flexible meta-structure whose exact form, both in parameters and structure, was estimated by using a multi-objective optimization strategy. Our method provided a set of nonlinear simple equations that maximized the match between models and true neurons in terms of spike timings and Peri-Stimulus Time Histograms. We found that both LFPs and spikes can code for stimulus occurrence with millisecond precision, showing, however, high variability. Spike patterns were predicted significantly above chance for 75% of the neurons analysed. Crucially, the level of prediction accuracy depended on the reliability in coding for the stimulus occurrence. The best predictions were obtained when both spikes and LFPs were highly responsive to the stimuli. Spike reliability is known to depend on neuron intrinsic properties (i.e. on channel noise) and on spontaneous local network fluctuations. Our results suggest that the latter, measured through the LFP response variability, play a dominant role. PMID:22586452
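
    A minimal version of the LFP-to-spike prediction setup is sketched below, assuming synthetic data and a plain logistic readout rather than the multi-objective-optimized nonlinear model used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Sketch: predict per-bin spike occurrence from the recent LFP waveform with a
# plain logistic readout. The study fits a flexible nonlinear model chosen by
# multi-objective optimization; here the data are synthetic and the spike
# probability is simply tied to the LFP amplitude, both assumptions.
rng = np.random.default_rng(6)
T = 50000
lfp = np.convolve(rng.standard_normal(T), np.ones(20) / 20.0, mode="same")
lfp /= lfp.std()
p_spike = 1.0 / (1.0 + np.exp(-(-3.0 + 2.0 * lfp)))   # rate rises with LFP amplitude
spikes = rng.random(T) < p_spike

lags = np.arange(1, 11)                     # features: the 10 preceding LFP samples
X = np.column_stack([np.roll(lfp, l) for l in lags])[10:]
y = spikes[10:]

split = int(0.7 * len(y))                   # chronological train/test split
clf = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
auc = roc_auc_score(y[split:], clf.predict_proba(X[split:])[:, 1])
print(f"held-out AUC for spike-occurrence prediction: {auc:.2f}")
```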

  2. An enhanced radial basis function network for short-term electricity price forecasting

    International Nuclear Information System (INIS)

    Lin, Whei-Min; Gow, Hong-Jey; Tsai, Ming-Tang

    2010-01-01

    This paper proposes a price forecasting system for electric market participants to reduce the risk of price volatility. Combining the Radial Basis Function Network (RBFN) and Orthogonal Experimental Design (OED), an Enhanced Radial Basis Function Network (ERBFN) is proposed for the forecasting task. The Locational Marginal Price (LMP), system load, transmission flow and temperature of the PJM system were collected, and the data clusters were embedded in the Excel database according to the year, season, workday and weekend. With the OED applied to the learning rates in the ERBFN, the forecasting error can be reduced during the training process to improve both accuracy and reliability. This means that even price "spikes" can be tracked closely. The Back-propagation Neural Network (BPN), Probability Neural Network (PNN), other algorithms, and the proposed ERBFN were all developed and compared to assess performance. Simulation results demonstrate the effectiveness of the proposed ERBFN in providing quality information in a price-volatile environment. (author)
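
    A bare-bones RBFN forecaster (k-means centers, Gaussian activations, least-squares readout) illustrates the kind of network the ERBFN builds on. The OED-based tuning of learning rates and the real PJM data are not reproduced; the spiky synthetic price series below is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch: a basic radial basis function network for one-step-ahead forecasting.
# K-means picks the RBF centers and a least-squares readout maps activations to
# the next price. The OED-based tuning of learning rates and the real PJM data
# are not reproduced; the spiky synthetic price series is an assumption.
rng = np.random.default_rng(7)
t = np.arange(2000)
price = 40 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1.5, len(t))
price[::97] += 30 * rng.random(len(price[::97]))        # occasional price spikes

w = 24                                                  # input: last 24 hourly prices
X = np.array([price[i:i + w] for i in range(len(price) - w)])
y = price[w:]
split = int(0.8 * len(y))

km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X[:split])
centers = km.cluster_centers_
sigma = np.mean([np.linalg.norm(c1 - c2) for c1 in centers for c2 in centers]) / 2

def rbf_features(data):
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / sigma) ** 2)

W, *_ = np.linalg.lstsq(rbf_features(X[:split]), y[:split], rcond=None)
pred = rbf_features(X[split:]) @ W
print(f"one-step-ahead test MAE: {np.mean(np.abs(pred - y[split:])):.2f}")
```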

  3. Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays

    National Research Council Canada - National Science Library

    Yang, Kyoung

    2005-01-01

    This final report summarizes the progress during the Phase I SBIR project entitled "Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays...

  4. Automatic spike sorting using tuning information.

    Science.gov (United States)

    Ventura, Valérie

    2009-09-01

    Current spike sorting methods focus on clustering neurons' characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes' identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only.
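
    The core idea, combining waveform likelihood with tuning likelihood when assigning spikes to units, can be sketched as a single soft-assignment step with known cluster parameters. The full method estimates these parameters jointly with an EM algorithm, so everything below (2-D features, tuning curves, data) is a simplified assumption.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Sketch: spike classification that combines waveform evidence with tuning
# evidence, the core idea behind the proposed sorter. Cluster parameters are
# taken as known here (the full method estimates them jointly with an EM
# algorithm); the 2-D features, tuning curves and data are synthetic assumptions.
rng = np.random.default_rng(8)
means = np.array([[1.0, 0.0], [0.0, 1.0]])      # waveform-feature means per unit
cov = 0.4 * np.eye(2)

def rates(theta):
    """Firing rate (spikes per bin) of each unit as a function of the covariate."""
    return np.array([2.0 + 1.5 * np.cos(theta), 2.0 + 1.5 * np.sin(theta)])

# Simulate spikes: for each covariate value, draw counts and waveform features.
waveforms, thetas, labels = [], [], []
for _ in range(400):
    theta = rng.uniform(0, 2 * np.pi)
    for unit in (0, 1):
        for _ in range(rng.poisson(rates(theta)[unit])):
            waveforms.append(rng.multivariate_normal(means[unit], cov))
            thetas.append(theta)
            labels.append(unit)
waveforms, thetas, labels = map(np.array, (waveforms, thetas, labels))

def classify(use_tuning):
    """Assign each spike to the unit with the larger (waveform x tuning) likelihood."""
    logp = np.zeros((len(waveforms), 2))
    for u in (0, 1):
        logp[:, u] = multivariate_normal(means[u], cov).logpdf(waveforms)
        if use_tuning:
            logp[:, u] += np.log(np.array([rates(t)[u] for t in thetas]))
    return logp.argmax(axis=1)

for flag in (False, True):
    acc = (classify(flag) == labels).mean()
    print(f"tuning information {'used   ' if flag else 'ignored'}: accuracy {acc:.3f}")
```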

  5. Costs and benefits of embedded generation

    International Nuclear Information System (INIS)

    1996-01-01

    Ilex Associates has been appointed by ETSU to examine the underlying costs and benefits of embedded generation (i.e. generation which is directly connected into a REC's distribution network instead of the national transmission network). The main impetus for this work stems from the need to understand the true value of embedded generation in time to review the development of further NFFO contracts following the expiry of the so-called 'nuclear levy'. This is particularly important at this time because of the view expressed by the Director General of Electricity Supply (DGES) that, to continue to receive support, renewables technologies should be able to demonstrate good progress in converging towards a market price. The prime objectives of this study are to determine the costs and benefits of connecting and operating a generator that is embedded in a local Regional Electricity Company's (REC's) distribution network compared to the alternative option of providing electricity from a large generating station which is connected directly to the transmission system (and colloquially known as directly connected generation). (UK)

  6. Recognition of disturbances with specified morphology in time series. Part 1: Spikes on magnetograms of the worldwide INTERMAGNET network

    Science.gov (United States)

    Bogoutdinov, Sh. R.; Gvishiani, A. D.; Agayan, S. M.; Solovyev, A. A.; Kin, E.

    2010-11-01

    The International Real-time Magnetic Observatory Network (INTERMAGNET) is the world's biggest international network of ground-based observatories, providing geomagnetic data almost in real time (within 72 hours of collection) [Kerridge, 2001]. The observation data are rapidly transferred by the observatories participating in the program to regional Geomagnetic Information Nodes (GINs), which carry out a global exchange of data and process the results. The observations of the main (core) magnetic field of the Earth and its study are one of the key problems of geophysics. The INTERMAGNET system is the basis of monitoring the state of the Earth's magnetic field; therefore, the information provided by the system is required to be very reliable. Despite the rigid high-quality standard of the recording devices, they are subject to external effects that affect the quality of the records. Therefore, an objective and formalized recognition with the subsequent remedy of the anomalies (artifacts) that occur on the records is an important task. Expanding on the ideas of Agayan [Agayan et al., 2005] and Gvishiani [Gvishiani et al., 2008a; 2008b], this paper suggests a new algorithm of automatic recognition of anomalies with specified morphology, capable of identifying both physically- and anthropogenically-derived spikes on the magnetograms. The algorithm is constructed using fuzzy logic and, as such, is highly adaptive and universal. The developed algorithmic system formalizes the work of the expert-interpreter in terms of artificial intelligence. This ensures identical processing of large data arrays, almost unattainable manually. Besides the algorithm, the paper also reports on the application of the developed algorithmic system for identifying spikes at the INTERMAGNET observatories. The main achievement of the work is the creation of an algorithm permitting the almost unmanned extraction of spike-free (definitive) magnetograms from preliminary records. This automated
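
    A deliberately simple rolling-median detector on a synthetic minute-value series illustrates the spike-flagging task; it is not the fuzzy-logic morphology-recognition algorithm developed in the paper, and the series, window and threshold are assumptions.

```python
import numpy as np

# Sketch: flag isolated spikes in a magnetogram-like series by their deviation
# from a rolling median. This is a deliberately simple detector, not the
# fuzzy-logic morphology recognition developed in the paper; the minute-value
# series is synthetic.
rng = np.random.default_rng(9)
n = 1440                                          # one day of minute values
field = 20000 + np.cumsum(rng.normal(0, 0.2, n))  # slowly varying baseline (nT)
spike_idx = rng.choice(n, 5, replace=False)
field[spike_idx] += rng.choice([-1, 1], 5) * rng.uniform(20, 50, 5)

window, k = 21, 6.0                               # rolling window and MAD threshold
pad = window // 2
padded = np.pad(field, pad, mode="edge")
median = np.array([np.median(padded[i:i + window]) for i in range(n)])
resid = field - median
mad = np.median(np.abs(resid - np.median(resid))) + 1e-9
flags = np.abs(resid) > k * 1.4826 * mad

print("injected spikes at:", sorted(spike_idx.tolist()))
print("flagged samples at:", np.where(flags)[0].tolist())
```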

  7. Reliability of MEG source imaging of anterior temporal spikes: analysis of an intracranially characterized spike focus.

    Science.gov (United States)

    Wennberg, Richard; Cheyne, Douglas

    2014-05-01

    To assess the reliability of MEG source imaging (MSI) of anterior temporal spikes through detailed analysis of the localization and orientation of source solutions obtained for a large number of spikes that were separately confirmed by intracranial EEG to be focally generated within a single, well-characterized spike focus. MSI was performed on 64 identical right anterior temporal spikes from an anterolateral temporal neocortical spike focus. The effects of different volume conductors (sphere and realistic head model), removal of noise with low frequency filters (LFFs) and averaging multiple spikes were assessed in terms of the reliability of the source solutions. MSI of single spikes resulted in scattered dipole source solutions that showed reasonable reliability for localization at the lobar level, but only for solutions with a goodness-of-fit exceeding 80% using a LFF of 3 Hz. Reliability at a finer level of intralobar localization was limited. Spike averaging significantly improved the reliability of source solutions and averaging 8 or more spikes reduced dependency on goodness-of-fit and data filtering. MSI performed on topographically identical individual spikes from an intracranially defined classical anterior temporal lobe spike focus was limited by low reliability (i.e., scattered source solutions) in terms of fine, sublobar localization within the ipsilateral temporal lobe. Spike averaging significantly improved reliability. MSI performed on individual anterior temporal spikes is limited by low reliability. Reduction of background noise through spike averaging significantly improves the reliability of MSI solutions. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Multiple coherence resonances and synchronization transitions by time delay in adaptive scale-free neuronal networks with spike-timing-dependent plasticity

    International Nuclear Information System (INIS)

    Xie, Huijuan; Gong, Yubing

    2017-01-01

    In this paper, we numerically study the effect of spike-timing-dependent plasticity (STDP) on multiple coherence resonances (MCR) and synchronization transitions (ST) induced by time delay in adaptive scale-free Hodgkin–Huxley neuronal networks. It is found that STDP has a big influence on MCR and ST induced by time delay and on the effect of network average degree on the MCR and ST. MCR is enhanced or suppressed as the adjusting rate A_p of STDP decreases or increases, and there is an optimal A_p by which ST becomes strongest. As the network average degree 〈k〉 increases, ST is enhanced and there is an optimal 〈k〉 at which MCR becomes strongest. Moreover, for a larger A_p value, ST is enhanced more rapidly with increasing 〈k〉 and the optimal 〈k〉 for MCR increases. These results show that STDP can either enhance or suppress MCR, and there is an optimal STDP that can most strongly enhance ST induced by time delay in the adaptive neuronal networks. These findings could have potential implications for information processing and transmission in neural systems.

  9. An Embedded System Dedicated to Intervehicle Communication Applications

    Directory of Open Access Journals (Sweden)

    Zhou Haiying

    2010-01-01

    Full Text Available Overcoming system latency and network delay is essential for intervehicle communication (IVC) applications such as hazard alarming and cooperative driving. This paper proposes a low-cost embedded software system dedicated to such applications. It consists of two basic component layers: an operating system, named HEROS (hybrid event-driven and real-time multitasking operating system), and a communication protocol, named CIVIC (Communication Inter Véhicule Intelligente et Coopérative). HEROS was originally designed for wireless sensor networks (WSNs). It contains a component-based resource-aware kernel and a low-latency tuple-based communication system. Moreover, it provides a configurable event-driven and/or real-time multitasking mechanism for various embedded applications. CIVIC is an auto-configuration cooperative IVC protocol. It merges proactive and reactive approaches to speed up and optimize location-based routing discovery with high-mobility nodes. Currently, this embedded system has been implemented and tested. The experimental results show that the new embedded system has low system latency and network delay with small resource consumption.

  10. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.

    Science.gov (United States)

    Walter, Florian; Röhrbein, Florian; Knoll, Alois

    2015-12-01

    The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike Von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview over selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Mixed signal learning by spike correlation propagation in feedback inhibitory circuits.

    Directory of Open Access Journals (Sweden)

    Naoki Hiratani

    2015-04-01

    Full Text Available The brain can learn and detect mixed input signals masked by various types of noise, and spike-timing-dependent plasticity (STDP) is the candidate synaptic level mechanism. Because sensory inputs typically have spike correlation, and local circuits have dense feedback connections, input spikes cause the propagation of spike correlation in lateral circuits; however, it is largely unknown how this secondary correlation generated by lateral circuits influences learning processes through STDP, or whether it is beneficial to achieve efficient spike-based learning from uncertain stimuli. To explore the answers to these questions, we construct models of feedforward networks with lateral inhibitory circuits and study how propagated correlation influences STDP learning, and what kind of learning algorithm such circuits achieve. We derive analytical conditions at which neurons detect minor signals with STDP, and show that depending on the origin of the noise, different correlation timescales are useful for learning. In particular, we show that non-precise spike correlation is beneficial for learning in the presence of cross-talk noise. We also show that by considering excitatory and inhibitory STDP at lateral connections, the circuit can acquire a lateral structure optimal for signal detection. In addition, we demonstrate that the model performs blind source separation in a manner similar to the sequential sampling approximation of the Bayesian independent component analysis algorithm. Our results provide a basic understanding of STDP learning in feedback circuits by integrating analyses from both dynamical systems and information theory.

  12. Brainlab: a Python toolkit to aid in the design, simulation, and analysis of spiking neural networks with the NeoCortical Simulator

    Directory of Open Access Journals (Sweden)

    Richard P Drewes

    2009-05-01

    Full Text Available Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS environment in particular. Brainlab is an integrated model building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS (the NeoCortical Simulator).

  13. Fluctuations and information filtering in coupled populations of spiking neurons with adaptation.

    Science.gov (United States)

    Deger, Moritz; Schwalger, Tilo; Naud, Richard; Gerstner, Wulfram

    2014-12-01

    Finite-sized populations of spiking elements are fundamental to brain function but also are used in many areas of physics. Here we present a theory of the dynamics of finite-sized populations of spiking units, based on a quasirenewal description of neurons with adaptation. We derive an integral equation with colored noise that governs the stochastic dynamics of the population activity in response to time-dependent stimulation and calculate the spectral density in the asynchronous state. We show that systems of coupled populations with adaptation can generate a frequency band in which sensory information is preferentially encoded. The theory is applicable to fully as well as randomly connected networks and to leaky integrate-and-fire as well as to generalized spiking neurons with adaptation on multiple time scales.

  14. Electricity market price spike analysis by a hybrid data model and feature selection technique

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers. For this reason, many techniques have been proposed to predict electricity market prices in recent years. However, electricity price is a complex volatile signal with many spikes. Most electricity price forecasting techniques focus on normal price prediction, while price spike forecasting is a different and more complex prediction process. Price spike forecasting has two main aspects: prediction of price spike occurrence and of its value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique and an efficient forecast engine. The hybrid data model includes both wavelet and time domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique, which evaluates both the relevancy and the redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed with the candidate inputs selected by the feature selection technique and predicts price spike occurrence. The efficiency of the whole proposed method for price spike occurrence forecasting is evaluated by means of real data from the Queensland and PJM electricity markets. (author)
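    The relevancy/redundancy filtering described above is specific to the paper; as a generic, hypothetical sketch of the same idea (not the authors' criterion), a greedy mutual-information-based selection over the candidate inputs might be written as:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def greedy_mi_selection(X, y, n_select=10):
    """mRMR-style greedy selection: maximize relevancy minus mean redundancy.

    X : (n_samples, n_features) candidate inputs (e.g. wavelet, time-domain,
        calendar indicators); y : binary spike-occurrence labels.
    """
    relevance = mutual_info_classif(X, y)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_select:
        scores = []
        for j in remaining:
            if selected:
                redundancy = np.mean([mutual_info_regression(X[:, [j]], X[:, k])[0]
                                      for k in selected])
            else:
                redundancy = 0.0
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Synthetic check: the labels depend only on candidate features 0 and 3.
X = np.random.default_rng(0).normal(size=(500, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
print(greedy_mi_selection(X, y, n_select=3))
```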

  15. Spike rate and spike timing contributions to coding taste quality information in rat periphery

    Directory of Open Access Journals (Sweden)

    Vernon eLawhern

    2011-05-01

    Full Text Available There is emerging evidence that individual sensory neurons in the rodent brain rely on temporal features of the discharge pattern to code differences in taste quality information. In contrast, investigations of individual sensory neurons in the periphery have focused on analysis of spike rate and mostly disregarded spike timing as a taste quality coding mechanism. The purpose of this work was to determine the contribution of spike timing to taste quality coding by rat geniculate ganglion neurons using computational methods that have been applied successfully in other systems. We recorded the discharge patterns of narrowly-tuned and broadly-tuned neurons in the rat geniculate ganglion to representatives of the five basic taste qualities. We used mutual information to determine significant responses and the van Rossum metric to characterize their temporal features. While our findings show that spike timing contributes a significant part of the message, spike rate contributes the largest portion of the message relayed by afferent neurons from rat fungiform taste buds to the brain. Thus, spike rate and spike timing together are more effective than spike rate alone in coding stimulus quality information for a single basic taste in the periphery, for both narrowly-tuned specialist and broadly-tuned generalist neurons.

  16. Impact of spike train autostructure on probability distribution of joint spike events.

    Science.gov (United States)

    Pipa, Gordon; Grün, Sonja; van Vreeswijk, Carl

    2013-05-01

    The debate over whether temporally coordinated spiking activity really exists, and whether it is relevant, has been heated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that were suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically with the Fano factor (FFc). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability of false positives if a different process type is assumed than was actually present. We also determine how manipulations of such spike trains, here dithering, used for the generation of surrogate data, change the distribution of coincident events and influence the significance estimation. We find, first, that the width of the coincidence count distribution and its FFc depend critically and in a nontrivial way on the detailed properties of the structure of the spike trains as characterized by the coefficient of variation CV. Second, the dependence of the FFc on the CV is complex and mostly nonmonotonic. Third, spike dithering, even if as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.
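    A minimal Monte Carlo sketch in the spirit of the analysis above, counting coincident bins between two independent gamma renewal spike trains and estimating the Fano factor of the coincidence count, is shown below; the rates, shape parameter and bin width are assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_renewal_train(rate, shape, duration):
    """Spike times (s) of a gamma renewal process with mean rate `rate` Hz."""
    scale = 1.0 / (rate * shape)                       # mean ISI = 1/rate
    isis = rng.gamma(shape, scale, size=int(2 * rate * duration) + 10)
    times = np.cumsum(isis)
    return times[times < duration]

def coincidence_count(t1, t2, duration, bin_width=0.005):
    """Number of bins of width bin_width in which both trains have a spike."""
    bins = np.arange(0.0, duration + bin_width, bin_width)
    c1, _ = np.histogram(t1, bins)
    c2, _ = np.histogram(t2, bins)
    return int(np.sum((c1 > 0) & (c2 > 0)))

counts = np.array([
    coincidence_count(gamma_renewal_train(20.0, 4.0, 10.0),
                      gamma_renewal_train(20.0, 4.0, 10.0), duration=10.0)
    for _ in range(1000)
])
print("mean count:", counts.mean(), "Fano factor:", counts.var() / counts.mean())
```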

  17. Magnetic Tunnel Junction Based Long-Term Short-Term Stochastic Synapse for a Spiking Neural Network with On-Chip STDP Learning

    Science.gov (United States)

    Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik

    2016-07-01

    Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, general-purpose computing platforms and custom hardware architectures implemented in standard CMOS technology have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance-driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device-to-system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.

  18. Simulating synchronization in neuronal networks

    Science.gov (United States)

    Fink, Christian G.

    2016-06-01

    We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
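    A minimal sketch of this kind of simulation (Watts-Strogatz connectivity driving leaky integrate-and-fire units with simple pulse coupling) is given below; all parameter values are assumptions chosen only to make the toy model spike, not those used in the article:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

# Small-world connectivity: n nodes, k nearest neighbours, rewiring probability p
n, k, p = 100, 6, 0.1
A = nx.to_numpy_array(nx.watts_strogatz_graph(n, k, p, seed=1))

# Leaky integrate-and-fire parameters (dimensionless, assumed)
dt, tau, v_th, v_reset = 0.1, 10.0, 1.0, 0.0
i_ext, w_syn = 1.05, 0.02           # suprathreshold drive, weak coupling
v = rng.uniform(0.0, 1.0, n)        # random initial membrane potentials

spikes = []
for _ in range(5000):
    fired = v >= v_th
    spikes.append(fired.copy())
    v[fired] = v_reset
    v += dt / tau * (-v + i_ext) + w_syn * (A @ fired.astype(float))

spikes = np.array(spikes)
counts = spikes.sum(axis=1)         # instantaneous population spike count
print("mean rate per step:", spikes.mean(),
      "crude synchrony index:", counts.var() / max(counts.mean(), 1e-12))
```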

  19. Nicotine-Mediated ADP to Spike Transition: Double Spiking in Septal Neurons.

    Science.gov (United States)

    Kodirov, Sodikdjon A; Wehrmeister, Michael; Colom, Luis

    2016-04-01

    The majority of neurons in the lateral septum (LS) are electrically silent at resting membrane potential. Nicotine transiently excites a subset of neurons and, upon longer applications, occasionally leads to long-lasting bursting activity. We observed simultaneous changes in the frequencies and amplitudes of spontaneous action potentials (APs) in the presence of nicotine. During prolonged exposure, nicotine increased the number of spikes within a burst. One of the hallmarks of the nicotine effect was the occurrence of double spikes (also known as bursting). Alignment of 51 spontaneous spikes triggered during continuous application of nicotine revealed that the slope of the after-depolarizing potential gradually increased (1.4 vs. 3 mV/ms) and the neuron fired a second AP, termed double spiking. A transition from a single AP to double spikes increased the amplitude of the after-hyperpolarizing potential. The amplitude of the second (premature) AP was smaller than that of the first, and this correlation persisted for their duration (half-width). Similar bursting activity in the presence of nicotine, to our knowledge, has not been reported previously in the septal structure in general and in the LS in particular.

  20. Convolutional networks for fast, energy-efficient neuromorphic computing

    Science.gov (United States)

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  1. Runtime reconfiguration in networked embedded systems design and testing practices

    CERN Document Server

    Exarchakos, George

    2016-01-01

    This book focuses on the design and testing of large-scale, distributed signal processing systems, with a special emphasis on systems architecture, tooling and best practices. Architecture modeling, model checking, model-based evaluation and model-based design optimization occupy central roles. Target systems with resource constraints on processing, communication or energy supply require non-trivial methodologies to model their non-functional requirements, such as timeliness, robustness, lifetime and “evolution” capacity. Besides the theoretical foundations of the methodology, an engineering process and toolchain are described. Real-world cases illustrate the theory and practice tested by the authors in the course of the European project ARTEMIS DEMANES. The book can be used as a “cookbook” for designers and practitioners working with complex embedded systems like sensor networks for the structural integrity monitoring of steel bridges, and distributed micro-climate control systems for greenhouses and...

  2. Cellular and circuit mechanisms maintain low spike co-variability and enhance population coding in somatosensory cortex

    Directory of Open Access Journals (Sweden)

    Cheng eLy

    2012-03-01

    Full Text Available The responses of cortical neurons are highly variable across repeated presentations of a stimulus. Understanding this variability is critical for theories of both sensory and motor processing, since response variance affects the accuracy of neural codes. Despite this influence, the cellular and circuit mechanisms that shape the trial-to-trial variability of population responses remain poorly understood. We used a combination of experimental and computational techniques to uncover the mechanisms underlying the response variability of populations of pyramidal (E) cells in layer 2/3 of rat whisker barrel cortex. Spike trains recorded from pairs of E-cells during either spontaneous activity or whisker-deflection responses show similarly low levels of spiking co-variability, despite large differences in network activation between the two states. We developed network models that show how spike threshold nonlinearities dilute E-cell spiking co-variability during spontaneous activity and low-velocity whisker deflections. In contrast, during high-velocity whisker deflections, cancellation mechanisms mediated by feedforward inhibition maintain low E-cell pairwise co-variability. Thus, the combination of these two mechanisms ensures low E-cell population variability over a wide range of whisker deflection velocities. Finally, we show how this active decorrelation of population variability leads to a drastic increase in the population information about whisker velocity. The canonical cellular and circuit components of our study suggest that low network variability over a broad range of neural states may generalize across the nervous system.

  3. Determination of the cell tropism of serotype 1 feline infectious peritonitis virus using the spike affinity histochemistry in paraffin-embedded tissues.

    Science.gov (United States)

    Cham, Tat-Chuan; Chang, Yen-Chen; Tsai, Pei-Shiue; Wu, Ching-Ho; Chen, Hui-Wen; Jeng, Chian-Ren; Pang, Victor Fei; Chang, Hui-Wen

    2017-08-01

    Unlike for serotype II feline coronaviruses (FCoV II), the cellular receptor for serotype I FCoV (FCoV I), the most prevalent FCoV serotype, is unknown. To provide a platform for assessing the pattern by which FCoV I attaches to its host receptor(s), HEK293 cell lines were established that stably express the ectodomains of the spike (S) proteins derived from the FCoV I feline enteric coronavirus strain UU7 (FECV UU7) and the feline infectious peritonitis virus strain UU4 (FIPV UU4). Using the recombinant S proteins as probes for S protein affinity histochemistry in paraffin-embedded tissues, no tissue or enteric binding of the FECV UU7 S protein was detected; by immunohistochemistry, however, the tissue distribution of FIPV UU4 S protein-bound cells was found to correlate with that of FIPV antigen-positive cells and with FIP-associated lesions, and the affinity binding of the FIPV UU4 S protein to macrophages was not affected by enzymatic removal of host cell-surface sialic acid with neuraminidase. These findings suggest that a factor(s) other than sialic acid contribute(s) to the macrophage tropism of FIPV strain UU4. This approach allows more information to be obtained about both virus-host cell interactions and the biological characteristics of the unidentified cellular receptor for FCoV I. © 2017 The Societies and John Wiley & Sons Australia, Ltd.

  4. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.

  5. Two-point resistance of a resistor network embedded on a globe.

    Science.gov (United States)

    Tan, Zhi-Zhong; Essam, J W; Wu, F Y

    2014-07-01

    We consider the problem of two-point resistance in an (m-1) × n resistor network embedded on a globe, a geometry topologically equivalent to an m × n cobweb with its boundary collapsed into one single point. We deduce a concise formula for the resistance between any two nodes on the globe using a method of direct summation pioneered by one of us [Z.-Z. Tan, L. Zhou, and J. H. Yang, J. Phys. A: Math. Theor. 46, 195202 (2013)]. This method is contrasted with the Laplacian matrix approach formulated also by one of us [F. Y. Wu, J. Phys. A: Math. Gen. 37, 6653 (2004)], which is difficult to apply to the geometry of a globe. Our analysis gives the result in the form of a single summation.
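    The direct-summation formula of the paper is not reproduced here, but the Laplacian-matrix approach it is contrasted with is easy to sketch numerically: the two-point resistance follows from the Moore-Penrose pseudoinverse of the weighted graph Laplacian as R_ab = L+_aa + L+_bb - 2 L+_ab. A small Python illustration (for an arbitrary resistor network, not the globe geometry specifically):

```python
import numpy as np
import networkx as nx

def two_point_resistance(G, a, b):
    """Effective resistance between nodes a and b of a resistor network.

    The optional edge attribute 'resistance' gives each resistor's value (ohms);
    its reciprocal (the conductance) enters the weighted graph Laplacian.
    """
    nodes = list(G.nodes)
    idx = {u: i for i, u in enumerate(nodes)}
    L = np.zeros((len(nodes), len(nodes)))
    for u, v, data in G.edges(data=True):
        g = 1.0 / data.get("resistance", 1.0)
        i, j = idx[u], idx[v]
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    Lp = np.linalg.pinv(L)                     # Moore-Penrose pseudoinverse
    i, j = idx[a], idx[b]
    return Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]

# Sanity check: two unit resistors in series give 2 ohms.
print(two_point_resistance(nx.path_graph(3), 0, 2))
```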

  6. Computational modeling of distinct neocortical oscillations driven by cell-type selective optogenetic drive: Separable resonant circuits controlled by low-threshold spiking and fast-spiking interneurons

    Directory of Open Access Journals (Sweden)

    Dorea Vierling-Claassen

    2010-11-01

    Full Text Available Selective optogenetic drive of fast spiking interneurons (FS) leads to enhanced local field potential (LFP) power across the traditional gamma frequency band (20-80 Hz; Cardin et al., 2009). In contrast, drive to regular-spiking pyramidal cells (RS) enhances power at lower frequencies, with a peak at 8 Hz. The first result is consistent with previous computational studies emphasizing the role of FS and the time constant of GABAA synaptic inhibition in gamma rhythmicity. However, the same theoretical models do not typically predict low-frequency LFP enhancement with RS drive. To develop hypotheses as to how the same network can support these contrasting behaviors, we constructed a biophysically principled network model of primary somatosensory neocortex containing FS, RS and low-threshold-spiking (LTS) interneurons. Cells were modeled with detailed cell anatomy and physiology, multiple dendritic compartments, and included active somatic and dendritic ionic currents. Consistent with prior studies, the model demonstrated gamma resonance during FS drive, dependent on the time-constant of GABAA inhibition induced by synchronous FS activity. Lower frequency enhancement during RS drive was replicated only on inclusion of an inhibitory LTS population, whose activation was critically dependent on RS synchrony and evoked longer-lasting inhibition. Our results predict that differential recruitment of FS and LTS inhibitory populations is essential to the observed cortical dynamics and may provide a means for amplifying the natural expression of distinct oscillations in normal cortical processing.

  7. Graph embedding with rich information through heterogeneous graph

    KAUST Repository

    Sun, Guolei

    2017-11-12

    Graph embedding, aiming to learn low-dimensional representations for nodes in graphs, has attracted increasing attention due to its critical applications, including node classification, link prediction and clustering in social network analysis. Most existing algorithms for graph embedding rely only on the topology information and fail to use the copious information in nodes as well as edges. As a result, their performance on many tasks may not be satisfactory. In this thesis, we propose a novel and general framework for graph embedding with rich text information (GERI) through constructing a heterogeneous network, in which we integrate node and edge content information with graph topology. Specifically, we design a novel biased random walk to explore the constructed heterogeneous network with the notion of a flexible neighborhood. Our sampling strategy can compromise between BFS and DFS local search on the heterogeneous graph. To further improve our algorithm, we propose semi-supervised GERI (SGERI), which learns the graph embedding in a discriminative manner through the heterogeneous network with label information. The efficacy of our method is demonstrated by extensive comparison experiments with 9 baselines over multi-label and multi-class classification on various datasets, including Citeseer, Cora, DBLP and Wiki. It shows that GERI improves the Micro-F1 and Macro-F1 of node classification by up to 10%, and SGERI improves GERI by 5% on Wiki.
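    GERI's biased walk over the constructed heterogeneous network is specific to the thesis; as a loose illustration of the general idea of interpolating between BFS-like and DFS-like exploration, a node2vec-style second-order walk (a hypothetical stand-in with the usual return bias p and in-out bias q) can be sketched as:

```python
import random
import networkx as nx

def biased_walk(G, start, length, p=1.0, q=0.5, rng=random.Random(0)):
    """Second-order biased walk: small q favours outward (DFS-like) moves,
    large q favours local (BFS-like) moves; p penalizes immediate returns."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = list(G.neighbors(cur))
        if not nbrs:
            break
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:                  # step back to the previous node
                weights.append(1.0 / p)
            elif G.has_edge(x, prev):      # stay within the local neighbourhood
                weights.append(1.0)
            else:                          # move outward
                weights.append(1.0 / q)
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk

G = nx.karate_club_graph()
walks = [biased_walk(G, node, 20) for node in G.nodes for _ in range(5)]
# The walks could then be fed to a skip-gram model to learn node embeddings.
print(walks[0])
```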

  8. The embedded operating system project

    Science.gov (United States)

    Campbell, R. H.

    1984-01-01

    This progress report describes research towards the design and construction of embedded operating systems for real-time advanced aerospace applications. The applications concerned require reliable operating system support that must accommodate networks of computers. The report addresses the problems of constructing such operating systems, the communications media, reconfiguration, consistency and recovery in a distributed system, and the issues of realtime processing. A discussion is included on suitable theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems. In particular, this report addresses: atomic actions, fault tolerance, operating system structure, program development, reliability and availability, and networking issues. This document reports the status of various experiments designed and conducted to investigate embedded operating system design issues.

  9. An efficient automated parameter tuning framework for spiking neural networks.

    Science.gov (United States)

    Carlson, Kristofor D; Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L

    2014-01-01

    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EA) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation was carried out and showed a speedup of 65× of the GPU implementation over the CPU implementation, or 0.35 h per generation for GPU vs. 23.5 h per generation for CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
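    The framework itself is GPU-accelerated and paper-specific; as a generic sketch of the evolutionary loop it relies on (the fitness function, elitist selection and Gaussian mutation below are placeholders, not the authors' implementation), one might write:

```python
import numpy as np

rng = np.random.default_rng(42)

def evolve(fitness, bounds, pop_size=32, generations=50, elite_frac=0.25,
           mutation_scale=0.1):
    """Minimal elitist evolutionary search over a real-valued parameter vector.

    fitness : callable mapping a parameter vector to a scalar (higher is better),
              e.g. similarity of an SNN's tuning curves to a target response.
    bounds  : (low, high) arrays giving the allowed range of each parameter.
    """
    low, high = map(np.asarray, bounds)
    pop = rng.uniform(low, high, size=(pop_size, low.size))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        n_elite = max(1, int(elite_frac * pop_size))
        elite = pop[np.argsort(scores)[::-1][:n_elite]]
        parents = elite[rng.integers(0, n_elite, pop_size - n_elite)]
        children = parents + rng.normal(0.0, mutation_scale * (high - low),
                                        parents.shape)
        pop = np.clip(np.vstack([elite, children]), low, high)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]

# Toy usage: recover a known parameter vector (a stand-in for an SNN evaluation).
target = np.array([0.2, 1.5, -0.7])
best = evolve(lambda x: -np.sum((x - target) ** 2),
              bounds=(np.full(3, -2.0), np.full(3, 2.0)))
print(best)
```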

  10. SPECT detector system design based on embedded system

    International Nuclear Information System (INIS)

    Zhang Weizheng; Zhao Shujun; Zhang Lei; Sun Yuanling

    2007-01-01

    A single-photon emission computed tomography (SPECT) detector system based on embedded Linux is designed. The system is composed of a detector module, a data acquisition module, an ARM MPU module, a network interface communication module and a human-machine interface module. Its software uses multithreading technology based on embedded Linux. It can achieve high-speed data acquisition, real-time data correction and network data communication. It can accelerate the data acquisition and decrease the dead time. The accuracy and stability of the system can thereby be improved. (authors)

  11. Pixels to Graphs by Associative Embedding

    KAUST Repository

    Newell, Alejandro; Deng, Jia

    2017-01-01

    network such that it takes in an input image and produces a full graph. This is done end-to-end in a single stage with the use of associative embeddings. The network learns to simultaneously identify all of the elements that make up a graph and piece them

  12. Spike: Artificial intelligence scheduling for Hubble space telescope

    Science.gov (United States)

    Johnston, Mark; Miller, Glenn; Sponsler, Jeff; Vick, Shon; Jackson, Robert

    1990-01-01

    Efficient utilization of spacecraft resources is essential, but the accompanying scheduling problems are often computationally intractable and are difficult to approximate because of the presence of numerous interacting constraints. Artificial intelligence techniques were applied to the scheduling of the NASA/ESA Hubble Space Telescope (HST). This presents a particularly challenging problem, since a yearlong observing program can contain some tens of thousands of exposures which are subject to a large number of scientific, operational, spacecraft, and environmental constraints. New techniques were developed for machine reasoning about scheduling constraints and goals, especially in cases where uncertainty is an important scheduling consideration and where resolving conflicts among competing preferences is essential. These techniques were utilized in a set of workstation-based scheduling tools (Spike) for HST. Graphical displays of activities, constraints, and schedules are an important feature of the system. High-level scheduling strategies using both rule-based and neural network approaches were developed. While the specific constraints implemented are those most relevant to HST, the framework developed is far more general and could easily handle other kinds of scheduling problems. The concept and implementation of the Spike system are described, along with some experiments in adapting Spike to other spacecraft scheduling domains.

  13. Modern Embedded Computing Designing Connected, Pervasive, Media-Rich Systems

    CERN Document Server

    Barry, Peter

    2012-01-01

    Modern embedded systems are used for connected, media-rich, and highly integrated handheld devices such as mobile phones, digital cameras, and MP3 players. All of these embedded systems require networking, graphic user interfaces, and integration with PCs, as opposed to traditional embedded processors that can perform only limited functions for industrial applications. While most books focus on these controllers, Modern Embedded Computing provides a thorough understanding of the platform architecture of modern embedded computing systems that drive mobile devices. The book offers a comprehen

  14. GABAergic activities control spike timing- and frequency-dependent long-term depression at hippocampal excitatory synapses

    Directory of Open Access Journals (Sweden)

    Makoto Nishiyama

    2010-06-01

    Full Text Available GABAergic interneuronal network activities in the hippocampus control a variety of neural functions, including learning and memory, by regulating θ and γ oscillations. How these GABAergic activities at pre- and post-synaptic sites of hippocampal CA1 pyramidal cells differentially contribute to synaptic function and plasticity during their repetitive pre- and post-synaptic spiking at θ and γ oscillations is largely unknown. We show here that activities mediated by postsynaptic GABAARs and presynaptic GABABRs determine, respectively, the spike timing- and frequency-dependence of activity-induced synaptic modifications at Schaffer collateral-CA1 excitatory synapses. We demonstrate that both feedforward and feedback GABAAR-mediated inhibition in the postsynaptic cell controls the spike timing-dependent long-term depression of excitatory inputs (“e-LTD”) at the θ frequency. We also show that feedback postsynaptic inhibition specifically causes e-LTD of inputs that induce small postsynaptic currents (<70 pA) with LTP timing, thus enforcing the requirement of cooperativity for induction of long-term potentiation at excitatory inputs (“e-LTP”). Furthermore, under spike-timing protocols that induce e-LTP and e-LTD at excitatory synapses, we observed parallel induction of LTP and LTD at inhibitory inputs (“i-LTP” and “i-LTD”) to the same postsynaptic cells. Finally, we show that presynaptic GABABR-mediated inhibition plays a major role in the induction of frequency-dependent e-LTD at α and β frequencies. These observations demonstrate the critical influence of GABAergic interneuronal network activities in regulating the spike timing and frequency dependences of long-term synaptic modifications in the hippocampus.

  15. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks.

    Science.gov (United States)

    Zhang, Xu; Foderaro, Greg; Henriquez, Craig; Ferrari, Silvia

    2018-03-01

    Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly, in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths only change as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.

  16. Rebound spiking in layer II medial entorhinal cortex stellate cells: Possible mechanism of grid cell function

    Science.gov (United States)

    Shay, Christopher F.; Ferrante, Michele; Chapman, G. William; Hasselmo, Michael E.

    2015-01-01

    Rebound spiking properties of medial entorhinal cortex (mEC) stellate cells induced by inhibition may underlie their functional properties in awake behaving rats, including the temporal phase separation of distinct grid cells and differences in grid cell firing properties. We investigated rebound spiking properties using whole cell patch recording in entorhinal slices, holding cells near spiking threshold and delivering sinusoidal inputs, superimposed with realistic inhibitory synaptic inputs to test the capacity of cells to selectively respond to specific phases of inhibitory input. Stellate cells showed a specific phase range of hyperpolarizing inputs that elicited spiking, but non-stellate cells did not show phase specificity. In both cell types, the phase range of spiking output occurred between the peak and subsequent descending zero crossing of the sinusoid. The phases of inhibitory inputs that induced spikes shifted earlier as the baseline sinusoid frequency increased, while spiking output shifted to later phases. Increases in magnitude of the inhibitory inputs shifted the spiking output to earlier phases. Pharmacological blockade of h-current abolished the phase selectivity of hyperpolarizing inputs eliciting spikes. A network computational model using cells possessing similar rebound properties as found in vitro produces spatially periodic firing properties resembling grid cell firing when a simulated animal moves along a linear track. These results suggest that the ability of mEC stellate cells to fire rebound spikes in response to a specific range of phases of inhibition could support complex attractor dynamics that provide completion and separation to maintain spiking activity of specific grid cell populations. PMID:26385258

  17. Information transmission with spiking Bayesian neurons

    International Nuclear Information System (INIS)

    Lochmann, Timm; Deneve, Sophie

    2008-01-01

    Spike trains of cortical neurons resulting from repeated presentations of a stimulus are variable and exhibit Poisson-like statistics. Many models of neural coding therefore assumed that sensory information is contained in instantaneous firing rates, not spike times. Here, we ask how much information about time-varying stimuli can be transmitted by spiking neurons with such input and output variability. In particular, does this variability imply spike generation to be intrinsically stochastic? We consider a model neuron that estimates optimally the current state of a time-varying binary variable (e.g. presence of a stimulus) by integrating incoming spikes. The unit signals its current estimate to other units with spikes whenever the estimate has increased by a fixed amount. As shown previously, this computation results in integrate-and-fire dynamics with Poisson-like output spike trains. This output variability is entirely due to the stochastic input rather than noisy spike generation. As a result, such a deterministic neuron can transmit most of the information about the time-varying stimulus. This contrasts with a standard model of sensory neurons, the linear-nonlinear Poisson (LNP) model, which assumes that most variability in output spike trains is due to stochastic spike generation. Although it yields the same firing statistics, we found that such noisy firing results in the loss of most information. Finally, we use this framework to compare potential effects of top-down attention versus bottom-up saliency on information transfer with spiking neurons.

  18. Characterization of reliability of spike timing in spinal interneurons during oscillating inputs

    DEFF Research Database (Denmark)

    Beierholm, Ulrik; Nielsen, Carsten D.; Ryge, Jesper

    2001-01-01

    The spike timing in rhythmically active interneurons in the mammalian spinal locomotor network varies from cycle to cycle. We tested the contribution from passive membrane properties to this variable firing pattern, by measuring the reliability of spike timing, P, in interneurons in the isolated... For the analysis we used a leaky integrate and fire (LIF) model with a noise term added. The LIF model was able to reproduce the experimentally observed properties of P as well as the low-pass properties of the membrane. The LIF model enabled us to use the mathematical theory of nonlinear oscillators to analyze... that interneurons can respond with a high reliability of spike timing, but only by combining fast and slow oscillations is it possible to obtain a high reliability of firing during rhythmic locomotor movements. Theoretical analysis of the rotation number provided new insights into the mechanism for obtaining...

  19. Phenomenological understanding of dewetting and embedding of noble metal nanoparticles in thin films induced by ion irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Prakash, Jai, E-mail: jai.gupta1983@gmail.com [Department of Chemistry, MMH College (Ch. Charan Singh University Meerut), Ghaiziabad 201001 (India); Chemical Physics of Materials, Université Libre de Bruxelles, Campus de la Plaine, CP 243, B-1050 Bruxelles (Belgium); Tripathi, A. [Inter University Accelerator Centre, Aruna Asif Ali Marg, New Delhi 110067 (India); Gautam, Sanjeev; Chae, K.H.; Song, Jonghan [Advanced Analysis Center, Korea Institute of Science and Technology, Seoul 136–791 (Korea, Republic of); Rigato, V. [INFN Laboratori Nazionali di Legnaro, Via Romea. 4, 35020 Legnaro, Padova (Italy); Tripathi, Jalaj [Department of Chemistry, MMH College (Ch. Charan Singh University Meerut), Ghaiziabad 201001 (India); Asokan, K. [Inter University Accelerator Centre, Aruna Asif Ali Marg, New Delhi 110067 (India)

    2014-10-15

    The present experimental work provides a phenomenological approach to understanding dewetting in thin noble metal films, the subsequent formation of nanoparticles (NPs), and the embedding of NPs induced by ion irradiation. Au/polyethylene terephthalate (PET) bilayers were irradiated with 150 keV Ar ions at varying fluences and were studied using scanning electron microscopy (SEM) and cross-sectional transmission electron microscopy (X-TEM). The thin Au film begins to dewet from the substrate after irradiation, and subsequent irradiation results in spherical nanoparticles on the surface that, at a fluence of 5 × 10¹⁶ ions/cm², become embedded into the substrate. In addition to dewetting in thin films and the synthesis and embedding of metal NPs by ion irradiation, the present article systematically explores the fundamental thermodynamic principles that govern these events under irradiation. The results are explained on the basis of ion-induced sputtering, thermal-spike-induced local melting, and thermodynamic driving forces arising from minimization of the system free energy, where the contributions of surface and interfacial energies are considered together with subsequent ion-induced viscous flow in the substrate. - Highlights: • Phenomenological interpretation of dewetting and embedding of metal NPs in thin films. • Exploration of fundamental thermodynamic principles under the influence of ion irradiation. • Ion-induced surface/interface microstructural changes studied using SEM/X-TEM. • Ion-induced sputtering and thermal-spike-induced local melting. • Thermodynamic driving forces related to surface and interfacial energies.

  20. Memory recall and spike-frequency adaptation

    Science.gov (United States)

    Roach, James P.; Sander, Leonard M.; Zochowski, Michal R.

    2016-05-01

    The brain can reproduce memories from partial data; this ability is critical for memory recall. The process of memory recall has been studied using autoassociative networks such as the Hopfield model. This kind of model reliably converges to stored patterns that contain the memory. However, it is unclear how the behavior is controlled by the brain so that after convergence to one configuration, it can proceed with recognition of another one. In the Hopfield model, this happens only through unrealistic changes of an effective global temperature that destabilizes all stored configurations. Here we show that spike-frequency adaptation (SFA), a common mechanism affecting neuron activation in the brain, can provide state-dependent control of pattern retrieval. We demonstrate this in a Hopfield network modified to include SFA, and also in a model network of biophysical neurons. In both cases, SFA allows for selective stabilization of attractors with different basins of attraction, and also for temporal dynamics of attractor switching that is not possible in standard autoassociative schemes. The dynamics of our models give a plausible account of different sorts of memory retrieval.
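    A minimal sketch of a Hopfield-style network with an adaptation (fatigue) variable added to each unit is shown below; it is only meant to illustrate how an SFA-like term enters the dynamics, and the gain, adaptation strength and time constant are assumed values rather than those of the models in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

n, n_patterns = 200, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = patterns.T @ patterns / n                  # Hebbian storage of the patterns
np.fill_diagonal(W, 0.0)

beta, g_adapt, tau_adapt = 4.0, 0.6, 20.0      # gain, adaptation strength/time (assumed)
s = np.tanh(patterns[0] + 0.3 * rng.standard_normal(n))   # noisy cue of pattern 0
a = np.zeros(n)                                # slow adaptation (fatigue) variable

for _ in range(300):
    h = W @ s - g_adapt * a                    # adaptation acts as an activity-dependent bias
    s = np.tanh(beta * h)
    a += (s - a) / tau_adapt                   # builds up for persistently active units

overlaps = patterns @ np.sign(s) / n           # retrieval quality for each stored pattern
print("overlaps with stored patterns:", overlaps)
```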

  1. Controlling market power and price spikes in electricity networks: Demand-side bidding.

    Science.gov (United States)

    Rassenti, Stephen J; Smith, Vernon L; Wilson, Bart J

    2003-03-04

    In this article we report an experiment that examines how demand-side bidding can discipline generators in a market for electric power. First we develop a treatment without demand-side bidding; two large firms are allocated baseload and intermediate cost generators such that either firm might unilaterally withhold the capacity of its intermediate cost generators from the market to benefit from the supracompetitive prices that would result from only selling its baseload units. In a converse treatment, ownership of some of the intermediate cost generators is transferred from each of these firms to two other firms such that no one firm could unilaterally restrict output to spawn supracompetitive prices. Having established a well controlled data set with price spikes paralleling those observed in the naturally occurring economy, we also extend the design to include demand-side bidding. We find that demand-side bidding completely neutralizes the exercise of market power and eliminates price spikes even in the presence of structural market power.

  2. Barbed micro-spikes for micro-scale biopsy

    Science.gov (United States)

    Byun, Sangwon; Lim, Jung-Min; Paik, Seung-Joon; Lee, Ahra; Koo, Kyo-in; Park, Sunkil; Park, Jaehong; Choi, Byoung-Doo; Seo, Jong Mo; Kim, Kyung-ah; Chung, Hum; Song, Si Young; Jeon, Doyoung; Cho, Dongil

    2005-06-01

    Single-crystal silicon planar micro-spikes with protruding barbs are developed for micro-scale biopsy and the feasibility of using the micro-spike as a micro-scale biopsy tool is evaluated for the first time. The fabrication process utilizes a deep silicon etch to define the micro-spike outline, resulting in protruding barbs of various shapes. Shanks of the fabricated micro-spikes are 3 mm long, 100 µm thick and 250 µm wide. Barbs protruding from micro-spike shanks facilitate the biopsy procedure by tearing off and retaining samples from target tissues. Micro-spikes with barbs successfully extracted tissue samples from the small intestines of the anesthetized pig, whereas micro-spikes without barbs failed to obtain a biopsy sample. Parylene coating can be applied to improve the biocompatibility of the micro-spike without deteriorating the biopsy function of the micro-spike. In addition, to show that the biopsy with the micro-spike can be applied to tissue analysis, samples obtained by micro-spikes were examined using immunofluorescent staining. Nuclei and F-actin of cells which are extracted by the micro-spike from a transwell were clearly visualized by immunofluorescent staining.

  3. Spike train generation and current-to-frequency conversion in silicon diodes

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A device physics model is developed to analyze spontaneous neuron-like spike train generation in current driven silicon p(+)-n-n(+) devices in cryogenic environments. The model is shown to explain the very high dynamic range (10^7) current-to-frequency conversion and experimental features of the spike train frequency as a function of input current. The devices are interesting components for implementation of parallel asynchronous processing adjacent to cryogenically cooled focal planes because of their extremely low current and power requirements, their electronic simplicity, and their pulse coding capability, and could be used to form the hardware basis for neural networks which employ biologically plausible means of information coding.

  4. Silver nanoparticles embedded in amine-functionalized silicate sol–gel network assembly for sensing cysteine, adenosine and NADH

    International Nuclear Information System (INIS)

    Maduraiveeran, Govindhan; Ramaraj, Ramasamy

    2011-01-01

    Silver nanoparticles embedded in amine-functionalized silicate sol–gel network were synthesized and used for sensing biomolecules such as cysteine, adenosine, and β-nicotinamide adenine dinucleotide (NADH). The sensing of these biomolecules by the assembly of silver nanoparticles was triggered by the optical response of the surface plasmon resonance (SPR) of the silver nanoparticles. The optical sensor exhibited the lowest detection limit (LOD) of 5, 20, and 5 μM for cysteine, adenosine, and NADH, respectively. The sensing of biomolecules in the micromolar range by using the amine-functionalized silicate sol–gel embedded silver nanoparticles was studied in the presence of interference molecules like uridine, glycine, guanine, and guanosine. Thus, the present approach might open up a new avenue for the development of silver nanoparticles-based optical sensor devices for biomolecules.

  5. Steganographic embedding in containers-images

    Science.gov (United States)

    Nikishova, A. V.; Omelchenko, T. A.; Makedonskij, S. A.

    2018-05-01

    Steganography is one of the approaches to ensuring the protection of information transmitted over a network, but the steganographic method should vary depending on the container used. According to statistics, the most widely used containers are images, and the most common image format is JPEG. The authors propose a method of data embedding into the frequency domain of images in the JPEG 2000 format. It is proposed to use the method of Benham-Memon-Yeo-Yeung, in which the discrete cosine transform is replaced by the discrete wavelet transform. Two requirements for images are formulated. Structural similarity is chosen as the quality measure of the data embedding. Experiments confirm that satisfying these requirements allows a high quality of data embedding to be achieved.
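    The Benham-Memon-Yeo-Yeung scheme itself is not reproduced here; the following is a toy, hypothetical sketch of the general idea of hiding bits in wavelet detail coefficients, using the PyWavelets package and a simple sign-based rule:

```python
import numpy as np
import pywt

def embed_bits(image, bits, strength=8.0):
    """Toy DWT-domain embedding: force selected horizontal detail coefficients
    to +strength / -strength to encode 1 / 0 (not the published algorithm)."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    flat = cH.flatten()
    for i, b in enumerate(bits):
        flat[i] = strength if b else -strength
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), "haar")

def extract_bits(stego, n_bits):
    _, (cH, _, _) = pywt.dwt2(stego.astype(float), "haar")
    return [1 if c > 0 else 0 for c in cH.flatten()[:n_bits]]

cover = np.random.default_rng(7).integers(0, 256, (64, 64))
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_bits(cover, message)
print(extract_bits(stego, len(message)) == message)   # True for this toy scheme
```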

  6. Network and neuronal membrane properties in hybrid networks reciprocally regulate selectivity to rapid thalamocortical inputs.

    Science.gov (United States)

    Pesavento, Michael J; Pinto, David J

    2012-11-01

    Rapidly changing environments require rapid processing from sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine the reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) by altering the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and differing membrane properties of functional classes reciprocally modulate this effect.

  7. Real-time embedded face recognition for smart home

    NARCIS (Netherlands)

    Zuo, F.; With, de P.H.N.

    2005-01-01

    We propose a near real-time face recognition system for embedding in consumer applications. The system is embedded in a networked home environment and enables personalized services by automatic identification of users. The aim of our research is to design and build a face recognition system that is

  8. Identification of spikes associated with local sources in continuous time series of atmospheric CO, CO2 and CH4

    Science.gov (United States)

    El Yazidi, Abdelhadi; Ramonet, Michel; Ciais, Philippe; Broquet, Gregoire; Pison, Isabelle; Abbaris, Amara; Brunner, Dominik; Conil, Sebastien; Delmotte, Marc; Gheusi, Francois; Guerin, Frederic; Hazan, Lynn; Kachroudi, Nesrine; Kouvarakis, Giorgos; Mihalopoulos, Nikolaos; Rivier, Leonard; Serça, Dominique

    2018-03-01

    This study deals with the problem of identifying atmospheric data influenced by local emissions that can result in spikes in time series of greenhouse gases and long-lived tracer measurements. We considered three spike detection methods known as coefficient of variation (COV), robust extraction of baseline signal (REBS) and standard deviation of the background (SD) to detect and filter positive spikes in continuous greenhouse gas time series from four monitoring stations representative of the European ICOS (Integrated Carbon Observation System) Research Infrastructure network. The results of the different methods are compared to each other and against a manual detection performed by station managers. Four stations were selected as test cases to apply the spike detection methods: a continental rural tower of 100 m height in eastern France (OPE), a high-mountain observatory in the south-west of France (PDM), a regional marine background site in Crete (FKL) and a marine clean-air background site in the Southern Hemisphere on Amsterdam Island (AMS). This selection allows us to address spike detection problems in time series with different variability. Two years of continuous measurements of CO2, CH4 and CO were analysed. All methods were found to be able to detect short-term spikes (lasting from a few seconds to a few minutes) in the time series. Analysis of the results of each method leads us to exclude the COV method due to the requirement to arbitrarily specify an a priori percentage of rejected data in the time series, which may over- or underestimate the actual number of spikes. The two other methods freely determine the number of spikes for a given set of parameters, and the values of these parameters were calibrated to provide the best match with spikes known to reflect local emissions episodes that are well documented by the station managers. More than 96 % of the spikes manually identified by station managers were successfully detected both in the SD and the

  9. Identification of spikes associated with local sources in continuous time series of atmospheric CO, CO2 and CH4

    Directory of Open Access Journals (Sweden)

    A. El Yazidi

    2018-03-01

    Full Text Available This study deals with the problem of identifying atmospheric data influenced by local emissions that can result in spikes in time series of greenhouse gases and long-lived tracer measurements. We considered three spike detection methods known as coefficient of variation (COV), robust extraction of baseline signal (REBS) and standard deviation of the background (SD) to detect and filter positive spikes in continuous greenhouse gas time series from four monitoring stations representative of the European ICOS (Integrated Carbon Observation System) Research Infrastructure network. The results of the different methods are compared to each other and against a manual detection performed by station managers. Four stations were selected as test cases to apply the spike detection methods: a continental rural tower of 100 m height in eastern France (OPE), a high-mountain observatory in the south-west of France (PDM), a regional marine background site in Crete (FKL) and a marine clean-air background site in the Southern Hemisphere on Amsterdam Island (AMS). This selection allows us to address spike detection problems in time series with different variability. Two years of continuous measurements of CO2, CH4 and CO were analysed. All methods were found to be able to detect short-term spikes (lasting from a few seconds to a few minutes) in the time series. Analysis of the results of each method leads us to exclude the COV method due to the requirement to arbitrarily specify an a priori percentage of rejected data in the time series, which may over- or underestimate the actual number of spikes. The two other methods freely determine the number of spikes for a given set of parameters, and the values of these parameters were calibrated to provide the best match with spikes known to reflect local emissions episodes that are well documented by the station managers. More than 96 % of the spikes manually identified by station managers were successfully detected both in
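    As a rough illustration of the SD (standard deviation of the background) idea referred to in the two records above, a moving-background filter that flags points exceeding the local baseline by k local standard deviations can be sketched as follows; the window length and threshold are assumptions, not the calibrated ICOS settings:

```python
import numpy as np
import pandas as pd

def sd_spike_filter(values, window=120, k=3.0):
    """Flag positive spikes exceeding the running background by k background SDs.

    values : 1-D array, e.g. minute-mean CO2 mole fractions
    window : number of points in the moving background window (assumed)
    k      : rejection threshold in background standard deviations (assumed)
    """
    s = pd.Series(values)
    background = s.rolling(window, center=True, min_periods=window // 2).median()
    sigma = s.rolling(window, center=True, min_periods=window // 2).std()
    return (s - background) > k * sigma          # boolean mask of spike points

# Synthetic example: smooth baseline plus a few local-emission spikes.
rng = np.random.default_rng(5)
co2 = 410 + 0.5 * np.sin(np.linspace(0, 6, 2000)) + rng.normal(0, 0.2, 2000)
co2[[300, 301, 1200]] += 8.0
print("flagged points:", int(sd_spike_filter(co2).sum()))
```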

  10. A Frank mixture copula family for modeling higher-order correlations of neural spike counts

    International Nuclear Information System (INIS)

    Onken, Arno; Obermayer, Klaus

    2009-01-01

    In order to evaluate the importance of higher-order correlations in neural spike count codes, flexible statistical models of dependent multivariate spike counts are required. Copula families, parametric multivariate distributions that represent dependencies, can be applied to construct such models. We introduce the Frank mixture family as a new copula family that has separate parameters for all pairwise and higher-order correlations. In contrast to the Farlie-Gumbel-Morgenstern copula family that shares this property, the Frank mixture copula can model strong correlations. We apply spike count models based on the Frank mixture copula to data generated by a network of leaky integrate-and-fire neurons and compare the goodness of fit to distributions based on the Farlie-Gumbel-Morgenstern family. Finally, we evaluate the importance of using proper single neuron spike count distributions on the Shannon information. We find notable deviations in the entropy that increase with decreasing firing rates. Moreover, we find that the Frank mixture family increases the log likelihood of the fit significantly compared to the Farlie-Gumbel-Morgenstern family. This shows that the Frank mixture copula is a useful tool to assess the importance of higher-order correlations in spike count codes.
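    As a hedged illustration of copula-based spike-count modelling, the snippet below couples two Poisson marginals through a plain bivariate Frank copula using conditional inverse sampling; it does not implement the Frank mixture family with separate higher-order parameters introduced in the paper, and theta, the firing rates and the helper name are arbitrary.

```python
# Couple two Poisson spike-count marginals through a bivariate Frank copula
# via conditional inverse sampling. This is NOT the Frank mixture family of
# the paper; theta and the rates are illustrative choices only.
import numpy as np
from scipy.stats import poisson

def sample_frank_copula(n, theta, rng):
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    # conditional inverse of the Frank copula C(u, v; theta) with respect to u
    v = -np.log1p(w * np.expm1(-theta) / (w + (1 - w) * np.exp(-theta * u))) / theta
    return u, v

rng = np.random.default_rng(1)
u, v = sample_frank_copula(10000, theta=5.0, rng=rng)   # theta > 0: positive dependence
counts_a = poisson.ppf(u, mu=3.0)                       # spike counts of neuron A
counts_b = poisson.ppf(v, mu=7.0)                       # spike counts of neuron B
print(np.corrcoef(counts_a, counts_b)[0, 1])            # clearly positive correlation
```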

  11. Bayesian population decoding of spiking neurons.

    Science.gov (United States)

    Gerwinn, Sebastian; Macke, Jakob; Bethge, Matthias

    2009-01-01

    The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a 'spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.

  12. Bayesian population decoding of spiking neurons

    Directory of Open Access Journals (Sweden)

    Sebastian Gerwinn

    2009-10-01

    Full Text Available The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a `spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.

  13. Heterogeneity of Purkinje cell simple spike-complex spike interactions: zebrin- and non-zebrin-related variations.

    Science.gov (United States)

    Tang, Tianyu; Xiao, Jianqiang; Suh, Colleen Y; Burroughs, Amelia; Cerminara, Nadia L; Jia, Linjia; Marshall, Sarah P; Wise, Andrew K; Apps, Richard; Sugihara, Izumi; Lang, Eric J

    2017-08-01

    Cerebellar Purkinje cells (PCs) generate two types of action potentials, simple and complex spikes. Although they are generated by distinct mechanisms, interactions between the two spike types exist. Zebrin staining produces alternating positive and negative stripes of PCs across most of the cerebellar cortex. Thus, here we compared simple spike-complex spike interactions both within and across zebrin populations. Simple spike activity undergoes a complex modulation preceding and following a complex spike. The amplitudes of the pre- and post-complex spike modulation phases were correlated across PCs. On average, the modulation was larger for PCs in zebrin positive regions. Correlations between aspects of the complex spike waveform and simple spike activity were found, some of which varied between zebrin positive and negative PCs. The implications of the results are discussed with regard to hypotheses that complex spikes are triggered by rises in simple spike activity for either motor learning or homeostatic functions. Purkinje cells (PCs) generate two types of action potentials, called simple and complex spikes (SSs and CSs). We first investigated the CS-associated modulation of SS activity and its relationship to the zebrin status of the PC. The modulation pattern consisted of a pre-CS rise in SS activity, and then, following the CS, a pause, a rebound, and finally a late inhibition of SS activity for both zebrin positive (Z+) and negative (Z-) cells, though the amplitudes of the phases were larger in Z+ cells. Moreover, the amplitudes of the pre-CS rise and the late inhibitory phase of the modulation were correlated across PCs. In contrast, correlations between modulation phases across CSs of individual PCs were generally weak. Next, the relationship between CS spikelets and SS activity was investigated. The number of spikelets/CS correlated with the average SS firing rate only for Z+ cells. In contrast, correlations across CSs between spikelet numbers and the
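    A minimal sketch of the kind of analysis behind such results, assuming nothing about the actual recordings: a peri-complex-spike time histogram that estimates the simple-spike rate around each complex spike. The toy spike trains, bin settings and the function name are placeholders.

```python
# Peri-complex-spike time histogram: average simple-spike (SS) rate in bins
# centred on each complex spike (CS). Toy Poisson SS train with a 20 ms
# pause carved out after each CS; all settings are placeholders.
import numpy as np

def peri_cs_histogram(ss_times, cs_times, window=0.2, bin_width=0.005):
    edges = np.arange(-window, window + bin_width, bin_width)
    counts = np.zeros(edges.size - 1)
    for cs in cs_times:
        rel = ss_times - cs
        counts += np.histogram(rel[(rel >= -window) & (rel < window)], bins=edges)[0]
    rate = counts / (len(cs_times) * bin_width)          # spikes per second
    return edges[:-1] + bin_width / 2, rate

rng = np.random.default_rng(2)
duration = 100.0
ss = np.sort(rng.uniform(0, duration, int(60 * duration)))   # ~60 Hz SS train
cs = np.arange(0.5, duration, 1.0)                           # 1 Hz CS train
for c in cs:
    ss = ss[(ss < c) | (ss > c + 0.02)]                      # post-CS pause
centres, rate = peri_cs_histogram(ss, cs)
print(rate[(centres > 0) & (centres < 0.02)])                # depressed rate after the CS
```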

  14. Embedded Systems for Smart Appliances and Energy Management

    CERN Document Server

    Neumann, Peter; Mahlknecht, Stefan

    2013-01-01

    This book provides a comprehensive introduction to embedded systems for smart appliances and energy management, bringing together for the first time a multidisciplinary blend of topics from embedded systems, information technology and power engineering.  Coverage includes challenges for future resource distribution grids, energy management in smart appliances, micro energy generation, demand response management, ultra-low power stand by, smart standby and communication networks in home and building automation.   Provides a comprehensive, multidisciplinary introduction to embedded systems for smart appliances and energy management; Equips researchers and engineers with information required to succeed in designing energy management for smart appliances; Includes coverage of resource distribution grids, energy management in smart appliances, micro energy generation, demand response management, ultra-low power stand by, smart standby and communication networks in home and building automation.  

  15. Storage of phase-coded patterns via STDP in fully-connected and sparse network: a study of the network capacity

    Directory of Open Access Journals (Sweden)

    Silvia Scarpetta

    2010-08-01

    Full Text Available We study the storage and retrieval of phase-coded patterns as stable dynamical attractors in recurrent neural networks, for both an analog and an integrate-and-fire spiking model. The synaptic strength is determined by a learning rule based on spike-time-dependent plasticity, with an asymmetric time window depending on the relative timing between pre- and post-synaptic activity. We store multiple patterns and study the network capacity. For the analog model, we find that the network capacity scales linearly with the network size, and that both capacity and the oscillation frequency of the retrieval state depend on the asymmetry of the learning time window. In addition to fully-connected networks, we study sparse networks, where each neuron is connected only to a small number $z \ll N$ of other neurons. Connections can be short range, between neighboring neurons placed on a regular lattice, or long range, between randomly chosen pairs of neurons. We find that a small fraction of long range connections is able to amplify the capacity of the network. This implies that a small-world network topology is optimal, as a compromise between the cost of long range connections and the capacity increase. The crucial result of storing and retrieving multiple phase-coded patterns is also observed in the spiking integrate-and-fire model. The capacity of the fully-connected spiking network is investigated, together with the relation between oscillation frequency of the retrieval state and window asymmetry.
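    As a rough sketch of how an asymmetric STDP window can turn a phase-coded pattern into a coupling matrix, the code below sums an exponential learning window over the spike pairs of a periodic pattern, assuming one spike per neuron per cycle. The kernel shape, amplitudes, time constants and oscillation frequency are illustrative assumptions, not the paper's parameters.

```python
# Build a coupling matrix from one phase-coded pattern by summing an
# asymmetric STDP window over spike pairs, assuming one spike per neuron per
# oscillation cycle. Kernel shape and all parameters are illustrative only.
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=0.6, tau_plus=0.01, tau_minus=0.02):
    """Asymmetric learning window A(dt), with dt = t_post - t_pre in seconds."""
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

def couplings_from_phases(phases, freq=10.0, cycles=3):
    n = phases.size
    period = 1.0 / freq
    spike_times = [phases[i] / (2 * np.pi) * period + np.arange(cycles) * period
                   for i in range(n)]
    J = np.zeros((n, n))
    for i in range(n):                # post-synaptic neuron
        for j in range(n):            # pre-synaptic neuron
            if i == j:
                continue
            dt = spike_times[i][:, None] - spike_times[j][None, :]
            J[i, j] = stdp_window(dt).sum() / cycles
    return J

phases = np.random.default_rng(3).uniform(0, 2 * np.pi, 50)
J = couplings_from_phases(phases)
print(J.shape, J.mean())
```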

  16. A distribution network review

    International Nuclear Information System (INIS)

    Fairbairn, R.J.; Maunder, D.; Kenyon, P.

    1999-01-01

    This report summarises the findings of a study reviewing the distribution network in England, Scotland and Wales to evaluate its ability to accommodate more embedded generation from both fossil fuel and renewable energy sources. The background to the study is traced, and descriptions of the existing electricity supply system, the licence conditions relating to embedded generation, and the effects of the Review of Electricity Trading Arrangements are given. The ability of the UK distribution networks to accept embedded generation is examined, and technical benefits/drawbacks arising from embedded generation, and the potential for uptake of embedded generation technologies are considered. The distribution network capacity and the potential uptake of embedded generation are compared, and possible solutions to overcome obstacles are suggested. (UK)

  17. A distribution network review

    Energy Technology Data Exchange (ETDEWEB)

    Fairbairn, R.J.; Maunder, D.; Kenyon, P.

    1999-07-01

    This report summarises the findings of a study reviewing the distribution network in England, Scotland and Wales to evaluate its ability to accommodate more embedded generation from both fossil fuel and renewable energy sources. The background to the study is traced, and descriptions of the existing electricity supply system, the licence conditions relating to embedded generation, and the effects of the Review of Electricity Trading Arrangements are given. The ability of the UK distribution networks to accept embedded generation is examined, and technical benefits/drawbacks arising from embedded generation, and the potential for uptake of embedded generation technologies are considered. The distribution network capacity and the potential uptake of embedded generation are compared, and possible solutions to overcome obstacles are suggested. (UK)

  18. Costs and benefits of embedded generation

    International Nuclear Information System (INIS)

    1999-11-01

    This project sought to evaluate the costs and benefits of embedded generation in the light of the UK government's consultation paper on the future of green generation, the government's aim to increase the levels of generation from renewable energy sources and cogeneration, the current Review of the Electricity Trading Arrangements, and the form of the Distribution Price Control. Definitions are given for embedded, centrally dispatched, and pooled generation, and licensed suppliers, and commercial and economic values. The commercial and economic value of embedded generation is examined in terms of generation prices, costs to electrical suppliers, losses (electrical, transmission, distribution), and effects on the national grid and distribution network. Diagrams showing the cost elements of trading through the Pool and the elements that are avoided by non-Pool embedded generator trading are presented

  19. Rural Embedded Assistants for Community Health (REACH) network: first-person accounts in a community-university partnership.

    Science.gov (United States)

    Brown, Louis D; Alter, Theodore R; Brown, Leigh Gordon; Corbin, Marilyn A; Flaherty-Craig, Claire; McPhail, Lindsay G; Nevel, Pauline; Shoop, Kimbra; Sterner, Glenn; Terndrup, Thomas E; Weaver, M Ellen

    2013-03-01

    Community research and action projects undertaken by community-university partnerships can lead to contextually appropriate and sustainable community improvements in rural and urban localities. However, effective implementation is challenging and prone to failure when poorly executed. The current paper seeks to inform rural community-university partnership practice through consideration of first-person accounts from five stakeholders in the Rural Embedded Assistants for Community Health (REACH) Network. The REACH Network is a unique community-university partnership aimed at improving rural health services by identifying, implementing, and evaluating innovative health interventions delivered by local caregivers. The first-person accounts provide an insider's perspective on the nature of collaboration. The unique perspectives identify three critical challenges facing the REACH Network: trust, coordination, and sustainability. Through consideration of the challenges, we identified several strategies for success. We hope readers can learn their own lessons when considering the details of our partnership's efforts to improve the delivery infrastructure for rural healthcare.

  20. Emergence of slow collective oscillations in neural networks with spike-timing dependent plasticity

    DEFF Research Database (Denmark)

    Mikkelsen, Kaare; Imparato, Alberto; Torcini, Alessandro

    2013-01-01

    The collective dynamics of excitatory pulse coupled neurons with spike timing dependent plasticity (STDP) is studied. The introduction of STDP induces persistent irregular oscillations between strongly and weakly synchronized states, reminiscent of brain activity during slow-wave sleep. We explain...

  1. Temporally coordinated spiking activity of human induced pluripotent stem cell-derived neurons co-cultured with astrocytes.

    Science.gov (United States)

    Kayama, Tasuku; Suzuki, Ikuro; Odawara, Aoi; Sasaki, Takuya; Ikegaya, Yuji

    2018-01-01

    In culture conditions, human induced-pluripotent stem cells (hiPSC)-derived neurons form synaptic connections with other cells and establish neuronal networks, which are expected to be an in vitro model system for drug discovery screening and toxicity testing. While early studies demonstrated effects of co-culture of hiPSC-derived neurons with astroglial cells on survival and maturation of hiPSC-derived neurons, the population spiking patterns of such hiPSC-derived neurons have not been fully characterized. In this study, we analyzed temporal spiking patterns of hiPSC-derived neurons recorded by a multi-electrode array system. We discovered that specific sets of hiPSC-derived neurons co-cultured with astrocytes showed more frequent and highly coherent non-random synchronized spike trains and more dynamic changes in overall spike patterns over time. These temporally coordinated spiking patterns are physiological signs of organized circuits of hiPSC-derived neurons and suggest benefits of co-culture of hiPSC-derived neurons with astrocytes. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Distributed Cerebellar Motor Learning; a Spike-Timing-Dependent Plasticity Model

    Directory of Open Access Journals (Sweden)

    Niceto Rafael Luque

    2016-03-01

    Full Text Available Deep cerebellar nuclei neurons receive both inhibitory (GABAergic) synaptic currents from Purkinje cells (within the cerebellar cortex) and excitatory (glutamatergic) synaptic currents from mossy fibres. Those two deep cerebellar nucleus inputs are also thought to be adaptive, embedding interesting properties in the framework of accurate movements. We show that distributed spike-timing-dependent plasticity (STDP) mechanisms located at different cerebellar sites (parallel fibres to Purkinje cells, mossy fibres to deep cerebellar nucleus cells, and Purkinje cells to deep cerebellar nucleus cells) in closed-loop simulations provide an explanation for the complex learning properties of the cerebellum in motor learning. Concretely, we propose a new mechanistic cerebellar spiking model. In this new model, deep cerebellar nuclei embed a dual functionality: deep cerebellar nuclei acting as a gain adaptation mechanism and as a facilitator for the slow memory consolidation at mossy fibres to deep cerebellar nucleus synapses. Equipping the cerebellum with excitatory (e-STDP) and inhibitory (i-STDP) mechanisms at deep cerebellar nuclei afferents allows the accommodation of synaptic memories that were formed at parallel fibres to Purkinje cells synapses and then transferred to mossy fibres to deep cerebellar nucleus synapses. These adaptive mechanisms also contribute to modulate the deep-cerebellar-nucleus-output firing rate (output gain modulation) towards optimising its working range.

  3. Precise-spike-driven synaptic plasticity: learning hetero-association of spatiotemporal spike patterns.

    Directory of Open Access Journals (Sweden)

    Qiang Yu

    Full Text Available A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.

  4. Precise-spike-driven synaptic plasticity: learning hetero-association of spatiotemporal spike patterns.

    Science.gov (United States)

    Yu, Qiang; Tang, Huajin; Tan, Kay Chen; Li, Haizhou

    2013-01-01

    A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.
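    The following is a discrete-time sketch in the spirit of the PSD rule as summarized above: the weight change is the desired-minus-actual output spike error gated by a presynaptic eligibility trace. The exact PSD kernel and neuron model of the paper are not reproduced; the learning rate, trace decay and toy spike trains are assumptions.

```python
# Discrete-time sketch of a PSD-like update: the (desired - actual) output
# spike error gates a presynaptic eligibility trace. Learning rate, trace
# decay and the toy spike trains are assumptions.
import numpy as np

def psd_like_update(w, pre_spikes, desired, actual, eta=0.01, tau=0.01, dt=0.001):
    """pre_spikes: (n_syn, T) 0/1 array; desired, actual: (T,) 0/1 arrays."""
    trace = np.zeros(pre_spikes.shape[0])
    for t in range(pre_spikes.shape[1]):
        trace += -trace * dt / tau + pre_spikes[:, t]    # eligibility trace
        err = desired[t] - actual[t]                     # +1 -> LTP, -1 -> LTD
        w = w + eta * err * trace
    return w

rng = np.random.default_rng(4)
pre = (rng.random((20, 500)) < 0.02).astype(float)       # 20 afferents, 500 time steps
desired = np.zeros(500); desired[[100, 300]] = 1         # target output spikes
actual = np.zeros(500); actual[[120, 300, 450]] = 1      # observed output spikes
print(psd_like_update(np.zeros(20), pre, desired, actual).round(3))
```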

  5. Six transformer based asymmetrical embedded Z-source inverters

    DEFF Research Database (Denmark)

    Wei, Mo; Poh Chiang, Loh; Chi, Jin

    2013-01-01

    Embedded/Asymmetrical embedded Z-source inverters were proposed to maintain smooth input current/voltage across the dc source and within the impedance network, while retaining the shoot-through feature used to boost the dc-link voltage without adding a bulky filter at the input side. This paper introduces a class of transformer-based asymmetrical embedded Z-source inverters which keep the smooth input current and voltage while achieving enhanced voltage boost capability. The presented inverters are verified experimentally with laboratory prototypes.

  6. Security Implications for Ultra-Low Power Configurable SoC FPAA Embedded Systems

    Directory of Open Access Journals (Sweden)

    Jennifer Hasler

    2018-06-01

    Full Text Available We discuss the impact of physical computing techniques on classifying network security issues for ultra-low power networked IoT devices. Physical computing approaches enable at least a factor of 1000 improvement in computational energy efficiency, empowering a new generation of local computational structures for embedded IoT devices. These techniques offer computational capability to address network security concerns. This paper begins the discussion of security opportunities for, and issues using, FPAA devices for small embedded IoT platforms. These FPAAs enable devices often utilized for low-power context-aware computation. Embedded FPAA devices have positive security attributes as well as potential vulnerabilities. FPAA devices can be part of the resulting secure computation, such as implementing unique functions. FPAA devices can be used to investigate the security of analog/mixed signal capabilities. The paper concludes by summarizing key improvements for secure ultra-low power embedded FPAA devices.

  7. Physical implementation of pair-based spike timing dependent plasticity

    International Nuclear Information System (INIS)

    Azghadi, M.R.; Al-Sarawi, S.; Iannella, N.; Abbott, D.

    2011-01-01

    Full text: Objective Spike-timing-dependent plasticity (STDP) is one of several plasticity rules which leads to learning and memory in the brain. STDP induces synaptic weight changes based on the spike timing of the pre- and post-synaptic neurons. A neural network which can mimic the adaptive capability of biological brains in the temporal domain requires the weight of single connections to be altered by spike timing. To physically realise this network in silicon, a large number of interconnected STDP circuits on the same substrate is required. This imposes two significant limitations in terms of power and area. To address these limitations, very large scale integrated circuit (VLSI) technology provides attractive features in terms of low power and small area requirements. An example is demonstrated by (Indiveri et al. 2006). The objective of this paper is to present a new implementation of the STDP circuit which demonstrates better power and area in comparison to previous implementations. Methods The proposed circuit uses complementary metal oxide semiconductor (CMOS) technology as depicted in Fig. 1. The synaptic weight can be stored on a capacitor and charging/discharging current can lead to potentiation and depression. HSpice simulation results demonstrate that the average power, peak power, and area of the proposed circuit have been reduced by 6, 8 and 15%, respectively, in comparison with Indiveri's implementation. These improvements naturally lead to packing more STDP circuits onto the same substrate, when compared to previous proposals. Hence, this new implementation is quite interesting for real-world large neural networks.
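    A behavioural (software) sketch of the pair-based STDP rule such a circuit implements, assuming the standard trace formulation: each synapse keeps a pre- and a post-synaptic trace, a post spike potentiates by the pre-trace and a pre spike depresses by the post-trace. Amplitudes and time constants are arbitrary choices, not the circuit's values.

```python
# Trace-based pair STDP: a post spike potentiates by the pre-trace, a pre
# spike depresses by the post-trace. Amplitudes and time constants arbitrary.
import numpy as np

def pair_stdp(pre, post, a_plus=0.01, a_minus=0.012,
              tau_plus=0.02, tau_minus=0.02, dt=0.001, w0=0.5):
    x = y = 0.0                      # pre- and post-synaptic traces
    w = w0
    for t in range(pre.size):
        x += -x * dt / tau_plus + pre[t]
        y += -y * dt / tau_minus + post[t]
        w += a_plus * x * post[t] - a_minus * y * pre[t]
    return w

pre = np.zeros(1000); post = np.zeros(1000)
pre[np.arange(50, 1000, 100)] = 1    # pre leads post by 5 ms in every cycle
post[np.arange(55, 1000, 100)] = 1
print(pair_stdp(pre, post))          # ends above the initial weight of 0.5
```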

  8. A stationary wavelet transform and a time-frequency based spike detection algorithm for extracellular recorded data.

    Science.gov (United States)

    Lieb, Florian; Stark, Hans-Georg; Thielemann, Christiane

    2017-06-01

    Spike detection from extracellular recordings is a crucial preprocessing step when analyzing neuronal activity. The decision whether a specific part of the signal is a spike or not is important for any subsequent preprocessing step, like spike sorting or burst detection, in order to reduce the classification of erroneously identified spikes. Many spike detection algorithms have already been suggested, all working reasonably well whenever the signal-to-noise ratio is large enough. When the noise level is high, however, these algorithms perform poorly. In this paper we present two new spike detection algorithms. The first is based on a stationary wavelet energy operator and the second is based on the time-frequency representation of spikes. Both algorithms are more reliable than all of the most commonly used methods. The performance of the algorithms is confirmed by using simulated data, resembling original data recorded from cortical neurons with multielectrode arrays. In order to demonstrate that the performance of the algorithms is not restricted to only one specific set of data, we also verify the performance using a simulated publicly available data set. We show that both proposed algorithms have the best performance among all tested methods, regardless of the signal-to-noise ratio in both data sets. This contribution will redound to the benefit of electrophysiological investigations of human cells. In particular, the spatial and temporal analysis of neural network communication is improved by using the proposed spike detection algorithms.
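    For orientation only, here is a simpler, related baseline rather than the paper's stationary wavelet energy operator: a smoothed Teager/nonlinear energy operator with a threshold set as a multiple of its mean. The sampling rate, threshold factor and synthetic spikes are assumptions.

```python
# Smoothed Teager/nonlinear energy operator (NEO) spike detection with a
# threshold set to k times the mean NEO; a simple baseline, not the paper's
# stationary wavelet operator. Sampling rate and spikes are synthetic.
import numpy as np

def neo_detect(x, fs, k=5.0, smooth_ms=1.0):
    x = np.asarray(x, dtype=float)
    neo = np.zeros_like(x)
    neo[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]     # Teager energy operator
    win = max(1, int(smooth_ms * 1e-3 * fs))
    neo = np.convolve(neo, np.ones(win) / win, mode="same")
    above = neo > k * neo.mean()
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1   # threshold-crossing onsets

fs = 24000
rng = np.random.default_rng(5)
trace = rng.normal(0, 1, fs)                      # 1 s of noise
for t0 in (3000, 9000, 17000):                    # add three crude biphasic spikes
    trace[t0:t0 + 12] += 10 * np.hanning(12) * np.sign(np.arange(12) - 6)
print(neo_detect(trace, fs))                      # onset indices near the inserted spikes
```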

  9. Coronavirus spike-receptor interactions

    NARCIS (Netherlands)

    Mou, H.

    2015-01-01

    Coronaviruses cause important diseases in humans and animals. Coronavirus infection starts with the virus binding with its spike proteins to molecules present on the surface of host cells that act as receptors. This spike-receptor interaction is highly specific and determines the virus’ cell, tissue

  10. A Model of Electrically Stimulated Auditory Nerve Fiber Responses with Peripheral and Central Sites of Spike Generation

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian

    2017-01-01

    A computational model of cat auditory nerve fiber (ANF) responses to electrical stimulation is presented. The model assumes that (1) there exist at least two sites of spike generation along the ANF and (2) both an anodic (positive) and a cathodic (negative) charge in isolation can evoke a spike. A single ANF is modeled as a network of two exponential integrate-and-fire point-neuron models, referred to as peripheral and central axons of the ANF. The peripheral axon is excited by the cathodic charge, inhibited by the anodic charge, and exhibits longer spike latencies than the central axon; the central axon is excited by the anodic charge, inhibited by the cathodic charge, and exhibits shorter spike latencies than the peripheral axon. The model also includes subthreshold and suprathreshold adaptive feedback loops which continuously modify the membrane potential and can account for effects...
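    A much-reduced sketch of one ingredient of such a model: a single exponential integrate-and-fire point neuron driven by a biphasic current pulse. It does not reproduce the two-site ANF model or its adaptive feedback loops; all parameter values and the sign convention are illustrative.

```python
# Single exponential integrate-and-fire (EIF) point neuron driven by a
# biphasic current pulse; a much-reduced stand-in for one "axon" of the
# two-site ANF model. All parameter values are illustrative, not fitted.
import numpy as np

def eif_spikes(I, dt=1e-5, C=1e-10, gL=1e-8, EL=-0.065, VT=-0.050,
               DT=0.002, Vth=-0.030, Vreset=-0.065):
    V, spike_times = EL, []
    for n, i_ext in enumerate(I):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) + i_ext) / C
        V += dt * dV
        if V >= Vth:                       # numerical spike cutoff, then reset
            spike_times.append(n * dt)
            V = Vreset
    return spike_times

dt = 1e-5
I = np.zeros(2000)
I[100:110] = 3e-8      # 100 us "cathodic" phase (sign convention is arbitrary)
I[110:120] = -3e-8     # 100 us "anodic" phase
print(eif_spikes(I, dt=dt))               # expect one spike shortly after the pulse
```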

  11. Synaptic Plasticity and Spike Synchronisation in Neuronal Networks

    Science.gov (United States)

    Borges, Rafael R.; Borges, Fernando S.; Lameu, Ewandson L.; Protachevicz, Paulo R.; Iarosz, Kelly C.; Caldas, Iberê L.; Viana, Ricardo L.; Macau, Elbert E. N.; Baptista, Murilo S.; Grebogi, Celso; Batista, Antonio M.

    2017-12-01

    Brain plasticity, also known as neuroplasticity, is a fundamental mechanism of neuronal adaptation in response to changes in the environment or due to brain injury. In this review, we show our results about the effects of synaptic plasticity on neuronal networks composed of Hodgkin-Huxley neurons. We show that the final topology of the evolved network depends crucially on the ratio between the strengths of the inhibitory and excitatory synapses. Excitation of the same order as inhibition reveals an evolved network that presents the rich-club phenomenon, well known to exist in the brain. For initial networks with considerably larger inhibitory strengths, we observe the emergence of a complex evolved topology, where neurons are sparsely connected to other neurons, also a typical topology of the brain. The presence of noise enhances the strength of both types of synapses, but only if the initial network has synapses of both natures with similar strengths. Finally, we show how the synchronous behaviour of the evolved network will reflect its evolved topology.

  12. Establishing a Statistical Link between Network Oscillations and Neural Synchrony.

    Directory of Open Access Journals (Sweden)

    Pengcheng Zhou

    2015-10-01

    Full Text Available Pairs of active neurons frequently fire action potentials or "spikes" nearly synchronously (i.e., within 5 ms of each other). This spike synchrony may occur by chance, based solely on the neurons' fluctuating firing patterns, or it may occur too frequently to be explicable by chance alone. When spike synchrony above chance levels is present, it may subserve computation for a specific cognitive process, or it could be an irrelevant byproduct of such computation. Either way, spike synchrony is a feature of neural data that should be explained. A point process regression framework has been developed previously for this purpose, using generalized linear models (GLMs). In this framework, the observed number of synchronous spikes is compared to the number predicted by chance under varying assumptions about the factors that affect each of the individual neuron's firing-rate functions. An important possible source of spike synchrony is network-wide oscillations, which may provide an essential mechanism of network information flow. To establish the statistical link between spike synchrony and network-wide oscillations, we have integrated oscillatory field potentials into our point process regression framework. We first extended a previously-published model of spike-field association and showed that we could recover phase relationships between oscillatory field potentials and firing rates. We then used this new framework to demonstrate the statistical relationship between oscillatory field potentials and spike synchrony in: (1) simulated neurons, (2) in vitro recordings of hippocampal CA1 pyramidal cells, and (3) in vivo recordings of neocortical V4 neurons. Our results provide a rigorous method for establishing a statistical link between network oscillations and neural synchrony.
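    A hedged sketch of the single-neuron spike-field piece of such an analysis: a Poisson GLM in which the cosine and sine of the field-potential phase enter as covariates, so that the fitted coefficients give a modulation depth and preferred phase. The simulated data and all numbers are assumptions; the full synchrony model of the paper is not reproduced.

```python
# Poisson GLM linking spiking to an oscillatory field potential: cos/sin of
# the LFP phase are covariates; the coefficients give modulation depth and
# preferred phase. Simulated, noise-free 8 Hz phase; all values are assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T, dt = 20000, 0.001                                   # 20 s in 1 ms bins
phase = 2 * np.pi * 8.0 * np.arange(T) * dt            # 8 Hz LFP phase
rate = np.exp(np.log(20 * dt) + 0.8 * np.cos(phase - 1.0))
spikes = rng.poisson(rate)

X = sm.add_constant(np.column_stack([np.cos(phase), np.sin(phase)]))
fit = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
b0, b_cos, b_sin = fit.params
print("modulation depth:", np.hypot(b_cos, b_sin))     # close to 0.8
print("preferred phase:", np.arctan2(b_sin, b_cos))    # close to 1.0 rad
```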

  13. On the genesis of spike-wave oscillations in a mean-field model of human thalamic and corticothalamic dynamics

    International Nuclear Information System (INIS)

    Rodrigues, Serafim; Terry, John R.; Breakspear, Michael

    2006-01-01

    In this Letter, the genesis of spike-wave activity, a hallmark of many generalized epileptic seizures, is investigated in a reduced mean-field model of human neural activity. Drawing upon brain modelling and dynamical systems theory, we demonstrate that the thalamic circuitry of the system is crucial for the generation of these abnormal rhythms, observing that inhibition from reticular nuclei and excitation from the cortical signal interplay to generate the spike-wave oscillation. The mechanism revealed provides an explanation of why approaches based on linear stability and Heaviside approximations to the activation function have failed to explain the phenomena of spike-wave behaviour in mean-field models. A mathematical understanding of this transition is a crucial step towards relating spiking network models and mean-field approaches to human brain modelling.

  14. Consensus-Based Sorting of Neuronal Spike Waveforms.

    Science.gov (United States)

    Fournier, Julien; Mueller, Christian M; Shein-Idelson, Mark; Hemberger, Mike; Laurent, Gilles

    2016-01-01

    Optimizing spike-sorting algorithms is difficult because sorted clusters can rarely be checked against independently obtained "ground truth" data. In most spike-sorting algorithms in use today, the optimality of a clustering solution is assessed relative to some assumption on the distribution of the spike shapes associated with a particular single unit (e.g., Gaussianity) and by visual inspection of the clustering solution followed by manual validation. When the spatiotemporal waveforms of spikes from different cells overlap, the decision as to whether two spikes should be assigned to the same source can be quite subjective, if it is not based on reliable quantitative measures. We propose a new approach, whereby spike clusters are identified from the most consensual partition across an ensemble of clustering solutions. Using the variability of the clustering solutions across successive iterations of the same clustering algorithm (template matching based on K-means clusters), we estimate the probability of spikes being clustered together and identify groups of spikes that are not statistically distinguishable from one another. Thus, we identify spikes that are most likely to be clustered together and therefore correspond to consistent spike clusters. This method has the potential advantage that it does not rely on any model of the spike shapes. It also provides estimates of the proportion of misclassified spikes for each of the identified clusters. We tested our algorithm on several datasets for which there exists a ground truth (simultaneous intracellular data), and show that it performs close to the optimum reached by a support vector machine trained on the ground truth. We also show that the estimated rate of misclassification matches the proportion of misclassified spikes measured from the ground truth data.
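    The consensus idea can be sketched as follows, using plain KMeans in place of the paper's template-matching pipeline: run the clustering many times, build a co-association matrix recording how often pairs of spikes are grouped together, and keep only consistently co-clustered groups. The toy features and thresholds are assumptions.

```python
# Consensus clustering sketch: repeat KMeans, count how often each pair of
# spikes lands in the same cluster (co-association matrix), keep pairs that
# are consistently grouped. Features and thresholds are toy choices.
import numpy as np
from sklearn.cluster import KMeans

def coassociation(features, n_clusters=3, n_runs=50, seed=0):
    n = features.shape[0]
    co = np.zeros((n, n))
    for r in range(n_runs):
        labels = KMeans(n_clusters=n_clusters, n_init=1,
                        random_state=seed + r).fit_predict(features)
        co += labels[:, None] == labels[None, :]
    return co / n_runs            # co[i, j] = fraction of runs grouping i with j

rng = np.random.default_rng(7)
feats = np.vstack([rng.normal(m, 0.4, size=(100, 2))       # 3 well-separated units
                   for m in ((0, 0), (3, 0), (0, 3))])
co = coassociation(feats)
print((co > 0.9).mean())          # about 1/3: pairs within the same unit
```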

  15. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits.

    Directory of Open Access Journals (Sweden)

    Danielle S Bassett

    2010-04-01

    Full Text Available Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
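    Rent's rule states that the number of connections E crossing a partition scales with the number of elements N inside it as E ≈ k·N^p. Below is a minimal sketch of estimating the Rent exponent p by a log-log least-squares fit on synthetic partition data; the data and the function name are assumptions.

```python
# Estimate the Rent exponent p in E = k * N**p by a log-log least-squares fit
# on synthetic partition data generated with p = 0.75 and k = 4.
import numpy as np

def fit_rent_exponent(nodes, edges):
    p, logk = np.polyfit(np.log(nodes), np.log(edges), 1)   # slope = Rent exponent
    return p, np.exp(logk)

rng = np.random.default_rng(8)
N = np.array([8, 16, 32, 64, 128, 256, 512])                # partition sizes
E = 4 * N ** 0.75 * rng.lognormal(0, 0.1, N.size)           # boundary-crossing edges
p, k = fit_rent_exponent(N, E)
print(round(p, 2), round(k, 1))                             # close to 0.75 and 4
```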

  16. Growth-expectations among women entrepreneurs: embedded in networks and culture in Algeria, Morocco, Tunisia and in Belgium and France

    DEFF Research Database (Denmark)

    Cheraghi, Maryam; Setti, Zakia; Schøtt, Thomas

    2014-01-01

    An entrepreneur usually has an expectation for the firm, expecting expansion, stability or contraction. Expectation is influenced by the entrepreneur's attributes, but expectation is also embedded in the micro-environment of networking and the macro-environment of culture. Traditional culture and secular-rational culture differ in roles for women, which influence women entrepreneurs' networking and expectations. The design compares cultures, with data from three traditional societies, Algeria, Morocco and Tunisia, and two secular-rational societies, France and Belgium, surveyed in the Global Entrepreneurship Monitor, randomly sampling 39,336 women, including 2,306 entrepreneurs. Analyses show that women entrepreneurs have growth-expectations based on their background and increased by their competence and opportunity-motive, which also promote business networks around their firms. Formation...

  17. Visually Evoked Spiking Evolves While Spontaneous Ongoing Dynamics Persist

    Science.gov (United States)

    Huys, Raoul; Jirsa, Viktor K.; Darokhan, Ziauddin; Valentiniene, Sonata; Roland, Per E.

    2016-01-01

    Neurons in the primary visual cortex spontaneously spike even when there are no visual stimuli. It is unknown whether the spiking evoked by visual stimuli is just a modification of the spontaneous ongoing cortical spiking dynamics or whether the spontaneous spiking state disappears and is replaced by evoked spiking. This study of laminar recordings of spontaneous spiking and visually evoked spiking of neurons in the ferret primary visual cortex shows that the spiking dynamics does not change: the spontaneous spiking as well as evoked spiking is controlled by a stable and persisting fixed point attractor. Its existence guarantees that evoked spiking return to the spontaneous state. However, the spontaneous ongoing spiking state and the visual evoked spiking states are qualitatively different and are separated by a threshold (separatrix). The functional advantage of this organization is that it avoids the need for a system reorganization following visual stimulation, and impedes the transition of spontaneous spiking to evoked spiking and the propagation of spontaneous spiking from layer 4 to layers 2–3. PMID:26778982

  18. Density-dependence of functional spiking networks in vitro

    Energy Technology Data Exchange (ETDEWEB)

    Ham, Michael I [Los Alamos National Laboratory]; Gintautas, Vadas [Los Alamos National Laboratory]; Rodriguez, Marko A [Los Alamos National Laboratory]; Bettencourt, Luis M A [Los Alamos National Laboratory]; Bennett, Ryan [UNIV OF NORTH TEXAS]; Santa Maria, Cara L [UNIV OF NORTH TEXAS]

    2008-01-01

    During development, the mammalian brain differentiates into specialized regions with unique functional abilities. While many factors contribute to this functional specialization, we explore the effect neuronal density can have on neuronal interactions. Two types of networks, dense (50,000 neurons and glia support cells) and sparse (12,000 neurons and glia support cells), are studied. A competitive first response model is applied to construct activation graphs that represent pairwise neuronal interactions. By observing the evolution of these graphs during development in vitro we observe that dense networks form activation connections earlier than sparse networks, and link-entropy analysis of the resulting dense activation graphs reveals that balanced directional connections dominate. Information theoretic measures reveal in addition that early functional information interactions (of order 3) are synergetic in both dense and sparse networks. However, during development in vitro, such interactions become redundant in dense, but not sparse networks. Large values of activation graph link-entropy correlate strongly with redundant ensembles observed in the dense networks. Results demonstrate differences between dense and sparse networks in terms of informational groups, pairwise relationships, and activation graphs. These differences suggest that variations in cell density may result in different functional specialization of nervous system tissue also in vivo.

  19. The race to learn: spike timing and STDP can coordinate learning and recall in CA3.

    Science.gov (United States)

    Nolan, Christopher R; Wyeth, Gordon; Milford, Michael; Wiles, Janet

    2011-06-01

    The CA3 region of the hippocampus has long been proposed as an autoassociative network performing pattern completion on known inputs. The dentate gyrus (DG) region is often proposed as a network performing the complementary function of pattern separation. Neural models of pattern completion and separation generally designate explicit learning phases to encode new information and assume an ideal fixed threshold at which to stop learning new patterns and begin recalling known patterns. Memory systems are significantly more complex in practice, with the degree of memory recall depending on context-specific goals. Here, we present our spike-timing separation and completion (STSC) model of the entorhinal cortex (EC), DG, and CA3 network, ascribing to each region a role similar to that in existing models but adding a temporal dimension by using a spiking neural network. Simulation results demonstrate that (a) spike-timing dependent plasticity in the EC-CA3 synapses provides a pattern completion ability without recurrent CA3 connections, (b) the race between activation of CA3 cells via EC-CA3 synapses and activation of the same cells via DG-CA3 synapses distinguishes novel from known inputs, and (c) modulation of the EC-CA3 synapses adjusts the learned versus test input similarity required to evoke a direct CA3 response prior to any DG activity, thereby adjusting the pattern completion threshold. These mechanisms suggest that spike timing can arbitrate between learning and recall based on the novelty of each individual input, ensuring control of the learn-recall decision resides in the same subsystem as the learned memories themselves. The proposed modulatory signal does not override this decision but biases the system toward either learning or recall. The model provides an explanation for empirical observations that a reduction in novelty produces a corresponding reduction in the latency of responses in CA3 and CA1. Copyright © 2010 Wiley-Liss, Inc.

  20. Location and optimization analysis of capillary tube network embedded in active tuning building wall

    International Nuclear Information System (INIS)

    Niu, Fuxin; Yu, Yuebin

    2016-01-01

    In this study, a building wall with a thermal tuning function is further investigated. This design turns the building wall from a passive thermal system into an active system. A capillary tube network is installed inside the wall to manipulate the thermodynamics and realize more flexibility and potential of the wall. This novel building wall structure performs efficiently in terms of building load reduction and supplementary heating and cooling, and the structure is convenient for applying low-grade or natural energy with a wider temperature range. The capillary tube network's location inside the wall greatly impacts the thermal and energy performance of the building wall. The effects of three locations (external, middle and internal side) are analyzed. The results indicate that the internal wall surface temperature can be neutralized from the ambient environment when the embedded tubes are fed with thermal water. The wall can work with a wide range of water temperatures and the optimal location of the tube network is relatively constant in different modes. The power benefit of the wall changes from 2 W to 39 W as the outdoor air temperature changes, and is higher in summer than in winter. - Highlights: • A building wall with a tuning function is proposed using a capillary pipe network. • Low-grade thermal water can be used to actively manipulate the thermal mass. • Location of the capillary network is investigated to maximize the performance. • The innovation can potentially lower the grade of energy use in buildings.
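    As a heavily idealised illustration of why the tube location matters, the sketch below treats the capillary plane as an isothermal layer at the water temperature and computes the indoor-facing surface temperature from the series thermal resistances between that plane and the room, ignoring losses to the outdoor side. All layer thicknesses, conductivities and film coefficients are assumptions, not values from the study.

```python
# Idealised steady state: the capillary plane is held at the water temperature
# and the indoor-facing surface temperature follows from the series thermal
# resistances between that plane and the room (outdoor-side losses ignored).
# Layer thicknesses, conductivities and the film coefficient are assumptions.
def inner_surface_temp(t_water, t_indoor, layers, h_indoor=8.0):
    """layers: list of (thickness_m, conductivity_W_per_mK) towards the room."""
    r_layers = sum(d / k for d, k in layers)         # conduction resistances, m2K/W
    r_film = 1.0 / h_indoor                          # indoor surface film resistance
    q = (t_water - t_indoor) / (r_layers + r_film)   # heat flux into the room, W/m2
    return t_indoor + q * r_film, q

# Tubes mid-wall: 60 mm concrete plus 15 mm plaster between tubes and the room
t_surface, q = inner_surface_temp(t_water=30.0, t_indoor=20.0,
                                  layers=[(0.060, 1.7), (0.015, 0.7)])
print(round(t_surface, 1), "degC,", round(q, 1), "W/m2")
```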