WorldWideScience

Sample records for neurons parallels learning

  1. Parallel olfactory processing in the honey bee brain: odor learning and generalization under selective lesion of a projection neuron tract

    Directory of Open Access Journals (Sweden)

    Julie Carcaud

    2016-01-01

    The function of parallel neural processing is a fundamental problem in Neuroscience, as it is found across sensory modalities and evolutionary lineages, from insects to humans. Recently, parallel processing has attracted increased attention in the olfactory domain, with the demonstration in both insects and mammals that different populations of second-order neurons encode and/or process odorant information differently. Among insects, Hymenoptera present a striking olfactory system with a clear neural dichotomy from the periphery to higher-order centers, based on two main tracts of second-order (projection) neurons: the medial and lateral antennal lobe tracts (m-ALT and l-ALT). To unravel the functional role of these two pathways, we combined specific lesions of the m-ALT tract with behavioral experiments, using the classical conditioning of the proboscis extension response (PER conditioning). Lesioned and intact bees had to learn to associate an odorant (1-nonanol) with sucrose. Then the bees were subjected to a generalization procedure with a range of odorants differing in terms of their carbon chain length or functional group. We show that m-ALT lesion strongly affects acquisition of an odor-sucrose association. However, lesioned bees that still learned the association showed a normal gradient of decreasing generalization responses to increasingly dissimilar odorants. Generalization responses could be predicted to some extent by in vivo calcium imaging recordings of l-ALT neurons. The m-ALT pathway therefore seems necessary for normal classical olfactory conditioning performance.

  2. Parallel Olfactory Processing in the Honey Bee Brain: Odor Learning and Generalization under Selective Lesion of a Projection Neuron Tract.

    Science.gov (United States)

    Carcaud, Julie; Giurfa, Martin; Sandoz, Jean Christophe

    2015-01-01

    The function of parallel neural processing is a fundamental problem in Neuroscience, as it is found across sensory modalities and evolutionary lineages, from insects to humans. Recently, parallel processing has attracted increased attention in the olfactory domain, with the demonstration in both insects and mammals that different populations of second-order neurons encode and/or process odorant information differently. Among insects, Hymenoptera present a striking olfactory system with a clear neural dichotomy from the periphery to higher-order centers, based on two main tracts of second-order (projection) neurons: the medial and lateral antennal lobe tracts (m-ALT and l-ALT). To unravel the functional role of these two pathways, we combined specific lesions of the m-ALT tract with behavioral experiments, using the classical conditioning of the proboscis extension response (PER conditioning). Lesioned and intact bees had to learn to associate an odorant (1-nonanol) with sucrose. Then the bees were subjected to a generalization procedure with a range of odorants differing in terms of their carbon chain length or functional group. We show that m-ALT lesion strongly affects acquisition of an odor-sucrose association. However, lesioned bees that still learned the association showed a normal gradient of decreasing generalization responses to increasingly dissimilar odorants. Generalization responses could be predicted to some extent by in vivo calcium imaging recordings of l-ALT neurons. The m-ALT pathway therefore seems necessary for normal classical olfactory conditioning performance.

  3. Parallel network simulations with NEURON.

    Science.gov (United States)

    Migliore, M; Cannia, C; Lytton, W W; Markram, Henry; Hines, M L

    2006-10-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2,000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored.
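    The integration scheme described above can be illustrated with a toy sketch (plain Python, not NEURON's actual ParallelContext API; the periodic cell model, firing periods and delay value are invented): each "processor" integrates its subnet independently over an interval equal to the minimum interprocessor connection delay, after which all pending spikes are exchanged.

```python
# Toy illustration of bulk-synchronous spike exchange: each rank integrates
# alone for MIN_DELAY time steps, then spikes are exchanged all-to-all.

MIN_DELAY = 5      # minimum presynaptic-to-postsynaptic delay (time steps)
T_STOP = 100

class Subnet:
    def __init__(self, period):
        self.period = period        # this toy cell fires every `period` steps
        self.out = []               # spikes generated in the current interval
        self.delivered = []         # arrival times of spikes received

    def integrate(self, t0, t1):
        # stand-in for integrating the subnet's equations over [t0, t1)
        for t in range(t0, t1):
            if t % self.period == 0:
                self.out.append(t)

def run(subnets):
    t = 0
    while t < T_STOP:
        for s in subnets:                      # each rank integrates alone
            s.integrate(t, t + MIN_DELAY)
        for i, s in enumerate(subnets):        # all-to-all spike exchange
            for spike_t in s.out:
                for j, other in enumerate(subnets):
                    if j != i:
                        other.delivered.append(spike_t + MIN_DELAY)
            s.out = []
        t += MIN_DELAY
    return subnets
```

    Because no spike can take effect sooner than MIN_DELAY after it is generated, each subnet can safely integrate a whole interval before communicating, which is what keeps communication overhead low in the benchmarks reported above.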

  4. Neuronal avalanches and learning

    Energy Technology Data Exchange (ETDEWEB)

    Arcangelis, Lucilla de, E-mail: dearcangelis@na.infn.it [Department of Information Engineering and CNISM, Second University of Naples, 81031 Aversa (Italy)

    2011-05-01

    Networks of living neurons represent one of the most fascinating systems in biology. While the physical and chemical mechanisms underlying the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity of the network, namely its capacity to adapt and evolve depending on the level of activity. This plastic ability is nowadays believed to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently been shown to share features with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by power-law distributions of size and duration, features found in other problems in the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as a function of the strength of plastic adaptation. Any rule could be learned provided that the plastic adaptation is sufficiently slow.
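    As a generic point of comparison for the XOR task mentioned above, here is a minimal sketch of a network learning XOR. Note that it uses standard backpropagation on a small sigmoid network, not the paper's avalanche model with negative-feedback plasticity, and the layer size, learning rate and seed are arbitrary choices.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(hidden=4, lr=0.5, epochs=2000, seed=1):
    rng = random.Random(seed)
    # weights for a 2-input, `hidden`-unit, 1-output network (last column = bias)
    w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    errors = []
    for _ in range(epochs):
        sq = 0.0
        for (a, b), y in data:
            x = (a, b, 1.0)
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
            hb = h + [1.0]
            out = sigmoid(sum(w * hi for w, hi in zip(w2, hb)))
            err = y - out
            sq += err * err
            d_out = err * out * (1.0 - out)
            for i in range(hidden):           # hidden-layer update (uses old w2)
                d_h = d_out * w2[i] * h[i] * (1.0 - h[i])
                for j in range(3):
                    w1[i][j] += lr * d_h * x[j]
            for i in range(hidden + 1):       # output-layer update
                w2[i] += lr * d_out * hb[i]
        errors.append(sq)                     # summed squared error per epoch
    return errors
```

    The paper's point that "any rule could be learned provided that the plastic adaptation is sufficiently slow" has a loose analogue here: too large a learning rate makes the error trajectory unstable.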

  5. Learning Parallel Computations with ParaLab

    OpenAIRE

    Kozinov, E.; Shtanyuk, A.

    2015-01-01

    In this paper, we present the ParaLab teachware system, which can be used for learning parallel computation methods. ParaLab provides tools for simulating multiprocessor computational systems with various network topologies, for carrying out computational experiments in simulation mode, and for evaluating the efficiency of parallel computation methods. The visual presentation of the parallel computations taking place in the computational experiments is the key feature ...

  6. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

    This report reflects my work on the parallelization of TMVA Machine Learning Algorithms, integrated into the ROOT Data Analysis Framework, during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the resulting changes in execution time as the number of workers varies.

  7. Parallel and Mixed Hardware Implementation of Artificial Neuron Network on the FPGA Platform

    Directory of Open Access Journals (Sweden)

    ATIBI Mohamed

    2014-10-01

    Most applications in various fields (automotive, robotics, medical, ...) take advantage of the proven performance of artificial neural networks to solve their most complex problems. The architecture chosen for implementation is the multilayer perceptron, which uses backpropagation as its learning algorithm. This article presents a modular hardware implementation of the multilayer perceptron architecture of an artificial neural network (ANN) on the FPGA platform, according to two models (parallel and mixed hardware implementations), and a comparison between these two implementations in terms of hardware resources and execution time. Both implementations are based on the proposed module of a formal neuron with the sigmoid activation function.

  8. Parallel Volunteer Learning during Youth Programs

    Science.gov (United States)

    Lesmeister, Marilyn K.; Green, Jeremy; Derby, Amy; Bothum, Candi

    2012-01-01

    Lack of time is a hindrance for volunteers to participate in educational opportunities, yet volunteer success in an organization is tied to the orientation and education they receive. Meeting diverse educational needs of volunteers can be a challenge for program managers. Scheduling a Volunteer Learning Track for chaperones that is parallel to a…

  9. Automating parallel implementation of neural learning algorithms.

    Science.gov (United States)

    Rana, O F

    2000-06-01

    Neural learning algorithms generally involve a number of identical processing units, which are fully or partially connected, and involve an update function, such as a ramp, a sigmoid or a Gaussian function. Some variations also exist, where units can be heterogeneous, or where an alternative update technique is employed, such as a pulse stream generator. Associated with connections are numerical values that must be adjusted using a learning rule, and are dictated by learning-rule-specific parameters such as momentum, a learning rate or a temperature. Usually, neural learning algorithms involve local updates, and global interaction between units is often discouraged, except in instances where units are fully connected or involve synchronous updates. In all of these instances, concurrency within a neural algorithm cannot be fully exploited without a suitable implementation strategy. A design scheme is described for translating a neural learning algorithm from inception to implementation on a parallel machine using PVM or MPI libraries, or onto programmable logic such as FPGAs. A designer must first describe the algorithm using a specialised Neural Language, from which a Petri net (PN) model is constructed automatically for verification and for building a performance model. The PN model can be used to study issues such as synchronisation points, resource sharing and concurrency within a learning rule. Specialised constructs are provided to enable a designer to express various aspects of a learning rule, such as the number and connectivity of neural nodes, the interconnection strategies, and the information flows required by the learning algorithm. A scheduling and mapping strategy is then used to translate this PN model onto a multiprocessor template. We demonstrate our technique using Kohonen and backpropagation learning rules, implemented on a loosely coupled workstation cluster and on a dedicated parallel machine with PVM libraries.

  10. Pigeons acquire multiple categories in parallel via associative learning: a parallel to human word learning?

    Science.gov (United States)

    Wasserman, Edward A; Brooks, Daniel I; McMurray, Bob

    2015-03-01

    Might there be parallels between category learning in animals and word learning in children? To examine this possibility, we devised a new associative learning technique for teaching pigeons to sort 128 photographs of objects into 16 human language categories. We found that pigeons learned all 16 categories in parallel, they perceived the perceptual coherence of the different object categories, and they generalized their categorization behavior to novel photographs from the training categories. More detailed analyses of the factors that predict trial-by-trial learning implicated a number of factors that may shape learning. First, we found considerable trial-by-trial dependency of pigeons' categorization responses, consistent with several recent studies that invoke this dependency to claim that humans acquire words via symbolic or inferential mechanisms; this finding suggests that such dependencies may also arise in associative systems. Second, our trial-by-trial analyses divulged seemingly irrelevant aspects of the categorization task, like the spatial location of the report responses, which influenced learning. Third, those trial-by-trial analyses also supported the possibility that learning may be determined both by strengthening correct stimulus-response associations and by weakening incorrect stimulus-response associations. The parallel between all these findings and important aspects of human word learning suggests that associative learning mechanisms may play a much stronger part in complex human behavior than is commonly believed.
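    The third finding above — learning driven both by strengthening correct and weakening incorrect stimulus-response associations — can be sketched as a toy update rule. The stimuli, category names, initial strengths and learning rates below are invented for illustration and are not taken from the paper.

```python
def train_associations(stimuli, correct, n_passes=20, up=0.2, down=0.2):
    responses = sorted(set(correct.values()))
    # start every stimulus-response association at the same weak strength
    w = {(s, r): 0.1 for s in stimuli for r in responses}
    for _ in range(n_passes):
        for s in stimuli:
            # respond with the currently strongest association
            r = max(responses, key=lambda resp: w[(s, resp)])
            if r == correct[s]:
                w[(s, r)] += up * (1.0 - w[(s, r)])        # strengthen correct link
            else:
                w[(s, r)] -= down * w[(s, r)]              # weaken the erroneous link
                w[(s, correct[s])] += up * (1.0 - w[(s, correct[s])])
    return w

stimuli = ["dog1", "dog2", "car1", "car2"]
correct = {"dog1": "dog", "dog2": "dog", "car1": "car", "car2": "car"}
weights = train_associations(stimuli, correct)
```

    After a few passes every stimulus is sorted into its category by the strongest association, with no symbolic or inferential machinery involved, which is the paper's broader point about associative mechanisms.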

  11. A parallel framework for Bayesian reinforcement learning

    Science.gov (United States)

    Barrett, Enda; Duggan, Jim; Howley, Enda

    2014-01-01

    Solving a finite Markov decision process using techniques from dynamic programming, such as value or policy iteration, requires a complete model of the environmental dynamics. The distribution of rewards, transition probabilities, states and actions all need to be fully observable, discrete and complete. For many problem domains, a complete model containing a full representation of the environmental dynamics may not be readily available. Bayesian reinforcement learning (RL) is a technique devised to make better use of the information observed through learning than simply computing Q-functions. However, this approach can often require extensive experience in order to build up an accurate representation of the true values. To address this issue, this paper proposes a method for parallelising a Bayesian RL technique, aimed at reducing the time it takes to approximate the missing model. We demonstrate the technique on learning next-state transition probabilities without prior knowledge. The approach is general enough to approximate any probabilistically driven component of the model. The solution involves multiple learning agents learning in parallel on the same task. Agents share probability density estimates amongst each other in an effort to speed up convergence to the true values.
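    A minimal sketch of the sharing idea (our own toy construction, not the paper's implementation; the two-state transition distribution, agent count and sample sizes are invented): agents keep Dirichlet-style counts of observed next-state transitions for the same (state, action) pair, and pooling those counts yields a posterior estimate based on everyone's experience.

```python
import random

TRUE_P = {"s1": 0.7, "s2": 0.3}             # unknown transition distribution

def sample_next_state(rng):
    return "s1" if rng.random() < TRUE_P["s1"] else "s2"

def run_agents(n_agents=4, steps=500, seed=0):
    rng = random.Random(seed)
    counts = [{"s1": 1, "s2": 1} for _ in range(n_agents)]  # Dirichlet(1,1) prior
    for agent in counts:                     # each agent explores independently
        for _ in range(steps):
            agent[sample_next_state(rng)] += 1
    pooled = {"s1": 0, "s2": 0}              # sharing step: merge all counts
    for agent in counts:
        for s in pooled:
            pooled[s] += agent[s]
    total = sum(pooled.values())
    return {s: pooled[s] / total for s in pooled}
```

    With n agents pooling their counts, the merged estimate is based on n times as many observations as any single agent could gather in the same wall-clock time, which is the convergence speed-up the paper targets.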

  12. Parallelization of the ROOT Machine Learning Methods

    CERN Document Server

    Vakilipourtakalou, Pourya

    2016-01-01

    Today computation is an inseparable part of scientific research, especially in Particle Physics, where classification problems arise, such as discriminating signals from backgrounds originating from particle collisions. Monte Carlo simulations can be used to generate known data sets of signals and backgrounds based on theoretical physics. The aim of Machine Learning is to train algorithms on a known data set and then apply the trained algorithms to unknown data sets. The most common framework for data analysis in Particle Physics is ROOT; in order to make Machine Learning methods available there, the Toolkit for Multivariate Data Analysis (TMVA) has been added to ROOT. The major consideration in this report is the parallelization of some TMVA methods, especially Cross-Validation and BDT.

  13. PSEE: A Tool for Parallel Systems Learning

    OpenAIRE

    E. Luque; R. Suppi; Sorribes, J.; E. Cesar; J. Falguera; Serrano, M.

    2012-01-01

    Programs for parallel computers with distributed memory are difficult to write, understand, evaluate and debug. The design and performance evaluation of such algorithms is much more complex than for conventional sequential ones. The technical know-how necessary for the implementation of parallel systems is already available, but a critical problem lies in the handling of complexity. In parallel distributed-memory systems, performance is highly influenced by factors such as the interconnection scheme, granula...

  14. Neuronal Rac1 is required for learning-evoked neurogenesis

    DEFF Research Database (Denmark)

    Haditsch, Ursula; Anderson, Matthew P; Freewoman, Julia

    2013-01-01

    neurons for normal synaptic plasticity in vivo, and here we show that selective loss of neuronal Rac1 also impairs the learning-evoked increase in neurogenesis in the adult mouse hippocampus. Earlier work has indicated that experience elevates the abundance of adult-born neurons in the hippocampus...... primarily by enhancing the survival of neurons produced just before the learning event. Loss of Rac1 in mature projection neurons did reduce learning-evoked neurogenesis but, contrary to our expectations, these effects were not mediated by altering the survival of young neurons in the hippocampus. Instead......, loss of neuronal Rac1 activation selectively impaired a learning-evoked increase in the proliferation and accumulation of neural precursors generated during the learning event itself. This indicates that experience-induced alterations in neurogenesis can be mechanistically resolved into two effects: (1...

  15. Parallel optical control of spatiotemporal neuronal spike activity using high-frequency digital light processing technology

    Directory of Open Access Journals (Sweden)

    Jason Jerome

    2011-08-01

    Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses Digital Light Processing (DLP) technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 µm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber, providing access to a large area (2.76 mm × 2.07 mm) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.

  16. Parallel optical control of spatiotemporal neuronal spike activity using high-speed digital light processing.

    Science.gov (United States)

    Jerome, Jason; Foehring, Robert C; Armstrong, William E; Spain, William J; Heck, Detlef H

    2011-01-01

    Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses digital light processing technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 μm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber providing access to a large area (2.76 mm × 2.07 mm) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.

  17. A new supervised learning algorithm for spiking neurons.

    Science.gov (United States)

    Xu, Yan; Zeng, Xiaoqin; Zhong, Shuiming

    2013-06-01

    The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only running time is considered, supervised learning for a spiking neuron is equivalent to distinguishing the times of desired output spikes from all other times during the running of the neuron by adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning into a classification problem and then solves the problem by using the perceptron learning rule. The experimental results show that the proposed method has higher learning accuracy and efficiency than the existing learning methods, so it is more powerful for solving complex and real-time problems.
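    The classification view described above can be sketched as follows. This is a simplified stand-in, not the letter's exact method: each time step becomes a training example labeled spike/no-spike, the threshold is learned as a bias with the ordinary perceptron rule, and the input patterns are invented toy data.

```python
def train_spiking_perceptron(inputs, desired, lr=0.1, max_epochs=100):
    # inputs[t]: presynaptic activity at time step t; desired[t]: 1 = should spike
    n = len(inputs[0])
    w = [0.0] * n
    theta = 0.0                                   # firing threshold (learned bias)
    for _ in range(max_epochs):
        errors = 0
        for x, y in zip(inputs, desired):
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) - theta > 0 else 0
            if out != y:                          # perceptron rule, mistakes only
                errors += 1
                for i in range(n):
                    w[i] += lr * (y - out) * x[i]
                theta -= lr * (y - out)
        if errors == 0:                           # every time step classified
            break
    return w, theta

# toy spike trains: the neuron should fire exactly when inputs 0 and 1 coincide
inputs = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (0, 0, 1), (1, 1, 1)]
desired = [1, 0, 0, 0, 1]
w, theta = train_spiking_perceptron(inputs, desired)
```

    Because this toy labeling is linearly separable, the perceptron convergence theorem guarantees the loop terminates with every desired spike time and every silent time classified correctly.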

  18. Design of silicon brains in the nano-CMOS era: spiking neurons, learning synapses and neural architecture optimization.

    Science.gov (United States)

    Cassidy, Andrew S; Georgiou, Julius; Andreou, Andreas G

    2013-09-01

    We present a design framework for neuromorphic architectures in the nano-CMOS era. Our approach to the design of spiking neurons and STDP learning circuits relies on parallel computational structures where neurons are abstracted as digital arithmetic logic units and communication processors. Using this approach, we have developed arrays of silicon neurons that scale to millions of neurons in a single state-of-the-art Field Programmable Gate Array (FPGA). We demonstrate the validity of the design methodology through the implementation of cortical development in a circuit of spiking neurons, STDP synapses, and neural architecture optimization. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Synaptic Plasticity onto Dopamine Neurons Shapes Fear Learning.

    Science.gov (United States)

    Pignatelli, Marco; Umanah, George Kwabena Essien; Ribeiro, Sissi Palma; Chen, Rong; Karuppagounder, Senthilkumar Senthil; Yau, Hau-Jie; Eacker, Stephen; Dawson, Valina Lynn; Dawson, Ted Murray; Bonci, Antonello

    2017-01-18

    Fear learning is a fundamental behavioral process that requires dopamine (DA) release. Experience-dependent synaptic plasticity occurs on DA neurons while an organism is engaged in aversive experiences. However, whether synaptic plasticity onto DA neurons is causally involved in aversion learning is unknown. Here, we show that a stress priming procedure enhances fear learning by engaging VTA synaptic plasticity. Moreover, we took advantage of the ability of the ATPase Thorase to regulate the internalization of AMPA receptors (AMPARs) in order to selectively manipulate glutamatergic synaptic plasticity on DA neurons. Genetic ablation of Thorase in DAT(+) neurons produced increased AMPAR surface expression and function that lead to impaired induction of both long-term depression (LTD) and long-term potentiation (LTP). Strikingly, animals lacking Thorase in DAT(+) neurons expressed greater associative learning in a fear conditioning paradigm. In conclusion, our data provide a novel, causal link between synaptic plasticity onto DA neurons and fear learning. Published by Elsevier Inc.

  20. A causal link between prediction errors, dopamine neurons and learning.

    Science.gov (United States)

    Steinberg, Elizabeth E; Keiflin, Ronald; Boivin, Josiah R; Witten, Ilana B; Deisseroth, Karl; Janak, Patricia H

    2013-07-01

    Situations in which rewards are unexpectedly obtained or withheld represent opportunities for new learning. Often, this learning includes identifying cues that predict reward availability. Unexpected rewards strongly activate midbrain dopamine neurons. This phasic signal is proposed to support learning about antecedent cues by signaling discrepancies between actual and expected outcomes, termed a reward prediction error. However, it is unknown whether dopamine neuron prediction error signaling and cue-reward learning are causally linked. To test this hypothesis, we manipulated dopamine neuron activity in rats in two behavioral procedures, associative blocking and extinction, that illustrate the essential function of prediction errors in learning. We observed that optogenetic activation of dopamine neurons concurrent with reward delivery, mimicking a prediction error, was sufficient to cause long-lasting increases in cue-elicited reward-seeking behavior. Our findings establish a causal role for temporally precise dopamine neuron signaling in cue-reward learning, bridging a critical gap between experimental evidence and influential theoretical frameworks.

  1. Fast reversible learning based on neurons functioning as anisotropic multiplex hubs

    Science.gov (United States)

    Vardi, Roni; Goldental, Amir; Sheinin, Anton; Sardi, Shira; Kanter, Ido

    2017-05-01

    Neural networks are composed of neurons and synapses, which are responsible for learning in a slow adaptive dynamical process. Here we experimentally show that neurons act like independent anisotropic multiplex hubs, which relay and mute incoming signals according to their input directions. Theoretically, the observed information routing enriches the computational capabilities of neurons by allowing, for instance, equalization among different information routes in the network, as well as high-frequency transmission of complex time-dependent signals constructed via several parallel routes. In addition, such hubs adaptively eliminate very noisy neurons from the dynamics of the network, preventing the masking of information transmission. The timescales for these features are several seconds at most, as opposed to the imprinting of information by synaptic plasticity, a process which takes minutes or more. These results open the horizon to an understanding of fast and adaptive learning in higher cognitive brain functions.

  2. Parallel expression of synaptophysin and evoked neurotransmitter release during development of cultured neurons

    DEFF Research Database (Denmark)

    Ehrhart-Bornstein, M; Treiman, M; Hansen, Gert Helge;

    1991-01-01

    and neurotransmitter release were measured in each of the culture types as a function of development for up to 8 days in vitro, using the same batch of cells for both sets of measurements to obtain optimal comparisons. The content and the distribution of synaptophysin in the developing cells were assessed...... by quantitative immunoblotting and light microscope immunocytochemistry, respectively. In both cell types, a close parallelism was found between the temporal pattern of development in synaptophysin expression and neurotransmitter release. This temporal pattern differed between the two types of neurons....... The cerebral cortex neurons showed a biphasic time course of increase in synaptophysin content, paralleled by a biphasic pattern of development in their ability to release [3H]GABA in response to depolarization by glutamate or elevated K+ concentrations. In contrast, a monophasic, approximately linear increase...

  3. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, the RBM is a deep learning algorithm used in this project, whose time performance was improved through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to harness the massively parallel power of NVIDIA GPUs. GPU development over the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural-network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available using models such as OpenMP and CUDA, but this project has been the first attempt to apply the OpenACC model to the RBM.

  4. NMDA receptors in dopaminergic neurons are crucial for habit learning.

    Science.gov (United States)

    Wang, Lei Phillip; Li, Fei; Wang, Dong; Xie, Kun; Wang, Deheng; Shen, Xiaoming; Tsien, Joe Z

    2011-12-22

    Dopamine is crucial for habit learning. Activities of midbrain dopaminergic neurons are regulated by the cortical and subcortical signals among which glutamatergic afferents provide excitatory inputs. Cognitive implications of glutamatergic afferents in regulating and engaging dopamine signals during habit learning, however, remain unclear. Here, we show that mice with dopaminergic neuron-specific NMDAR1 deletion are impaired in a variety of habit-learning tasks, while normal in some other dopamine-modulated functions such as locomotor activities, goal-directed learning, and spatial reference memories. In vivo neural recording revealed that dopaminergic neurons in these mutant mice could still develop the cue-reward association responses; however, their conditioned response robustness was drastically blunted. Our results suggest that integration of glutamatergic inputs to DA neurons by NMDA receptors, likely by regulating associative activity patterns, is a crucial part of the cellular mechanism underpinning habit learning.

  5. Hebbian Learning is about contingency, not contiguity, and explains the emergence of predictive mirror neurons

    NARCIS (Netherlands)

    Keysers, Christian; Perrett, David I.; Gazzola, Valeria

    2014-01-01

    Hebbian Learning should not be reduced to contiguity, as it detects contingency and causality. Hebbian Learning accounts of mirror neurons make predictions that differ from associative learning: through Hebbian Learning, mirror neurons become dynamic networks that calculate predictions and prediction errors.

  6. Hebbian Learning is about contingency, not contiguity, and explains the emergence of predictive mirror neurons

    NARCIS (Netherlands)

    Keysers, Christian; Perrett, David I.; Gazzola, Valeria

    Hebbian Learning should not be reduced to contiguity, as it detects contingency and causality. Hebbian Learning accounts of mirror neurons make predictions that differ from associative learning: through Hebbian Learning, mirror neurons become dynamic networks that calculate predictions and prediction errors.

  7. On Scalable Deep Learning and Parallelizing Gradient Descent

    CERN Document Server

    AUTHOR|(CDS)2129036; Möckel, Rico; Baranowski, Zbigniew; Canali, Luca

    Speeding up gradient-based methods has been a subject of interest over the past years, with many practical applications, especially with respect to Deep Learning. Despite the fact that many optimizations have been done on a hardware level, the convergence rate of very large models remains problematic. Therefore, data-parallel methods next to mini-batch parallelism have been suggested to further decrease the training time of parameterized models using gradient-based methods. Nevertheless, asynchronous optimization has been considered too unstable for practical purposes due to a limited understanding of the underlying mechanisms. Recently, a theoretical contribution has been made which defines asynchronous optimization in terms of (implicit) momentum, due to the presence of a queuing model of gradients based on past parameterizations. This thesis mainly builds upon this work to construct a better understanding of why asynchronous optimization shows proportionally more divergent behavior when the number of parallel worker...

  8. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main reasons for our inadequate ability to treat epilepsy. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  9. Programmed to Learn? The Ontogeny of Mirror Neurons

    Science.gov (United States)

    Del Giudice, Marco; Manera, Valeria; Keysers, Christian

    2009-01-01

    Mirror neurons are increasingly recognized as a crucial substrate for many developmental processes, including imitation and social learning. Although there has been considerable progress in describing their function and localization in the primate and adult human brain, we still know little about their ontogeny. The idea that mirror neurons result…

  11. A new type of neurons for machine learning.

    Science.gov (United States)

    Fan, Fenglei; Cong, Wenxiang; Wang, Ge

    2017-07-27

    In machine learning, the artificial neural network is the mainstream approach. Such a network consists of many neurons. These neurons are of the same type, characterized by two features: (1) an inner product of an input vector and a matching weighting vector of trainable parameters and (2) a nonlinear excitation function. Here, we investigate the possibility of replacing the inner product with a quadratic function of the input vector, thereby upgrading the first-order neuron to the second-order neuron, empowering individual neurons and facilitating the optimization of neural networks. Also, numerical examples are provided to illustrate the feasibility and merits of the second-order neurons. Finally, further topics are discussed. Copyright © 2017 John Wiley & Sons, Ltd.
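
    As a rough sketch of the idea (the paper's exact quadratic parameterization may differ; the weights `w1`, `w2` and the XOR demonstration below are illustrative assumptions), a second-order neuron simply adds a squared inner product to the pre-activation:

```python
import numpy as np

def second_order_neuron(x, w1, w2, b, f=np.tanh):
    # Pre-activation combines the usual inner product with a squared
    # inner product, so the decision boundary becomes quadratic.
    return f(w1 @ x + (w2 @ x) ** 2 + b)

# A single second-order neuron can represent XOR, which no single
# first-order neuron can: (x1 - x2)^2 - 0.5 is positive exactly when
# the two binary inputs differ.
w1, w2, b = np.zeros(2), np.array([1.0, -1.0]), -0.5
step = lambda z: float(z > 0)
for x, target in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]:
    assert second_order_neuron(np.array(x, float), w1, w2, b, f=step) == target
```

    The XOR check illustrates the abstract's claim that the quadratic term empowers individual neurons: a task that normally needs a hidden layer is handled by one unit.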

  12. The function of mirror neurons in the learning process

    Directory of Open Access Journals (Sweden)

    Mara Daniel

    2017-01-01

    Full Text Available In recent years, the neurosciences have developed considerably, and many important theories have been elaborated through scientific research in the field. The main goal of neuroscience is to understand how groups of neurons interact to create behavior. Neuroscientists study the action of molecules, genes and cells, and explore the complex interactions involved in perception, movement, thought, emotion and learning. The fundamental building block of the nervous system is the nerve cell, the neuron. Neurons exchange information by sending electrical and chemical signals through connections called synapses. Discovered by a group of Italian researchers at the University of Parma, mirror neurons are a special class of nerve cells that play an important role in the direct, automatic and unconscious knowledge of the environment. These cortical neurons are activated not only when an action is performed, but also when we see the same action performed by someone else; they represent the neural mechanism by which the actions, intentions and emotions of others can be understood automatically. Mirror neurons are extremely important in childhood: thanks to them we learn a great deal in the early years, from smiling and asking for help to, in fact, all family and group behaviors and norms. People learn from what they see and sense in others. Mirror neurons are important for understanding the actions and intentions of other people and for learning new skills through mirroring. They are involved in planning and controlling actions, abstract thinking and memory. If a child observes an action, mirror neurons are activated and form new neural pathways as if the child had performed that action himself. Efficient mirror neuron activity contributes to good development in all areas, to higher emotional intelligence and to the ability to empathize with others.

  13. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    Science.gov (United States)

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
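
    The sampling scheme described here can be illustrated, very loosely, with a bootstrap particle filter on a toy two-state hidden Markov model with Poisson emissions; the transition matrix, rates and particle count below are illustrative assumptions, not the paper's network model:

```python
import numpy as np

rng = np.random.default_rng(0)

T = np.array([[0.9, 0.1],
              [0.1, 0.9]])     # hidden-state transition probabilities
rates = np.array([2.0, 8.0])   # Poisson emission rate for each state

def particle_filter(observations, n_particles=2000):
    # Each particle is one sampled hidden state; the particle
    # population approximates the posterior, loosely as spikes sample
    # world states in the model described above.
    particles = rng.integers(0, 2, size=n_particles)
    posterior_state1 = []
    for y in observations:
        # propagate every particle through the transition model
        particles = np.array([rng.choice(2, p=T[s]) for s in particles])
        # weight by the Poisson likelihood (the constant y! cancels)
        w = rates[particles] ** y * np.exp(-rates[particles])
        w /= w.sum()
        # resample: high-likelihood particles survive
        particles = rng.choice(particles, size=n_particles, p=w)
        posterior_state1.append(float((particles == 1).mean()))
    return posterior_state1

posterior = particle_filter([8, 9, 7])   # spike counts typical of state 1
```

    As in the paper's account, the variability of the samples is not a nuisance: it is what carries the posterior distribution.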

  14. Hebbian Learning is about contingency, not contiguity, and explains the emergence of predictive mirror neurons.

    Science.gov (United States)

    Keysers, Christian; Perrett, David I; Gazzola, Valeria

    2014-04-01

    Hebbian Learning should not be reduced to contiguity, as it detects contingency and causality. Hebbian Learning accounts of mirror neurons make predictions that differ from associative learning: Through Hebbian Learning, mirror neurons become dynamic networks that calculate predictions and prediction errors and relate to ideomotor theories. The social force of imitation is important for mirror neuron emergence and suggests canalization.
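
    A minimal sketch of the contingency-versus-contiguity point, using a covariance form of the Hebbian rule (an illustrative stand-in, not the authors' model): an input that merely co-occurs often with the postsynaptic response gains little weight, while a truly contingent input gains much:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Input A is truly contingent with the postsynaptic response; input B
# merely co-occurs with it often, with no predictive relation.
pre_A = rng.random(n) < 0.3
post = pre_A.copy()              # post fires if and only if A fired
pre_B = rng.random(n) < 0.6      # fires a lot, but independently

def covariance_hebb(pre, post, eta=0.01):
    # Covariance form of Hebbian learning: the weight change tracks
    # the correlation (contingency) of pre and post, not the raw
    # count of co-activations (contiguity).
    pre = pre.astype(float)
    post = post.astype(float)
    return float((eta * (pre - pre.mean()) * (post - post.mean())).sum())

w_A = covariance_hebb(pre_A, post)   # large: A predicts post
w_B = covariance_hebb(pre_B, post)   # near zero despite frequent pairing
```

    Both inputs are frequently co-active with the postsynaptic cell, yet only the contingent one is strengthened, which is the distinction the abstract draws against pure contiguity accounts.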

  15. Bifurcation of learning and structure formation in neuronal maps

    DEFF Research Database (Denmark)

    Marschler, Christian; Faust-Ellsässer, Carmen; Starke, Jens

    2014-01-01

    Most learning processes in neuronal networks happen on a much longer time scale than that of the underlying neuronal dynamics. It is therefore useful to analyze slowly varying macroscopic order parameters to explore a network's learning ability. We study the synaptic learning process giving rise to map formation in the laminar nucleus of the barn owl's auditory system. Using equation-free methods, we perform a bifurcation analysis of spatio-temporal structure formation in the associated synaptic-weight matrix. This enables us to analyze learning as a bifurcation process and follow the unstable states as well. A simple time translation of the learning window function shifts the bifurcation point of structure formation and goes along with traveling waves in the map, without changing the animal's sound localization performance.

  16. Encoding of fear learning and memory in distributed neuronal circuits.

    Science.gov (United States)

    Herry, Cyril; Johansen, Joshua P

    2014-12-01

    How sensory information is transformed by learning into adaptive behaviors is a fundamental question in neuroscience. Studies of auditory fear conditioning have revealed much about the formation and expression of emotional memories and have provided important insights into this question. Classical work focused on the amygdala as a central structure for fear conditioning. Recent advances, however, have identified new circuits and neural coding strategies mediating fear learning and the expression of fear behaviors. One area of research has identified key brain regions and neuronal coding mechanisms that regulate the formation, specificity and strength of fear memories. Other work has discovered critical circuits and neuronal dynamics by which fear memories are expressed through a medial prefrontal cortex pathway and coordinated activity across interconnected brain regions. Here we review these recent advances alongside prior work to provide a working model of the extended circuits and neuronal coding mechanisms mediating fear learning and memory.

  17. Maximization of Learning Speed Due to Neuronal Redundancy in Reinforcement Learning

    Science.gov (United States)

    Takiyama, Ken

    2016-11-01

    Adaptable neural activity contributes to the flexibility of human behavior, which is optimized in situations such as motor learning and decision making. Although learning signals in motor learning and decision making are low-dimensional, neural activity, which is very high dimensional, must be modified to achieve optimal performance based on the low-dimensional signal, resulting in a severe credit-assignment problem. Despite this problem, the human brain contains a vast number of neurons, leaving an open question: what is the functional significance of the huge number of neurons? Here, I address this question by analyzing a redundant neural network with a reinforcement-learning algorithm in which the numbers of neurons and output units are N and M, respectively. Because many combinations of neural activity can generate the same output under the condition of N ≫ M, I refer to the index N - M as neuronal redundancy. Although greater neuronal redundancy makes the credit-assignment problem more severe, I demonstrate that a greater degree of neuronal redundancy facilitates learning speed. Thus, in an apparent contradiction of the credit-assignment problem, I propose the hypothesis that a functional role of a huge number of neurons or a huge degree of neuronal redundancy is to facilitate learning speed.
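
    A hedged sketch of the learning scheme analyzed here (not the paper's exact model, and not a demonstration of its N-scaling result): node-perturbation reinforcement learning on a redundant network with N = 20 neurons and M = 1 output, where all names and parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def train(n_neurons, trials=300, eta=0.5, sigma=0.1, target=1.0):
    # Node-perturbation reinforcement learning: N neural activities
    # feed one output unit (M = 1), so many activity patterns give the
    # same output, which is the redundancy discussed in the abstract.
    c = np.ones(n_neurons) / n_neurons   # fixed readout weights
    a = np.zeros(n_neurons)              # activities to be learned
    errors = []
    for _ in range(trials):
        xi = sigma * rng.standard_normal(n_neurons)
        r_base = -(c @ a - target) ** 2           # unperturbed reward
        r_pert = -(c @ (a + xi) - target) ** 2    # perturbed reward
        # reinforce perturbations that increased the reward
        a += eta * (r_pert - r_base) / sigma**2 * xi
        errors.append(float((c @ a - target) ** 2))
    return errors

errors = train(20)
```

    Only the scalar reward guides the update, so credit is assigned to all N neurons at once through their shared perturbation, the credit-assignment setting the abstract analyzes.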

  18. The cerebellum: a neuronal learning machine?

    Science.gov (United States)

    Raymond, J. L.; Lisberger, S. G.; Mauk, M. D.

    1996-01-01

    Comparison of two seemingly quite different behaviors yields a surprisingly consistent picture of the role of the cerebellum in motor learning. Behavioral and physiological data about classical conditioning of the eyelid response and motor learning in the vestibulo-ocular reflex suggests that (i) plasticity is distributed between the cerebellar cortex and the deep cerebellar nuclei; (ii) the cerebellar cortex plays a special role in learning the timing of movement; and (iii) the cerebellar cortex guides learning in the deep nuclei, which may allow learning to be transferred from the cortex to the deep nuclei. Because many of the similarities in the data from the two systems typify general features of cerebellar organization, the cerebellar mechanisms of learning in these two systems may represent principles that apply to many motor systems.

  19. Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights.

    Science.gov (United States)

    Samadi, Arash; Lillicrap, Timothy P; Tweed, Douglas B

    2017-03-01

    Recent work in computer science has shown the power of deep learning driven by the backpropagation algorithm in networks of artificial neurons. But real neurons in the brain differ from most of these artificial ones in at least three crucial ways: they emit spikes rather than graded outputs, their inputs and outputs are related dynamically rather than by piecewise-smooth functions, and they have no known way to coordinate arrays of synapses in separate forward and feedback pathways so that they change simultaneously and identically, as they do in backpropagation. Given these differences, it is unlikely that current deep learning algorithms can operate in the brain, but we show that these problems can be solved by two simple devices: learning rules can approximate dynamic input-output relations with piecewise-smooth functions, and a variation on the feedback alignment algorithm can train deep networks without having to coordinate forward and feedback synapses. Our results also show that deep spiking networks learn much better if each neuron computes an intracellular teaching signal that reflects that cell's nonlinearity. With this mechanism, networks of spiking neurons show useful learning in synapses at least nine layers upstream from the output cells and perform well compared to other spiking networks in the literature on the MNIST digit recognition task.
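
    The feedback alignment idea can be sketched on a toy classification task (the network sizes, learning rate and task below are illustrative assumptions, and this rate-based sketch omits the paper's spiking dynamics): the backward pass uses a fixed random matrix B instead of the transpose of the forward weights, so forward and feedback synapses never need to be coordinated:

```python
import numpy as np

rng = np.random.default_rng(5)

n_in, n_hid = 4, 16
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0.0, 0.5, (1, n_hid))      # forward weights, layer 2
B = rng.normal(0.0, 0.5, (n_hid, 1))       # FIXED random feedback weights

X = rng.normal(0.0, 1.0, (256, n_in))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy labels

def forward(X):
    h = np.tanh(X @ W1.T)
    return h, 1.0 / (1.0 + np.exp(-h @ W2.T))

_, out = forward(X)
err_before = float(np.abs(out - y).mean())

eta = 0.5
for _ in range(300):
    h, out = forward(X)
    delta = out - y                        # output error signal
    # Backward pass through the fixed matrix B instead of W2.T:
    dh = (delta @ B.T) * (1.0 - h ** 2)
    W2 -= eta * delta.T @ h / len(X)
    W1 -= eta * dh.T @ X / len(X)

_, out = forward(X)
err_after = float(np.abs(out - y).mean())
accuracy = float(((out > 0.5) == (y > 0.5)).mean())
```

    Training still succeeds because the forward weights come to align with the fixed feedback, which is the mechanism the paper builds on.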

  20. Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning of RBMs

    CERN Document Server

    Desjardins, Guillaume; Bengio, Yoshua

    2010-01-01

    Restricted Boltzmann Machines (RBMs) have attracted a lot of attention of late, as one of the principal building blocks of deep networks. Training RBMs remains problematic, however, because of the intractability of their partition function. The maximum likelihood gradient requires a very robust sampler which can accurately sample from the model despite the loss of ergodicity often incurred during learning. While using Parallel Tempering in the negative phase of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a trade-off between computational complexity and high ergodicity, and requires careful hand-tuning of the temperatures. In this paper, we show that this trade-off is unnecessary. The choice of optimal temperatures can be automated by minimizing the average return time (a concept first proposed by [Katzgraber et al., 2006]), while chains can be spawned dynamically, as needed, thus minimizing the computational overhead. We show on a synthetic dataset, that this results in better likelihood ...
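
    For reference, the core of Parallel Tempering is the Metropolis rule for exchanging configurations between chains at adjacent temperatures; a minimal sketch (the energies and inverse temperatures below are illustrative values):

```python
import math

def swap_accept_prob(energy_cold, energy_hot, beta_cold, beta_hot):
    # Metropolis rule for exchanging the states of two tempered
    # chains: acceptance depends only on the differences in energy
    # and inverse temperature.
    delta = (beta_cold - beta_hot) * (energy_cold - energy_hot)
    return min(1.0, math.exp(delta))

# A swap that moves the lower-energy state to the colder chain is
# always accepted; the reverse move is accepted only stochastically.
always = swap_accept_prob(-5.0, -10.0, beta_cold=1.0, beta_hot=0.5)
sometimes = swap_accept_prob(-10.0, -5.0, beta_cold=1.0, beta_hot=0.5)
```

    The swap rate between neighboring temperatures is exactly what the temperature-spacing heuristics discussed in the abstract (average return time) are tuning.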

  1. Iterative learning control algorithm for spiking behavior of neuron model

    Science.gov (United States)

    Li, Shunan; Li, Donghui; Wang, Jiang; Yu, Haitao

    2016-11-01

    Controlling neurons to generate a desired or normal spiking behavior is a fundamental building block of the treatment of many neurologic diseases. The objective of this work is to develop a novel control method, a closed-loop proportional integral (PI)-type iterative learning control (ILC) algorithm, to control the spiking behavior of model neurons. To verify the feasibility and effectiveness of the proposed method, two single-compartment standard models of different neuronal excitability are considered: the Hodgkin-Huxley (HH) model for class 1 neural excitability and the Morris-Lecar (ML) model for class 2 neural excitability. ILC has remarkable advantages for processes that are repetitive in nature. To further highlight the superiority of the proposed method, the performance of the iterative learning controller is compared to that of a classical PI controller. In both the classical PI control and the PI control combined with ILC, appropriate background noise is added to the neuron models to approach the problem under more realistic biophysical conditions. Simulation results show that controller performance is more favorable when ILC is used, regardless of the neuron's excitability class and of the firing pattern of the desired trajectory. The error between real and desired output is much smaller under the ILC control signal, suggesting that ILC of a neuron's spiking behavior is more accurate.
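
    The PI-type ILC update can be sketched on a toy linear plant (the paper drives Hodgkin-Huxley and Morris-Lecar dynamics; the FIR plant and the gains below are illustrative assumptions): each repetition reuses the entire error trajectory of the previous one:

```python
import numpy as np

def plant(u):
    # Toy FIR plant standing in for the neuron model (the paper uses
    # Hodgkin-Huxley and Morris-Lecar dynamics instead).
    y = 0.5 * u
    y[1:] += 0.2 * u[:-1]
    return y

def pi_ilc(reference, iterations=100, kp=1.0, ki=0.2):
    # PI-type ILC: after each repetition, the proportional term uses
    # the error trajectory directly and the integral term uses its
    # running sum over trajectory time.
    u = np.zeros_like(reference)
    for _ in range(iterations):
        e = reference - plant(u)
        u = u + kp * e + ki * np.cumsum(e)
    return u, float(np.max(np.abs(reference - plant(u))))

t = np.linspace(0.0, 2.0 * np.pi, 30)
reference = np.sin(t)                 # desired output trajectory
u, tracking_error = pi_ilc(reference)
```

    Because the plant repeats identically from trial to trial, the learned input converges and the tracking error shrinks toward zero over iterations, which is the property ILC exploits.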

  2. Diffusion and extrusion shape standing calcium gradients during ongoing parallel fiber activity in dendrites of Purkinje neurons.

    Science.gov (United States)

    Schmidt, Hartmut; Arendt, Oliver; Eilers, Jens

    2012-09-01

    Synaptically induced calcium transients in dendrites of Purkinje neurons (PNs) play a key role in the induction of plasticity in the cerebellar cortex (Ito, Physiol Rev 81:1143-1195, 2001). Long-term depression at parallel fiber-PN synapses can be induced by stimulation paradigms that are associated with long-lasting (>1 min) calcium signals. These signals remain strictly localized (Eilers et al., Learn Mem 3:159-168, 1997), an observation that was rather unexpected, given the high concentration of the mobile endogenous calcium-binding proteins parvalbumin and calbindin in PNs (Fierro and Llano, J Physiol (Lond) 496:617-625, 1996; Kosaka et al., Exp Brain Res 93:483-491, 1993). By combining two-photon calcium imaging experiments in acute slices with numerical computer simulations, we found that significant calcium diffusion out of active branches indeed takes place. It is outweighed, however, by rapid and powerful calcium extrusion along the dendritic shaft. The close interplay of diffusion and extrusion defines the spread of calcium between active and inactive dendritic branches, forming a steep calcium gradient with drop ranges of ~13 μm (interquartile range, 10-18 μm).

  3. A neuronal learning rule for sub-millisecond temporal coding

    Science.gov (United States)

    Gerstner, Wulfram; Kempter, Richard; van Hemmen, J. Leo; Wagner, Hermann

    1996-09-01

    A paradox that exists in auditory and electrosensory neural systems [1,2] is that they encode behaviourally relevant signals in the range of a few microseconds with neurons that are at least one order of magnitude slower. The importance of temporal coding in neural information processing is not yet clear [3-8]. A central question is whether neuronal firing can be more precise than the time constants of the neuronal processes involved [9]. Here we address this problem using the auditory system of the barn owl as an example. We present a modelling study based on computer simulations of a neuron in the laminar nucleus. Three observations explain the paradox. First, spiking of an 'integrate-and-fire' neuron driven by excitatory postsynaptic potentials with a width at half-maximum height of 250 μs has an accuracy of 25 μs if the presynaptic signals arrive coherently. Second, the necessary degree of coherence in the signal arrival times can be attained during ontogenetic development by virtue of an unsupervised hebbian learning rule. Learning selects connections with matching delays from a broad distribution of axons with random delays. Third, the learning rule also selects the correct delays from two independent groups of inputs, for example, from the left and right ear.
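
    The first observation (precise spiking from coherently arriving EPSPs) can be sketched by summing stereotyped EPSPs with and without arrival-time jitter; the alpha-function EPSP shape and all parameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

def membrane_potential(spike_times, t, tau=0.1):
    # Sum of stereotyped alpha-function EPSPs; tau = 0.1 ms gives a
    # width at half-maximum of roughly 250 microseconds.
    s = t[None, :] - np.asarray(spike_times)[:, None]
    epsps = np.where(s > 0.0, (s / tau) * np.exp(1.0 - s / tau), 0.0)
    return epsps.sum(axis=0)

t = np.linspace(0.0, 5.0, 5001)                 # time axis in ms
coherent = np.full(100, 2.0)                    # all inputs arrive together
jittered = 2.0 + rng.normal(0.0, 0.5, 100)      # 0.5 ms arrival jitter

v_coherent = membrane_potential(coherent, t)
v_jittered = membrane_potential(jittered, t)
# Coherent arrival gives a much taller, sharper peak, so a fixed firing
# threshold is crossed with a precision finer than the EPSP width.
```

    This is why the Hebbian selection of matching delays matters: only coherent arrival produces the sharp depolarization that supports microsecond-scale spike timing.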

  4. Programmed to learn? The ontogeny of mirror neurons

    NARCIS (Netherlands)

    Del Giudice, Marco; Manera, Valeria; Keysers, Christian

    2009-01-01

    Mirror neurons are increasingly recognized as a crucial substrate for many developmental processes, including imitation and social learning. Although there has been considerable progress in describing their function and localization in the primate and adult human brain, we still know little about their ontogeny. The idea that mirror neurons result…

  6. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

    Histopathology images are critical for medical diagnosis, e.g., of cancer and its treatment. A standard histopathology slice can easily be scanned at a high resolution of, say, 200,000×200,000 pixels. These high-resolution images can make most existing image processing tools infeasible or less effective when operated on a single machine with limited memory, disk space and computing power. In this paper, we propose an algorithm tackling this newly emerging "big data" problem utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The proposed framework provides an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly supervised learning, covering image classification, segmentation and clustering. When a max-margin concept is adopted for different clusters, we obtain further improvement in clustering performance.

  7. Spatio-temporal credit assignment in neuronal population learning.

    Science.gov (United States)

    Friedrich, Johannes; Urbanczik, Robert; Senn, Walter

    2011-06-01

    In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate and fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme to that of temporal difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.
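
    The cascade of synaptic memory traces can be caricatured with three exponentially decaying traces gated in sequence (the time constants, gains and the 5-step reward delay below are illustrative assumptions, not the authors' spiking model):

```python
def update_cascade(pre, post, decision, reward, traces, w,
                   decays=(0.9, 0.95, 0.99), eta=0.1):
    # Three chained synaptic traces: pre/post coincidence feeds the
    # first, the behavioural decision gates it into the second, and a
    # slow third trace survives until the (possibly delayed) reward
    # converts it into an actual weight change.
    t1, t2, t3 = traces
    t1 = decays[0] * t1 + pre * post
    t2 = decays[1] * t2 + t1 * decision
    t3 = decays[2] * t3 + t2
    w = w + eta * reward * t3
    return (t1, t2, t3), w

# A synapse active at the moment of the decision is still credited by a
# reward arriving five steps later; an inactive synapse is not.
w_active = w_control = 0.0
tr_active = tr_control = (0.0, 0.0, 0.0)
for step in range(6):
    active = 1.0 if step == 0 else 0.0    # pre, post and decision at step 0
    reward = 1.0 if step == 5 else 0.0    # reward delayed by 5 steps
    tr_active, w_active = update_cascade(active, active, active, reward,
                                         tr_active, w_active)
    tr_control, w_control = update_cascade(0.0, active, active, reward,
                                           tr_control, w_control)
```

    The slow trace bridges the delay between decision and reinforcement, which is the temporal credit-assignment mechanism the abstract describes.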

  8. Learning and structure of neuronal networks

    Indian Academy of Sciences (India)

    Kiran M Kolwankar; Quansheng Ren; Areejit Samal; Jürgen Jost

    2011-11-01

    We study the effect of learning dynamics on network topology. Firstly, a network of discrete dynamical systems is considered for this purpose and the coupling strengths are made to evolve according to a temporal learning rule that is based on the paradigm of spike-time-dependent plasticity (STDP). This incorporates the necessary competition between different edges. The final network we obtain is robust and has a broad degree distribution. We then study the dynamics of the structure of a formal neural network. For properly chosen input signals, there exists a steady state with a residual network. We compare the motif profile of such a network with that of the real neural network of C. elegans and identify robust qualitative similarities. In particular, our extensive numerical simulations show that the resulting STDP-driven network is robust under variations of the model parameters.
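
    A minimal sketch of the pairwise STDP window underlying such a temporal learning rule (the amplitudes and time constant below are illustrative; a slight asymmetry toward depression is one common way to obtain competition between edges):

```python
import math

def stdp(dt, a_plus=0.10, a_minus=0.12, tau=20.0):
    # Pairwise STDP window (dt = t_post - t_pre, in ms): potentiation
    # when the presynaptic spike precedes the postsynaptic one,
    # depression otherwise. The slightly larger depression amplitude
    # makes synapses compete for influence over the postsynaptic cell.
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

    For example, stdp(10.0) is a small potentiation and stdp(-10.0) a slightly larger depression, so edges whose spikes consistently precede the postsynaptic response win out over the rest.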

  9. Learning in Parallel: Using Parallel Corpora to Enhance Written Language Acquisition at the Beginning Level

    Science.gov (United States)

    Bluemel, Brody

    2014-01-01

    This article illustrates the pedagogical value of incorporating parallel corpora in foreign language education. It explores the development of a Chinese/English parallel corpus designed specifically for pedagogical application. The corpus tool was created to aid language learners in reading comprehension and writing development by making foreign…

  10. Parallelized TCSPC for dynamic intravital fluorescence lifetime imaging: quantifying neuronal dysfunction in neuroinflammation.

    Directory of Open Access Journals (Sweden)

    Jan Leo Rinnenthal

    Full Text Available Two-photon laser-scanning microscopy has revolutionized our view on vital processes by revealing motility and interaction patterns of various cell subsets in hardly accessible organs (e.g. the brain) in living animals. However, current technology is still insufficient to elucidate the mechanisms of organ dysfunction as a prerequisite for developing new therapeutic strategies, since it renders only sparse information about the molecular basis of cellular response within tissues in health and disease. In the context of imaging, Förster resonant energy transfer (FRET) is one of the most adequate tools to probe molecular mechanisms of cell function. As a calibration-free technique, fluorescence lifetime imaging (FLIM) is superior for quantifying FRET in vivo. Currently, its main limitation is the acquisition speed in the context of deep-tissue 3D and 4D imaging. Here we present a parallelized time-correlated single-photon counting point detector (p-TCSPC) (i) for dynamic single-beam scanning FLIM of large 3D areas on the range of hundreds of milliseconds, relevant in the context of immune-induced pathologies, as well as (ii) for ultrafast 2D FLIM in the range of tens of milliseconds, a scale relevant for cell physiology. We demonstrate its power in dynamic deep-tissue intravital imaging, as compared to multi-beam scanning time-gated FLIM suitable for fast data acquisition and compared to highly sensitive single-channel TCSPC adequate to detect low fluorescence signals. Using p-TCSPC, 256×256 pixel FLIM maps (300×300 µm²) are acquired within 468 ms, while 131×131 pixel FLIM maps (75×75 µm²) can be acquired every 82 ms at 115 µm depth in the spinal cord of CerTN L15 mice. The CerTN L15 mice express a FRET-based Ca-biosensor in certain neuronal subsets. Our new technology allows us to perform time-lapse 3D intravital FLIM (4D FLIM) in the brain stem of CerTN L15 mice affected by experimental autoimmune encephalomyelitis and, thereby, to truly quantify

  11. Severely impaired learning and altered neuronal morphology in mice lacking NMDA receptors in medium spiny neurons.

    Directory of Open Access Journals (Sweden)

    Lisa R Beutler

    Full Text Available The striatum is composed predominantly of medium spiny neurons (MSNs) that integrate excitatory, glutamatergic inputs from the cortex and thalamus, and modulatory dopaminergic inputs from the ventral midbrain to influence behavior. Glutamatergic activation of AMPA, NMDA, and metabotropic receptors on MSNs is important for striatal development and function, but the roles of each of these receptor classes remain incompletely understood. Signaling through NMDA-type glutamate receptors (NMDARs) in the striatum has been implicated in various motor and appetitive learning paradigms. In addition, signaling through NMDARs influences neuronal morphology, which could underlie their role in mediating learned behaviors. To study the role of NMDARs on MSNs in learning and in morphological development, we generated mice lacking the essential NR1 subunit, encoded by the Grin1 gene, selectively in MSNs. Although these knockout mice appear normal and display normal 24-hour locomotion, they have severe deficits in motor learning, operant conditioning and active avoidance. In addition, the MSNs from these knockout mice have smaller cell bodies and decreased dendritic length compared to littermate controls. We conclude that NMDAR signaling in MSNs is critical for normal MSN morphology and many forms of learning.

  12. Deciphering mirror neurons: rational decision versus associative learning.

    Science.gov (United States)

    Khalil, Elias L

    2014-04-01

    The rational-decision approach is superior to the associative-learning approach of Cook et al. at explaining why mirror neurons fire or do not fire - even when the stimulus is the same. The rational-decision approach is superior because it starts with the analysis of the intention of the organism, that is, with the identification of the specific objective or goal that the organism is trying to maximize.

  13. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    Science.gov (United States)

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.

  14. Learning control system design based on 2-D theory - An application to parallel link manipulator

    Science.gov (United States)

    Geng, Z.; Carroll, R. L.; Lee, J. D.; Haynes, L. H.

    1990-01-01

    An approach to iterative learning control system design based on two-dimensional system theory is presented. A two-dimensional model for the iterative learning control system which reveals the connections between learning control systems and two-dimensional system theory is established. A learning control algorithm is proposed, and the convergence of learning using this algorithm is guaranteed by two-dimensional stability. The learning algorithm is applied successfully to the trajectory tracking control problem for a parallel link robot manipulator. The excellent performance of this learning algorithm is demonstrated by the computer simulation results.
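The iterative learning scheme described above can be sketched with a standard P-type ILC update on a toy first-order plant. The plant, gain, and trajectory below are illustrative assumptions, not the paper's parallel-link manipulator model:

```python
# P-type iterative learning control (ILC) on a toy first-order discrete plant:
#   y(t+1) = a*y(t) + b*u(t),   update: u_{k+1}(t) = u_k(t) + gamma*e_k(t+1)
# With gamma = 1/b the tracking error vanishes one time step per iteration.
import math

a, b, gamma = 0.5, 1.0, 1.0    # plant parameters and learning gain (illustrative)
T = 25                          # trial horizon
yd = [math.sin(0.2 * t) for t in range(T + 1)]  # desired trajectory, yd[0] = 0

u = [0.0] * T                   # initial control guess
for iteration in range(30):     # repeated trials from the same initial state
    y = [0.0] * (T + 1)
    for t in range(T):
        y[t + 1] = a * y[t] + b * u[t]
    e = [yd[t] - y[t] for t in range(T + 1)]          # tracking error
    u = [u[t] + gamma * e[t + 1] for t in range(T)]   # ILC update

max_err = max(abs(yd[t] - y[t]) for t in range(T + 1))
```

Because gamma = 1/b here, the error is driven to zero one time step per iteration, so 30 iterations suffice on a 25-step horizon; for general gains, convergence requires |1 - gamma*b| < 1, which is the kind of condition the 2-D stability analysis guarantees.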

  15. Associative and sensorimotor learning for parenting involves mirror neurons under the influence of oxytocin.

    Science.gov (United States)

    Ho, S Shaun; Macdonald, Adam; Swain, James E

    2014-04-01

    Mirror neuron-based associative learning may be understood according to associative learning theories, in addition to sensorimotor learning theories. This is important for a comprehensive understanding of the role of mirror neurons and related hormone modulators, such as oxytocin, in complex social interactions such as among parent-infant dyads and in examples of mirror neuron function that involve abnormal motor systems such as depression.

  16. Spatial learning depends on both the addition and removal of new hippocampal neurons.

    Directory of Open Access Journals (Sweden)

    David Dupret

    2007-08-01

    Full Text Available The role of adult hippocampal neurogenesis in spatial learning remains a matter of debate. Here, we show that spatial learning modifies neurogenesis by inducing a cascade of events that resembles the selective stabilization process characterizing development. Learning promotes survival of relatively mature neurons, apoptosis of more immature cells, and finally, proliferation of neural precursors. These are three interrelated events mediating learning. Thus, blocking apoptosis impairs memory and inhibits learning-induced cell survival and cell proliferation. In conclusion, during learning, similar to the selective stabilization process, neuronal networks are sculpted by a tightly regulated selection and suppression of different populations of newly born neurons.

  17. Hemispheric asymmetry in new neurons in adulthood is associated with vocal learning and auditory memory.

    Directory of Open Access Journals (Sweden)

    Shuk C Tsoi

    Full Text Available Many brain regions exhibit lateral differences in structure and function, and also incorporate new neurons in adulthood, thought to function in learning and in the formation of new memories. However, the contribution of new neurons to hemispheric differences in processing is unknown. The present study combines cellular, behavioral, and physiological methods to address (1) whether new neuron incorporation differs between the brain hemispheres, and (2) the degree to which hemispheric lateralization of new neurons correlates with behavioral and physiological measures of learning and memory. The songbird provides a model system for assessing the contribution of new neurons to hemispheric specialization because songbird brain areas for vocal processing are functionally lateralized and receive a continuous influx of new neurons in adulthood. In adult male zebra finches, we quantified new neurons in the caudomedial nidopallium (NCM), a forebrain area involved in discrimination and memory for the complex vocalizations of individual conspecifics. We assessed song learning and recorded neural responses to song in NCM. We found significantly more new neurons labeled in left than in right NCM; moreover, the degree of asymmetry in new neuron numbers was correlated with the quality of song learning and strength of neuronal memory for recently heard songs. In birds with experimentally impaired song quality, the hemispheric difference in new neurons was diminished. These results suggest that new neurons may contribute to an allocation of function between the hemispheres that underlies the learning and processing of complex signals.

  18. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON.

    Science.gov (United States)

    Lytton, William W; Seidenstein, Alexandra H; Dura-Bernal, Salvador; McDougal, Robert A; Schürmann, Felix; Hines, Michael L

    2016-10-01

    Large multiscale neuronal network simulations are of increasing value as more big data are gathered about brain wiring and organization under the auspices of a current major research initiative, such as Brain Research through Advancing Innovative Neurotechnologies. The development of these models requires new simulation technologies. We describe here the current use of the NEURON simulator with message passing interface (MPI) for simulation in the domain of moderately large networks on commonly available high-performance computers (HPCs). We discuss the basic layout of such simulations, including the methods of simulation setup, the run-time spike-passing paradigm, and postsimulation data storage and data management approaches. Using the Neuroscience Gateway, a portal for computational neuroscience that provides access to large HPCs, we benchmark simulations of neuronal networks of different sizes (500-100,000 cells), and using different numbers of nodes (1-256). We compare three types of networks, composed of either Izhikevich integrate-and-fire neurons (I&F), single-compartment Hodgkin-Huxley (HH) cells, or a hybrid network with half of each. Results show simulation run time increased approximately linearly with network size and decreased almost linearly with the number of nodes. Networks with I&F neurons were faster than HH networks, although differences were small since all tested cells were point neurons with a single compartment.
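The run-time spike-passing paradigm mentioned in the abstract can be illustrated schematically: each rank advances its own cells for one minimum-delay interval, then all ranks exchange (gid, spike time) pairs so every rank can deliver events to its local targets. The sketch below simulates that exchange pattern sequentially in plain Python; there is no MPI here, and the firing rule and network sizes are invented placeholders:

```python
# Schematic of interval-based spike exchange across ranks (the allgather pattern),
# simulated sequentially. Each rank owns a disjoint set of cell gids (round-robin).
NRANKS, NCELLS, TSTOP, INTERVAL = 4, 16, 20, 5

owned = {r: [g for g in range(NCELLS) if g % NRANKS == r] for r in range(NRANKS)}

def local_spikes(rank, t0, t1):
    """Placeholder firing rule: gid g fires whenever t % (g + 2) == 0."""
    return [(g, t) for g in owned[rank] for t in range(t0, t1) if t % (g + 2) == 0]

global_log = {r: [] for r in range(NRANKS)}   # what each rank has seen
for t0 in range(0, TSTOP, INTERVAL):
    # 1) each rank integrates its own cells over one minimum-delay interval
    outgoing = [local_spikes(r, t0, t0 + INTERVAL) for r in range(NRANKS)]
    # 2) allgather: every rank receives the union of all ranks' spike buffers
    exchanged = sorted(s for buf in outgoing for s in buf)
    # 3) each rank delivers events to its local synapses (here: just record them)
    for r in range(NRANKS):
        global_log[r].extend(exchanged)
```

The point of exchanging only once per minimum synaptic delay, rather than every time step, is that ranks can integrate independently between exchanges; every rank nevertheless ends up with an identical global spike history.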

  19. The long-term structural plasticity of cerebellar parallel fiber axons and its modulation by motor learning.

    Science.gov (United States)

    Carrillo, Jennifer; Cheng, Shao-Ying; Ko, Kwang Woo; Jones, Theresa A; Nishiyama, Hiroshi

    2013-05-08

    Presynaptic axonal varicosities, like postsynaptic spines, are dynamically added and eliminated even in mature neuronal circuitry. To study the role of this axonal structural plasticity in behavioral learning, we performed two-photon in vivo imaging of cerebellar parallel fibers (PFs) in adult mice. PFs make excitatory synapses on Purkinje cells (PCs) in the cerebellar cortex, and long-term potentiation and depression at PF-PC synapses are thought to play crucial roles in cerebellar-dependent learning. Time-lapse vital imaging of PFs revealed that, under a control condition (no behavioral training), ∼10% of PF varicosities appeared and disappeared over a period of 2 weeks without changing the total number of varicosities. The fraction of dynamic PF varicosities significantly diminished during training on an acrobatic motor skill learning task, largely because of reduced addition of new varicosities. Thus, this form of motor learning was associated with greater structural stability of PFs and a slight decrease in the total number of varicosities. Together with prior findings that the number of PF-PC synapses increases during similar training, our results suggest that acrobatic motor skill learning involves a reduction of some PF inputs and a strengthening of others, probably via the conversion of some preexisting PF varicosities into multisynaptic terminals.

  20. Dutch Lifelong learning : A Policy Perspective bringing together parallel Worlds

    NARCIS (Netherlands)

    van Dellen, Teije; Klercq, Jumbo; Buiskool, Bert-Jan

    2016-01-01

    Lifelong learning has never been an integral part of the Dutch educational culture. Nevertheless, many adults (about 17.8% in 2015) nowadays participate, whether or not they have finished initial education, in second, third, or further learning paths through…

  1. The Languages of Neurons: An Analysis of Coding Mechanisms by Which Neurons Communicate, Learn and Store Information

    Directory of Open Access Journals (Sweden)

    Morris H. Baslow

    2009-11-01

    Full Text Available In this paper evidence is provided that individual neurons possess language, and that the basic unit for communication consists of two neurons and their entire field of interacting dendritic and synaptic connections. While information processing in the brain is highly complex, each neuron uses a simple mechanism for transmitting information. This is in the form of temporal electrophysiological action potentials or spikes (S) operating on a millisecond timescale that, along with pauses (P) between spikes, constitute a two letter “alphabet” that generates meaningful frequency-encoded signals or neuronal S/P “words” in a primary language. However, when a word from an afferent neuron enters the dendritic-synaptic-dendritic field between two neurons, it is translated into a new frequency-encoded word with the same meaning, but in a different spike-pause language, that is delivered to and understood by the efferent neuron. It is suggested that this unidirectional inter-neuronal language-based word translation step is of utmost importance to brain function in that it allows for variations in meaning to occur. Thus, structural or biochemical changes in dendrites or synapses can produce novel words in the second language that have changed meanings, allowing for a specific signaling experience, either external or internal, to modify the meaning of an original word (learning, and store the learned information of that experience (memory in the form of an altered dendritic-synaptic-dendritic field.

  2. Parallelizing Ant Colony Optimization via Area of Expertise Learning

    Science.gov (United States)

    2007-09-13

    …solutions for all but the most trivial instances. Ant colony optimization (ACO) is a simple metaheuristic that can effectively solve problems in these… The "area of expertise" technique is applied to two problem domains: gridworld and the traveling salesman problem. 1.1 Motivation. ACO is a metaheuristic that generates… independent ant agents; an obvious extension of the ant colony framework is to implement the algorithm in a parallel environment. One of the main…

  3. On learning time delays between the spikes from different input neurons in a biophysical model of a pyramidal neuron.

    Science.gov (United States)

    Koutsou, Achilleas; Bugmann, Guido; Christodoulou, Chris

    2015-10-01

    Biological systems are able to recognise temporal sequences of stimuli or compute in the temporal domain. In this paper we are exploring whether a biophysical model of a pyramidal neuron can detect and learn systematic time delays between the spikes from different input neurons. In particular, we investigate whether it is possible to reinforce pairs of synapses separated by a dendritic propagation time delay corresponding to the arrival time difference of two spikes from two different input neurons. We examine two subthreshold learning approaches where the first relies on the backpropagation of EPSPs (excitatory postsynaptic potentials) and the second on the backpropagation of a somatic action potential, whose production is supported by a learning-enabling background current. The first approach does not provide a learning signal that sufficiently differentiates between synapses at different locations, while in the second approach, somatic spikes do not provide a reliable signal distinguishing arrival time differences of the order of the dendritic propagation time. It appears that the firing of pyramidal neurons shows little sensitivity to heterosynaptic spike arrival time differences of several milliseconds. This neuron is therefore unlikely to be able to learn to detect such differences.

  4. The Drive-Reinforcement Neuronal Model: A Real-Time Learning Mechanism For Unsupervised Learning

    Science.gov (United States)

    Klopf, A. H.

    1988-05-01

    The drive-reinforcement neuronal model is described as an example of a newly discovered class of real-time learning mechanisms that correlate earlier derivatives of inputs with later derivatives of outputs. The drive-reinforcement neuronal model has been demonstrated to predict a wide range of classical conditioning phenomena in animal learning. A variety of classes of connectionist and neural network models have been investigated in recent years (Hinton and Anderson, 1981; Levine, 1983; Barto, 1985; Feldman, 1985; Rumelhart and McClelland, 1986). After a brief review of these models, discussion will focus on the class of real-time models because they appear to be making the strongest contact with the experimental evidence of animal learning. Theoretical models in physics have inspired Boltzmann machines (Ackley, Hinton, and Sejnowski, 1985) and what are sometimes called Hopfield networks (Hopfield, 1982; Hopfield and Tank, 1986). These connectionist models utilize symmetric connections and adaptive equilibrium processes during which the networks settle into minimal energy states. Networks utilizing error-correction learning mechanisms go back to Rosenblatt's (1962) perceptron and Widrow's (1962) adaline, and currently take the form of back propagation networks (Parker, 1985; Rumelhart, Hinton, and Williams, 1985, 1986). These networks require a "teacher" or "trainer" to provide error signals indicating the difference between desired and actual responses. Networks employing real-time learning mechanisms, in which the temporal association of signals is of fundamental importance, go back to Hebb (1949). Real-time learning mechanisms may require no teacher or trainer and thus may lend themselves to unsupervised learning. Such models have been extended by Klopf (1972, 1982), who introduced the notions of synaptic eligibility and generalized reinforcement. Sutton and Barto (1981) advanced this class of models by proposing that a derivative of the theoretical neuron's output…
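The core idea of correlating earlier input derivatives with later output derivatives can be made concrete with a toy classical-conditioning simulation. This is a schematic reconstruction in the spirit of Klopf's rule, with invented constants and stimulus timing; only positive input changes are treated as eligible, and the CS weight is updated once per trial:

```python
# Toy drive-reinforcement learning: the weight change at time t correlates the
# current output change dy(t) with earlier positive input changes dx(t-j),
# weighted by a decaying eligibility c_j and the recent weight magnitude.
TAU = 5
C = [0.0] + [0.5 ** j for j in range(1, TAU + 1)]   # eligibility weights c_1..c_tau

def run_trial(w_cs, w_us=1.0, lr=1.0, T=8):
    x_cs = [1.0 if 2 <= t <= 4 else 0.0 for t in range(T)]   # CS precedes the US
    x_us = [1.0 if t == 5 else 0.0 for t in range(T)]        # US follows CS offset
    y = [min(1.0, max(0.0, w_cs * x_cs[t] + w_us * x_us[t])) for t in range(T)]
    dw = 0.0
    for t in range(1, T):
        dy = y[t] - y[t - 1]
        for j in range(1, min(t, TAU) + 1):
            prev = x_cs[t - j - 1] if t - j >= 1 else 0.0
            dx = max(0.0, x_cs[t - j] - prev)                # positive changes only
            dw += lr * dy * C[j] * abs(w_cs) * dx
    return w_cs + dw

w = 0.1
for trial in range(30):
    w = run_trial(w)
# The CS weight grows across trials: acquisition of a conditioned response.
```

Because the US-driven rise in output occurs a few steps after the CS onset, the CS synapse is still eligible and its weight grows, while an input arriving after the output change would gain nothing.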

  5. Repeated Stimulation of Cultured Networks of Rat Cortical Neurons Induces Parallel Memory Traces

    Science.gov (United States)

    le Feber, Joost; Witteveen, Tim; van Veenendaal, Tamar M.; Dijkstra, Jelle

    2015-01-01

    During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and…

  6. Context Fear Learning Specifically Activates Distinct Populations of Neurons in Amygdala and Hypothalamus

    Science.gov (United States)

    Trogrlic, Lidia; Wilson, Yvette M.; Newman, Andrew G.; Murphy, Mark

    2011-01-01

    The identity and distribution of neurons that are involved in any learning or memory event is not known. In previous studies, we identified a discrete population of neurons in the lateral amygdala that show learning-specific activation of a c-"fos"-regulated transgene following context fear conditioning. Here, we have extended these studies to…

  7. Parallel Alterations of Functional Connectivity during Execution and Imagination after Motor Imagery Learning

    Science.gov (United States)

    Zhang, Rushao; Hui, Mingqi; Long, Zhiying; Zhao, Xiaojie; Yao, Li

    2012-01-01

    Background Neural substrates underlying motor learning have been widely investigated with neuroimaging technologies. Investigations have illustrated the critical regions of motor learning and further revealed parallel alterations of functional activation during imagination and execution after learning. However, little is known about the functional connectivity associated with motor learning, especially motor imagery learning, although benefits from functional connectivity analysis attract more attention to the related explorations. We explored whether motor imagery (MI) and motor execution (ME) shared parallel alterations of functional connectivity after MI learning. Methodology/Principal Findings Graph theory analysis, which is widely used in functional connectivity exploration, was performed on the functional magnetic resonance imaging (fMRI) data of MI and ME tasks before and after 14 days of consecutive MI learning. The control group had no learning. Two measures, connectivity degree and interregional connectivity, were calculated and further assessed at a statistical level. Two interesting results were obtained: (1) The connectivity degree of the right posterior parietal lobe decreased in both MI and ME tasks after MI learning in the experimental group; (2) The parallel alterations of interregional connectivity related to the right posterior parietal lobe occurred in the supplementary motor area for both tasks. Conclusions/Significance These computational results may provide the following insights: (1) The establishment of motor schema through MI learning may induce the significant decrease of connectivity degree in the posterior parietal lobe; (2) The decreased interregional connectivity between the supplementary motor area and the right posterior parietal lobe in post-test implicates the dissociation between motor learning and task performing. 
These findings and explanations further revealed the neural substrates underpinning MI learning and supported that…

  8. Motor and perceptual sequence learning: different time course of parallel processes.

    Science.gov (United States)

    Dirnberger, Georg; Novak-Knollmueller, Judith

    2013-07-10

    The aim was to determine the extent and time course of motor and perceptual learning in a procedural learning task, and the relation of these two processes. Because environmental constraints modulate the relative impact of different learning mechanisms, we chose a simple learning task similar to real-life exercise. Thirty-four healthy individuals performed a visuomotor serial reaction time task. Learning blocks with high stimulus-response compatibility were practiced repeatedly; in between these, participants performed test blocks with the same or a different (mirror-inverted, or new) stimulus sequence and/or with the same or a different (mirror-inverted) stimulus-response allocation. This design allowed us to measure the progress of motor learning and perceptual learning independently. Results showed that in the learning blocks, a steady reduction of the reaction times indicated that - as expected - participants improved their skills continuously. Analysis of the test blocks indicated that both motor learning and perceptual learning were significant. The two mechanisms were correlated (r = 0.62, P < …); perceptual learning was more stable but slower. In conclusion, in a simple visuomotor learning task, participants can learn the motor sequence and the stimulus sequence in parallel. The positive correlation of motor and perceptual learning suggests that the two mechanisms act in synergy and are not alternative opposing strategies. The impact of these two learning mechanisms changes over time: motor learning sets in later and becomes relevant only in the course of training.

  9. Shifts in sensory neuron identity parallel differences in pheromone preference in the European corn borer

    Directory of Open Access Journals (Sweden)

    Fotini A Koutroumpa

    2014-10-01

    Full Text Available Pheromone communication relies on highly specific signals sent and received between members of the same species. However, how pheromone specificity is determined in moth olfactory circuits remains unknown. Here we provide the first glimpse into the mechanism that generates this specificity in Ostrinia nubilalis, in which a single locus causes strain-specific, diametrically opposed preferences for a 2-component pheromone blend. Previously we found pheromone preference to be correlated with the strain- and hybrid-specific relative antennal response to both pheromone components. This led to the current study, in which we detail the underlying mechanism of this differential response through chemotopic mapping of the pheromone detection circuit in the antenna. We determined that both strains and their hybrids have swapped the neuronal identity of the pheromone-sensitive neurons co-housed within a single sensillum. Furthermore, neurons that mediate behavioral antagonism surprisingly co-express up to five pheromone receptors, mirroring the concordantly broad tuning to heterospecific pheromones. This appears to be a possible evolutionary adaptation that could prevent cross-attraction to a range of heterospecific signals, while keeping the pheromone detection system in its simplest tripartite setup.

  10. Integer-encoded massively parallel processing of fast-learning fuzzy ARTMAP neural networks

    Science.gov (United States)

    Bahr, Hubert A.; DeMara, Ronald F.; Georgiopoulos, Michael

    1997-04-01

    In this paper we develop techniques that are suitable for the parallel implementation of Fuzzy ARTMAP networks. Speedup and learning performance results are provided for execution on a DECmpp/Sx-1208 parallel processor consisting of a DEC RISC Workstation Front-End and MasPar MP-1 Back-End with 8,192 processors. Experiments with the parallel implementation were conducted on the Letters benchmark database developed by Frey and Slate. The results indicate a speedup on the order of 1000-fold, which allows a combined training and testing time of under four minutes.
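The inner loop that parallelizes naturally across category nodes is the Fuzzy ART choice/match/learn cycle on which ARTMAP is built. A minimal single-module sketch with complement coding and fast learning (beta = 1) follows; the parameters and patterns are illustrative, and the massively parallel, integer-encoded aspects of the paper are not reproduced:

```python
# Fuzzy ART category search with complement coding and fast learning.
# Choice:  T_j = |I ^ w_j| / (alpha + |w_j|)   (^ = componentwise min, |.| = sum)
# Match:   |I ^ w_j| / |I| >= rho (vigilance), else try the next-best category.
ALPHA, RHO = 0.01, 0.75

def cc(x):                       # complement coding keeps |I| constant (= len(x))
    return x + [1.0 - v for v in x]

def fmin(a, b):
    return [min(p, q) for p, q in zip(a, b)]

def present(I, weights):
    """Return the index of the chosen category, committing a new one if needed."""
    cands = weights + [[1.0] * len(I)]              # committed + one uncommitted node
    order = sorted(range(len(cands)),
                   key=lambda j: -sum(fmin(I, cands[j])) / (ALPHA + sum(cands[j])))
    for j in order:                                  # search in choice order
        if sum(fmin(I, cands[j])) / sum(I) >= RHO:   # vigilance test
            if j == len(weights):
                weights.append(I[:])                 # commit the uncommitted node
            else:
                weights[j] = fmin(I, cands[j])       # fast learning (beta = 1)
            return j
    raise RuntimeError("no category passed vigilance")

w = []                                   # no committed categories yet
A, B = cc([0.8, 0.2]), cc([0.1, 0.9])
present(A, w); present(B, w)             # two distinct categories form
```

The choice values T_j for all category nodes are mutually independent, which is why they map cleanly onto one processor per node on SIMD hardware like the MasPar.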

  11. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    Science.gov (United States)

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for facilitating data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.
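The data-parallel pattern described above reduces to a simple core: map each shard of training data to a partial gradient, reduce by summing, and apply one update. A toy sketch with a one-parameter linear model stands in for the per-shard backpropagation the paper distributes across mappers; the model, data, and learning rate are illustrative assumptions:

```python
# MapReduce-style data-parallel gradient descent on y = w*x.
from functools import reduce

data = [(x, 3.0 * x) for x in range(10)]            # true slope = 3
shards = [data[i::3] for i in range(3)]             # inputs for 3 "mappers"

def mapper(w, shard):
    """Partial gradient of squared error over one shard: sum of 2*x*(w*x - y)."""
    g = sum(2.0 * x * (w * x - y) for x, y in shard)
    return g, len(shard)

w = 0.0
for step in range(200):
    partials = [mapper(w, s) for s in shards]                         # map
    g, n = reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]), partials)  # reduce
    w -= 0.01 * g / n                                                 # update
```

Because the loss gradient is a sum over examples, the reduce step recovers exactly the gradient a single machine would compute, so accuracy is unchanged while the map step scales out with the data volume.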

  12. The chronotron: a neuron that learns to fire temporally precise spike patterns.

    Directory of Open Access Journals (Sweden)

    Răzvan V Florian

    Full Text Available In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons): one that provides high memory capacity (E-learning) and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli, or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
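The spirit of the I-learning rule (weight changes proportional to the synaptic traces at the timings of target and actual output spikes) can be sketched for a single-spike toy neuron. This is a schematic reconstruction, not the paper's exact rule; the kernel, input timings, and constants are all assumptions:

```python
# Toy chronotron-style training: push weights up in proportion to each synapse's
# trace at the target spike time, and down in proportion to its trace at the
# actual spike time, so the output spike drifts toward the target timing.
import math

TAU, THETA, LR, T = 10.0, 1.0, 0.1, 50
in_times = [5, 12, 18, 26, 33]           # one input spike per afferent (assumed)
target = 30                               # desired output spike time

def trace(t, t_in):                       # exponential PSP-like kernel
    return math.exp(-(t - t_in) / TAU) if t >= t_in else 0.0

def first_spike(w):
    for t in range(T):
        if sum(wj * trace(t, tj) for wj, tj in zip(w, in_times)) >= THETA:
            return t                      # single-spike regime: stop at threshold
    return None

w = [0.1] * len(in_times)
history = []
for epoch in range(300):
    t_a = first_spike(w)
    history.append(t_a)
    for j, tj in enumerate(in_times):
        w[j] += LR * trace(target, tj)            # pull toward the target time
        if t_a is not None:
            w[j] -= LR * trace(t_a, tj)           # push away from the actual time
```

When no output spike occurs, only the positive term acts and the weights grow until the neuron fires; once it fires, the two terms cancel exactly when the actual spike lands on the target time, which is the rule's fixed point.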

  13. PARALLEL SELF-ORGANIZING MAP

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    A new self-organizing map, the parallel self-organizing map (PSOM), was proposed for parallel information processing. In this model, there are two separate layers of neurons connected together; the number of neurons in each layer, and of the connections between them, equals the total number of elements of the input signals. Weight updating is managed through a sequence of operations on unitary transformation and operation matrices, so the conventional repeated learning procedure is reduced to learning just once, and an algorithm was developed to realize this new learning method. On a typical classification example, the performance of PSOM demonstrated convergence results similar to Kohonen's model. Theoretical analysis and proofs also showed some interesting properties of PSOM. As pointed out, the contribution of such a network may not be so significant, but its parallel mode may be interesting for quantum computation.

  14. Allopregnanolone-induced rise in intracellular calcium in embryonic hippocampal neurons parallels their proliferative potential

    Directory of Open Access Journals (Sweden)

    Brinton Roberta

    2008-12-01

    Full Text Available Abstract Background Factors that regulate intracellular calcium concentration are known to play a critical role in brain function and neural development, including neural plasticity and neurogenesis. We previously demonstrated that the neurosteroid allopregnanolone (APα; 5α-pregnan-3α-ol-20-one) promotes neural progenitor proliferation in vitro in cultures of rodent hippocampal and human cortical neural progenitors, and in vivo in the dentate gyrus of triple transgenic Alzheimer's disease mice. We also found that APα-induced proliferation of neural progenitors is abolished by a calcium channel blocker, nifedipine, indicating a calcium-dependent mechanism for the proliferation. Methods In the present study, we investigated the effect of APα on the regulation of intracellular calcium concentration in E18 rat hippocampal neurons using ratiometric Fura2-AM imaging. Results Results indicate that APα rapidly increased intracellular calcium concentration in a dose-dependent and developmentally regulated manner, with an EC50 of 110 ± 15 nM and a maximal response occurring at three days in vitro. The stereoisomers 3β-hydroxy-5α-hydroxy-pregnan-20-one and 3β-hydroxy-5β-hydroxy-pregnan-20-one, as well as progesterone, were without significant effect. The APα-induced intracellular calcium concentration increase was not observed in calcium-depleted medium and was blocked in the presence of the broad spectrum calcium channel blocker La3+, or the L-type calcium channel blocker nifedipine. Furthermore, the GABAA receptor blockers bicuculline and picrotoxin abolished the APα-induced intracellular calcium concentration rise. Conclusion Collectively, these data indicate that APα promotes a rapid, dose-dependent, stereo-specific, and developmentally regulated increase of intracellular calcium concentration in rat embryonic hippocampal neurons via a mechanism that requires both the GABAA receptor and L-type calcium channel. These data suggest that APα…

  15. Weaving the (neuronal) web: fear learning in spider phobia.

    Science.gov (United States)

    Schweckendiek, Jan; Klucken, Tim; Merz, Christian J; Tabbert, Katharina; Walter, Bertram; Ambach, Wolfgang; Vaitl, Dieter; Stark, Rudolf

    2011-01-01

    Theories of specific phobias consider classical conditioning as a central mechanism in the pathogenesis and maintenance of the disorder. Although the neuronal network underlying human fear conditioning is understood in considerable detail, no study to date has examined the neuronal correlates of fear conditioning directly in patients with specific phobias. Using functional magnetic resonance imaging (fMRI) we investigated conditioned responses using phobia-relevant and non-phobia-relevant unconditioned stimuli in patients with specific phobias (n=15) and healthy controls (n=14) by means of a differential picture-picture conditioning paradigm: three neutral geometric figures (conditioned stimuli) were followed by either pictures of spiders, highly aversive scenes or household items (unconditioned stimuli), respectively. Enhanced activations within the fear network (medial prefrontal cortex, anterior cingulate cortex, amygdala, insula and thalamus) were observed in response to the phobia-related conditioned stimulus. Further, spider phobic subjects displayed higher amygdala activation in response to the phobia-related conditioned stimulus than to the non-phobia-related conditioned stimulus. Moreover, no differences between patients and healthy controls emerged regarding the non-phobia-related conditioned stimulus. The results imply that learned phobic fear is based on exaggerated responses in structures belonging to the fear network and emphasize the importance of the amygdala in the processing of phobic fear. Further, altered responding of the fear network in patients was only observed in response to the phobia-related conditioned stimulus but not to the non-phobia-related conditioned stimulus, indicating no differences in general conditionability between patients with specific phobias and healthy controls. Copyright © 2010 Elsevier Inc. All rights reserved.

  16. The Mirror Neuron System and Observational Learning: Implications for the Effectiveness of Dynamic Visualizations

    Science.gov (United States)

    van Gog, Tamara; Paas, Fred; Marcus, Nadine; Ayres, Paul; Sweller, John

    2009-01-01

    Learning by observing and imitating others has long been recognized as constituting a powerful learning strategy for humans. Recent findings from neuroscience research, more specifically on the mirror neuron system, begin to provide insight into the neural bases of learning by observation and imitation. These findings are discussed here, along…

  17. Reconciling genetic evolution and the associative learning account of mirror neurons through data-acquisition mechanisms.

    Science.gov (United States)

    Lotem, Arnon; Kolodny, Oren

    2014-04-01

    An associative learning account of mirror neurons should not preclude genetic evolution of its underlying mechanisms. On the contrary, an associative learning framework for cognitive development should seek heritable variation in the learning rules and in the data-acquisition mechanisms that construct associative networks, demonstrating how small genetic modifications of associative elements can give rise to the evolution of complex cognition.

  18. Learning Enhances Intrinsic Excitability in a Subset of Lateral Amygdala Neurons

    Science.gov (United States)

    Sehgal, Megha; Ehlers, Vanessa L.; Moyer, James R., Jr.

    2014-01-01

    Learning-induced modulation of neuronal intrinsic excitability is a metaplasticity mechanism that can impact the acquisition of new memories. Although the amygdala is important for emotional learning and other behaviors, including fear and anxiety, whether learning alters intrinsic excitability within the amygdala has received very little…

  1. Statistical speech segmentation and word learning in parallel: scaffolding from child-directed speech

    Directory of Open Access Journals (Sweden)

    Daniel eYurovsky

    2012-10-01

    In order to acquire their native languages, children must learn richly structured systems with regularities at multiple levels. While structure at different levels could be learned serially, e.g. speech segmentation coming before word-object mapping, redundancies across levels make parallel learning more efficient. For instance, a series of syllables is likely to be a word not only because of high transitional probabilities, but also because of a consistently co-occurring object. But additional statistics require additional processing, and thus might not be useful to cognitively constrained learners. We show that the structure of child-directed speech makes this problem solvable for human learners. First, a corpus of child-directed speech was recorded from parents and children engaged in a naturalistic free-play task. Analyses revealed two consistent regularities in the sentence structure of naming events. These regularities were subsequently encoded in an artificial language to which adult participants were exposed in the context of simultaneous statistical speech segmentation and word learning. Either regularity was sufficient to support successful learning, but no learning occurred in the absence of both regularities. Thus, the structure of child-directed speech plays an important role in scaffolding speech segmentation and word learning in parallel.
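
    The segmentation half of this parallel-learning problem rests on syllable transitional probabilities, TP(B|A) = count(A followed by B) / count(A): TP is high inside a word and dips at word boundaries. The sketch below illustrates only that component, on a toy three-word lexicon in the style of classic artificial-language stimuli, not the study's actual child-directed corpus.

```python
"""Minimal sketch of statistical speech segmentation via transitional
probabilities. Toy lexicon only -- NOT the study's actual stimuli."""
import random
from collections import Counter

random.seed(0)
words = ["golabu", "tupiro", "bidaku"]          # each word = 3 two-letter syllables
stream = "".join(random.choice(words) for _ in range(200))  # pause-free 'speech'
syll = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pairs = Counter(zip(syll, syll[1:]))            # adjacent syllable bigrams
firsts = Counter(syll[:-1])

def tp(a, b):
    """Transitional probability of syllable b given preceding syllable a."""
    return pairs[(a, b)] / firsts[a]

# Within-word transitions are deterministic (TP = 1.0); at a word boundary
# any of the three words may follow, so TP drops toward 1/3 -- the dip a
# statistical learner can use to posit a word boundary.
print("within word:", tp("go", "la"), " across boundary:", round(tp("bu", "tu"), 2))
```

    A co-occurring object (the word-learning half) would add a second, redundant cue on top of these TP dips, which is exactly the redundancy the study exploits.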

  2. Parallel encoding of sensory history and behavioral preference during Caenorhabditis elegans olfactory learning

    Science.gov (United States)

    Cho, Christine E; Brueggemann, Chantal; L'Etoile, Noelle D; Bargmann, Cornelia I

    2016-01-01

    Sensory experience modifies behavior through both associative and non-associative learning. In Caenorhabditis elegans, pairing odor with food deprivation results in aversive olfactory learning, and pairing odor with food results in appetitive learning. Aversive learning requires nuclear translocation of the cGMP-dependent protein kinase EGL-4 in AWC olfactory neurons and an insulin signal from AIA interneurons. Here we show that the activity of neurons including AIA is acutely required during aversive, but not appetitive, learning. The AIA circuit and AGE-1, an insulin-regulated PI3 kinase, signal to AWC to drive nuclear enrichment of EGL-4 during conditioning. Odor exposure shifts the AWC dynamic range to higher odor concentrations regardless of food pairing or the AIA circuit, whereas AWC coupling to motor circuits is oppositely regulated by aversive and appetitive learning. These results suggest that non-associative sensory adaptation in AWC encodes odor history, while associative behavioral preference is encoded by altered AWC synaptic activity. DOI: http://dx.doi.org/10.7554/eLife.14000.001 PMID:27383131

  3. Olfactory-learning abilities are correlated with the rate by which intrinsic neuronal excitability is modulated in the piriform cortex.

    Science.gov (United States)

    Cohen-Matsliah, Sivan I; Rosenblum, Kobi; Barkai, Edi

    2009-10-01

    Long-lasting modulation of intrinsic neuronal excitability in cortical neurons underlies distinct stages of skill learning. However, whether individual differences in learning capabilities depend on the rate at which such learning-induced modifications occur has yet to be explored. Here we show that training rats in a simple olfactory-discrimination task results in the same enhanced excitability in piriform cortex neurons as previously shown after training in a much more complex olfactory-discrimination task. Based on their learning capabilities in the simple task, rats could be divided into two groups: fast performers and slow performers. The rate at which rats acquired the skill to perform the simple task was correlated with the time course at which piriform cortex neurons increased their repetitive spike firing. Twelve hours after learning, neurons from fast performers had reduced spike frequency adaptation as compared with neurons from slow performers and controls. Three days after learning, spike frequency adaptation was reduced in neurons from slow performers, while neurons from fast performers increased their spike firing adaptation to the level of controls. Accordingly, the post-burst AHP was reduced in neurons from fast performers 12 h after learning and in neurons from slow performers 3 days after learning. Moreover, the differences in learning capabilities between fast performers and slow performers were maintained when examined in a different, complex olfactory-discrimination task. We suggest that the rate at which neuronal excitability is modified during learning may affect the behavioral flexibility of the animal.

  4. The R package "sperrorest" : Parallelized spatial error estimation and variable importance assessment for geospatial machine learning

    Science.gov (United States)

    Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander

    2017-04-01

    Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their more widely used non-spatial equivalents. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation): The first one is parsperrorest(), a parallelized version of sperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and, depending on the platform, calls parallel::mclapply() or parallel::parApply() in the background. While forking is used on Unix systems, Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization. This method uses a different way of cluster parallelization than the parallel package does. In summary, the robustness of parsperrorest() is increased with the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv(). This function gives the user the
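
    The core idea — hold out whole spatial blocks rather than random points, and evaluate the folds in parallel — can be sketched in Python. This is an illustration of the concept only, not the R package's API; all helper names are hypothetical, and a real spatial workload would use a process pool rather than threads.

```python
"""Sketch of spatial cross-validation with parallel fold evaluation.
Illustration of the idea behind sperrorest/parsperrorest, NOT its API."""
from concurrent.futures import ThreadPoolExecutor

def spatial_blocks(points, cell=1.0):
    """Assign each (x, y) point to a coarse grid cell: nearby, spatially
    autocorrelated points land in the same fold instead of leaking across
    train/test splits."""
    return [(int(x // cell), int(y // cell)) for x, y in points]

def run_fold(args):
    """Hold out one block; 'train' a trivial mean predictor (standing in
    for any learner) on the rest and return the held-out MSE."""
    held, blocks, targets = args
    train = [t for b, t in zip(blocks, targets) if b != held]
    test = [t for b, t in zip(blocks, targets) if b == held]
    pred = sum(train) / len(train)
    return sum((t - pred) ** 2 for t in test) / len(test)

def parallel_spatial_cv(points, targets, cell=1.0, workers=4):
    blocks = spatial_blocks(points, cell)
    folds = sorted(set(blocks))
    jobs = [(held, blocks, targets) for held in folds]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(folds, pool.map(run_fold, jobs)))

pts = [(0.1, 0.2), (0.3, 0.4), (1.2, 0.1), (1.8, 0.9), (0.2, 1.5), (0.9, 1.1)]
ys = [1.0, 1.2, 3.0, 3.2, 5.0, 5.1]
scores = parallel_spatial_cv(pts, ys)   # one error estimate per spatial block
print(scores)
```

    Because blocks are held out whole, the error estimates are not flattered by spatially autocorrelated neighbors of the test points appearing in the training set.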

  5. Learning to like disgust: Neuronal correlates of counterconditioning

    Directory of Open Access Journals (Sweden)

    Jan eSchweckendiek

    2013-07-01

    Converging lines of research suggest that exaggerated disgust responses play a crucial role in the development and maintenance of certain anxiety disorders. One strategy that might effectively alter disgust responses is counterconditioning. In this study, we used functional magnetic resonance imaging (fMRI) to examine if the neuronal bases of disgust responses are altered through a counterconditioning procedure. One disgust picture (conditioned stimulus: CS+disg) announced a monetary reward, while a second disgust picture (CS-disg) was never paired with the reward. Two neutral control pictures (CS+con / CS-con) were conditioned in the same manner. Analyses of evaluative conditioning showed that both CS+ were rated significantly more positive after conditioning as compared to the corresponding CS-. Thereby, the CS+disg and the CS+con received an equal increase in valence ratings. Regarding the fMRI data, ANOVA results showed main effects of the conditioning procedure (i.e., CS+ vs. CS-) in the dorsal anterior cingulate cortex. Further, main effects of the picture category (disgust vs. control) were found in the bilateral insula and the orbitofrontal cortex. No interaction effects were detected. In conclusion, the results imply that learning and anticipation of reward was not significantly influenced by the disgust content of the CS pictures. This suggests that the affect induced by the disgust pictures and the affect created by the anticipation of reward may not influence the processing of each other.

  6. Genetic approaches to the molecular/neuronal mechanisms underlying learning and memory in the mouse.

    Science.gov (United States)

    Nakajima, Akira; Tang, Ya-Ping

    2005-09-01

    Learning and memory is an essential component of human intelligence. Understanding its underlying molecular and neuronal mechanisms is currently a major focus in the field of cognitive neuroscience. We have employed advanced mouse genetic approaches to analyze the molecular and neuronal bases of learning and memory, and our results showed that brain region-specific genetic manipulations (including transgenic and knockout), inducible/reversible knockout, genetic/chemical kinase inactivation, and neuronal-based genetic approaches are very powerful tools for studying the involvement of various molecules or neuronal substrates in the processes of learning and memory. Studies using these techniques may eventually lead to an understanding of how new information is acquired and how learned information is memorized in the brain.

  7. Direct and crossed effects of somatosensory electrical stimulation on motor learning and neuronal plasticity in humans

    NARCIS (Netherlands)

    Veldman, M. P.; Zijdewind, I.; Solnik, S.; Maffiuletti, N. A.; Berghuis, K. M. M.; Javet, M.; Negyesi, J.; Hortobagyi, T.

    2015-01-01

    Purpose Sensory input can modify voluntary motor function. We examined whether somatosensory electrical stimulation (SES) added to motor practice (MP) could augment motor learning, interlimb transfer, and whether physiological changes in neuronal excitability underlie these changes. Methods Particip

  8. CAMKII activation is not required for maintenance of learning-induced enhancement of neuronal excitability.

    Directory of Open Access Journals (Sweden)

    Ori Liraz

    Pyramidal neurons in the piriform cortex from olfactory-discrimination trained rats show enhanced intrinsic neuronal excitability that lasts for several days after learning. Such enhanced intrinsic excitability is mediated by long-term reduction in the post-burst after-hyperpolarization (AHP), which is generated by repetitive spike firing. AHP reduction is due to decreased conductance of a calcium-dependent potassium current, the sI(AHP). We have previously shown that learning-induced AHP reduction is maintained by persistent protein kinase C (PKC) and extracellular regulated kinase (ERK) activation. However, the molecular machinery underlying this long-lasting modulation of intrinsic excitability is yet to be fully described. Here we examine whether CaMKII, which is known to be crucial in learning, memory and synaptic plasticity processes, is instrumental for the maintenance of learning-induced AHP reduction. KN93, which selectively blocks CaMKII autophosphorylation at Thr286, reduced the AHP in neurons from trained and control rats to the same extent. Consequently, the differences in AHP amplitude and neuronal adaptation between neurons from trained rats and controls remained. Accordingly, the level of activated CaMKII was similar in piriform cortex samples taken from trained and control rats. Our data show that although CaMKII modulates the amplitude of the AHP of pyramidal neurons in the piriform cortex, its activation is not required for maintaining learning-induced enhancement of neuronal excitability.

  9. CAMKII activation is not required for maintenance of learning-induced enhancement of neuronal excitability.

    Science.gov (United States)

    Liraz, Ori; Rosenblum, Kobi; Barkai, Edi

    2009-01-01

    Pyramidal neurons in the piriform cortex from olfactory-discrimination trained rats show enhanced intrinsic neuronal excitability that lasts for several days after learning. Such enhanced intrinsic excitability is mediated by long-term reduction in the post-burst after-hyperpolarization (AHP) which is generated by repetitive spike firing. AHP reduction is due to decreased conductance of a calcium-dependent potassium current, the sI(AHP). We have previously shown that learning-induced AHP reduction is maintained by persistent protein kinase C (PKC) and extracellular regulated kinase (ERK) activation. However, the molecular machinery underlying this long-lasting modulation of intrinsic excitability is yet to be fully described. Here we examine whether CaMKII, which is known to be crucial in learning, memory and synaptic plasticity processes, is instrumental for the maintenance of learning-induced AHP reduction. KN93, which selectively blocks CaMKII autophosphorylation at Thr286, reduced the AHP in neurons from trained and control rats to the same extent. Consequently, the differences in AHP amplitude and neuronal adaptation between neurons from trained rats and controls remained. Accordingly, the level of activated CaMKII was similar in piriform cortex samples taken from trained and control rats. Our data show that although CaMKII modulates the amplitude of the AHP of pyramidal neurons in the piriform cortex, its activation is not required for maintaining learning-induced enhancement of neuronal excitability.

  10. Habit Learning by Naive Macaques Is Marked by Response Sharpening of Striatal Neurons Representing the Cost and Outcome of Acquired Action Sequences.

    Science.gov (United States)

    Desrochers, Theresa M; Amemori, Ken-ichi; Graybiel, Ann M

    2015-08-19

    Over a century of scientific work has focused on defining the factors motivating behavioral learning. Observations in animals and humans trained on a wide range of tasks support reinforcement learning (RL) algorithms as accounting for the learning. Still unknown, however, are the signals that drive learning in naive, untrained subjects. Here, we capitalized on a sequential saccade task in which macaque monkeys acquired repetitive scanning sequences without instruction. We found that spike activity in the caudate nucleus after each trial corresponded to an integrated cost-benefit signal that was highly correlated with the degree of naturalistic untutored learning by the monkeys. Across learning, neurons encoding both cost and outcome gradually acquired increasingly sharp phasic trial-end responses that paralleled the development of the habit-like, repetitive saccade sequences. Our findings demonstrate an integrated cost-benefit signal by which RL and its neural correlates could drive naturalistic behaviors in freely behaving primates.

  11. A neuronal learning rule for sub-millisecond temporal coding

    OpenAIRE

    Gerstner, W.; Kempter, R.; van Hemmen, J. Leo; Wagner, H.

    1996-01-01

    An unresolved paradox exists in auditory and electrosensory neural systems (Carr 1993; Heiligenberg 1991): they encode behaviourally relevant signals in the range of a few microseconds with neurons that are at least one order of magnitude slower. We take the barn owl's auditory system as an example and present a modeling study based on computer simulations of a neuron in the laminar nucleus. Three observations resolve the paradox. First, spiking of an integrate-and-fire neuron driven by excitatory p...

  12. A neuron model with trainable activation function (TAF) and its MFNN supervised learning

    Institute of Scientific and Technical Information of China (English)

    吴佑寿; 赵明生

    2001-01-01

    This paper addresses a new kind of neuron model, which has a trainable activation function (TAF) in addition to the trainable weights of the conventional M-P model. The final neuron activation function can be derived from a primitive neuron activation function by training. A BP-like learning algorithm is presented for MFNNs constructed from neurons of the TAF model. Several simulation examples are given to show the network capacity and performance advantages of the new MFNN in comparison with those of a conventional sigmoid MFNN.
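
    The TAF idea can be sketched numerically: train the ordinary weight and bias jointly with the shape of the activation itself. The sketch below assumes the trainable activation is a mixture of Gaussian bumps with learnable heights — an illustrative assumption, not the paper's exact model or its BP derivation.

```python
"""Toy neuron with a trainable activation function (TAF): both the usual
weight/bias and the activation's shape are learned by gradient descent.
The Gaussian-bump parameterization is an assumption of this sketch."""
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 64)
y = np.sin(x)                        # a target no fixed sigmoid neuron can match

centers = np.linspace(-4, 4, 9)      # fixed bump centers; their heights are trained
w, b = 1.0, 0.0                      # conventional trainable weight and bias
c = rng.normal(0.0, 0.1, centers.size)   # trainable activation coefficients

def forward(w, b, c):
    u = w * x + b                                 # pre-activation
    phi = np.exp(-(u[:, None] - centers) ** 2)    # Gaussian basis of f(u)
    return u, phi, phi @ c                        # output y_hat = f(u)

loss0 = np.mean((forward(w, b, c)[2] - y) ** 2)
lr = 0.05
for _ in range(2000):
    u, phi, y_hat = forward(w, b, c)
    err = y_hat - y
    grad_c = phi.T @ err * (2 / x.size)               # loss is quadratic in c
    dfdu = (-2 * (u[:, None] - centers) * phi) @ c    # df/du of the learned f
    grad_w = np.mean(2 * err * dfdu * x)              # chain rule through f
    grad_b = np.mean(2 * err * dfdu)
    c -= lr * grad_c
    w -= 0.1 * lr * grad_w
    b -= 0.1 * lr * grad_b
loss1 = np.mean((forward(w, b, c)[2] - y) ** 2)
print(round(loss0, 4), round(loss1, 4))
```

    A fixed-activation single neuron cannot fit a non-monotonic target like sin(x); letting the activation shape itself be trained is what makes the fit possible.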

  13. Asynchronous cellular automaton-based neuron: theoretical analysis and on-FPGA learning.

    Science.gov (United States)

    Matsubara, Takashi; Torikai, Hiroyuki

    2013-05-01

    A generalized asynchronous cellular automaton-based neuron model is a special kind of cellular automaton that is designed to mimic the nonlinear dynamics of neurons. The model can be implemented as an asynchronous sequential logic circuit and its control parameter is the pattern of wires among the circuit elements that is adjustable after implementation in a field-programmable gate array (FPGA) device. In this paper, a novel theoretical analysis method for the model is presented. Using this method, stabilities of neuron-like orbits and occurrence mechanisms of neuron-like bifurcations of the model are clarified theoretically. Also, a novel learning algorithm for the model is presented. An equivalent experiment shows that an FPGA-implemented learning algorithm enables an FPGA-implemented model to automatically reproduce typical nonlinear responses and occurrence mechanisms observed in biological and model neurons.

  14. Maximization of learning speed in the motor cortex due to neuronal redundancy.

    Directory of Open Access Journals (Sweden)

    Ken Takiyama

    2012-01-01

    Many redundancies play functional roles in motor control and motor learning. For example, kinematic and muscle redundancies contribute to stabilizing posture and impedance control, respectively. Another redundancy is the number of neurons themselves; there are overwhelmingly more neurons than muscles, and many combinations of neural activation can generate identical muscle activity. The functional roles of this neuronal redundancy remain unknown. Analysis of a redundant neural network model makes it possible to investigate these functional roles while varying the number of model neurons and holding constant the number of output units. Our analysis reveals that learning speed reaches its maximum value if and only if the model includes sufficient neuronal redundancy. This analytical result does not depend on whether the distribution of the preferred direction is uniform or skewed bimodal, both of which have been reported in neurophysiological studies. Neuronal redundancy maximizes learning speed, even if the neural network model includes recurrent connections, a nonlinear activation function, or nonlinear muscle units. Furthermore, our results do not rely on the shape of the generalization function. The results of this study suggest that one of the functional roles of neuronal redundancy is to maximize learning speed.

  15. Direct Neuronal Reprogramming for Disease Modeling Studies Using Patient-Derived Neurons: What Have We Learned?

    Directory of Open Access Journals (Sweden)

    Janelle Drouin-Ouellet

    2017-09-01

    Direct neuronal reprogramming, by which a neuron is formed via direct conversion from a somatic cell without going through a pluripotent intermediate stage, allows for the possibility of generating patient-derived neurons. A unique feature of these so-called induced neurons (iNs) is the potential to maintain aging and epigenetic signatures of the donor, which is critical given that many diseases of the CNS are age related. Here, we review the published literature on the work that has been undertaken using iNs to model human brain disorders. Furthermore, as disease-modeling studies using this direct neuronal reprogramming approach are becoming more widely adopted, it is important to assess the criteria that are used to characterize the iNs, especially in relation to the extent to which they are mature adult neurons. In particular: (i) what constitutes an iN cell, (ii) which stages of conversion offer the earliest/optimal time to assess features that are specific to neurons and/or a disorder, and (iii) whether generating subtype-specific iNs is critical to the disease-related features that iNs express. Finally, we discuss the range of potential biomedical applications that can be explored using patient-specific models of neurological disorders with iNs, and the challenges that will need to be overcome in order to realize these applications.

  16. [The effects of SO2 on electric activity learning and memory of rat hippocampal neurons].

    Science.gov (United States)

    Liu, Xiaoli; Yang, Dongsheng; Meng, Ziqiang

    2008-11-01

    To study the toxicological mechanism of SO2 on the central nervous system by electrophysiological methods, male SD rats were housed in exposure chambers and treated with 28 mg/m3 SO2 for 7 days (6 h/d), while control rats were treated with filtered air under the same conditions. Using in vivo glass micro-electrode recording, the frequencies and numbers of spontaneous discharges in hippocampal CA1 neurons were measured. Effects on learning and memory function were measured by establishing a passive avoidance behavioral reflex. SO2 significantly decreased the spontaneous discharge frequency and prolonged the spontaneous discharge period of hippocampal CA1 neurons. SO2 also significantly impaired the learning and memory function of the rats. These results indicate that SO2 could be a neurotoxin: it can inhibit the excitability of hippocampal neurons and affect learning and memory in rats.

  17. Hebbian learning and predictive mirror neurons for actions, sensations and emotions.

    Science.gov (United States)

    Keysers, Christian; Gazzola, Valeria

    2014-01-01

    Spike-timing-dependent plasticity is considered the neurophysiological basis of Hebbian learning and has been shown to be sensitive to both contingency and contiguity between pre- and postsynaptic activity. Here, we will examine how applying this Hebbian learning rule to a system of interconnected neurons in the presence of direct or indirect re-afference (e.g. seeing/hearing one's own actions) predicts the emergence of mirror neurons with predictive properties. In this framework, we analyse how mirror neurons become a dynamic system that performs active inferences about the actions of others and allows joint actions despite sensorimotor delays. We explore how this system performs a projection of the self onto others, with egocentric biases to contribute to mind-reading. Finally, we argue that Hebbian learning predicts mirror-like neurons for sensations and emotions and review evidence for the presence of such vicarious activations outside the motor system.
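
    The article's core argument — that re-afference plus a timing-sensitive Hebbian rule yields *predictive* mirror connections — can be sketched with pairwise STDP. The setup below is a toy of my own construction, not the authors' model: motor stage k fires at t = 100k ms, the sight of that movement reaches visual neurons ~40 ms later, so visual neuron v_j fires just *before* the next motor stage m_(j+1), and STDP potentiates exactly those predictive visual-to-motor synapses.

```python
"""Toy STDP sketch of predictive mirror-connection formation through
self-observation (re-afference). Parameters and setup are illustrative
assumptions, not the article's model."""
import math

A_PLUS, A_MINUS, TAU = 0.01, 0.01, 50.0   # STDP amplitudes and time constant (ms)
DELAY = 40.0                               # re-afference (visual feedback) delay
W = [[0.0] * 3 for _ in range(3)]          # W[j][k]: synapse v_j -> m_k

def stdp(dt):
    """Pairwise STDP: potentiate if pre precedes post (dt > 0), else depress."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

for trial in range(200):                   # repeated self-observation trials
    motor = [100.0 * k for k in range(3)]      # motor-stage spike times m_k
    visual = [t + DELAY for t in motor]        # lagged visual spikes v_j
    for j, t_pre in enumerate(visual):
        for k, t_post in enumerate(motor):
            W[j][k] += stdp(t_post - t_pre)

# v_0's strongest synapse targets m_1: seeing stage 0 of an action comes to
# activate the motor program for stage 1, i.e., a predictive mirror response.
print([round(v, 3) for v in W[0]])
```

    The same-stage synapse v_0 -> m_0 is depressed (pre follows post), which is why the emergent mirror mapping is predictive rather than merely simultaneous.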

  18. Immunohistochemical visualization of hippocampal neuron activity after spatial learning in a mouse model of neurodevelopmental disorders.

    Science.gov (United States)

    Provenzano, Giovanni; Pangrazzi, Luca; Poli, Andrea; Berardi, Nicoletta; Bozzi, Yuri

    2015-05-12

    Induction of phosphorylated extracellular-regulated kinase (pERK) is a reliable molecular readout of learning-dependent neuronal activation. Here, we describe a pERK immunohistochemistry protocol to study the profile of hippocampal neuron activation following exposure to a spatial learning task in a mouse model characterized by cognitive deficits of neurodevelopmental origin. Specifically, we used pERK immunostaining to study neuronal activation following the Morris water maze (MWM, a classical hippocampal-dependent learning task) in Engrailed-2 knockout (En2(-/-)) mice, a model of autism spectrum disorders (ASD). As compared to wild-type (WT) controls, En2(-/-) mice showed significant spatial learning deficits in the MWM. After MWM, significant differences in the number of pERK-positive neurons were detected in specific hippocampal subfields of En2(-/-) mice, as compared to WT animals. Thus, our protocol can robustly detect differences in pERK-positive neurons associated with hippocampal-dependent learning impairment in a mouse model of ASD. More generally, our protocol can be applied to investigate the profile of hippocampal neuron activation in genetic or pharmacological mouse models characterized by cognitive deficits.

  19. Neuronal mechanisms of motor learning and motor memory consolidation in healthy old adults

    NARCIS (Netherlands)

    Berghuis, K. M. M.; Veldman, M. P.; Solnik, S.; Koch, G.; Zijdewind, I.; Hortobagyi, T.

    2015-01-01

    It is controversial whether or not old adults are capable of learning new motor skills and consolidate the performance gains into motor memory in the offline period. The underlying neuronal mechanisms are equally unclear. We determined the magnitude of motor learning and motor memory consolidation i

  20. VTA GABA neurons modulate specific learning behaviours through the control of dopamine and cholinergic systems

    Directory of Open Access Journals (Sweden)

    Meaghan C Creed

    2014-01-01

    The mesolimbic reward system is primarily comprised of the ventral tegmental area (VTA) and the nucleus accumbens (NAc), as well as their afferent and efferent connections. This circuitry is essential for learning about stimuli associated with motivationally-relevant outcomes. Moreover, addictive drugs affect and remodel this system, which may underlie their addictive properties. In addition to DA neurons, the VTA also contains approximately 30% ɣ-aminobutyric acid (GABA) neurons. The task of signalling both rewarding and aversive events from the VTA to the NAc has mostly been ascribed to DA neurons, and the role of GABA neurons has been largely neglected until recently. GABA neurons provide local inhibition of DA neurons and also long-range inhibition of projection regions, including the NAc. Here we review studies using a combination of in vivo and ex vivo electrophysiology, pharmacogenetic and optogenetic manipulations that have characterized the functional neuroanatomy of inhibitory circuits in the mesolimbic system, and describe how GABA neurons of the VTA regulate reward- and aversion-related learning. We also discuss pharmacogenetic manipulation of this system with benzodiazepines (BDZs), a class of addictive drugs which act directly on GABAA receptors located on GABA neurons of the VTA. The results gathered with each of these approaches suggest that VTA GABA neurons bi-directionally modulate the activity of local DA neurons, underlying reward or aversion at the behavioural level. Conversely, long-range GABA projections from the VTA to the NAc selectively target cholinergic interneurons (CINs) to pause their firing and temporarily reduce cholinergic tone in the NAc, which modulates associative learning. Further characterization of inhibitory circuit function within and beyond the VTA is needed in order to fully understand the function of the mesolimbic system under normal and pathological conditions.

  1. Spiking neurons can discover predictive features by aggregate-label learning.

    Science.gov (United States)

    Gütig, Robert

    2016-03-01

    The brain routinely discovers sensory clues that predict opportunities or dangers. However, it is unclear how neural learning processes can bridge the typically long delays between sensory clues and behavioral outcomes. Here, I introduce a learning concept, aggregate-label learning, that enables biologically plausible model neurons to solve this temporal credit assignment problem. Aggregate-label learning matches a neuron's number of output spikes to a feedback signal that is proportional to the number of clues but carries no information about their timing. Aggregate-label learning outperforms stochastic reinforcement learning at identifying predictive clues and is able to solve unsegmented speech-recognition tasks. Furthermore, it allows unsupervised neural networks to discover reoccurring constellations of sensory features even when they are widely dispersed across space and time.
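
    The aggregate-label setting can be sketched as follows: the only feedback is the *number* of clues in a long input stream, never their timing. In the sketch below the neuron's spike count is made differentiable with a sigmoid surrogate so that a count-matching loss can be minimized by gradient descent — an illustrative stand-in for the paper's spiking-neuron learning rule, not the rule itself.

```python
"""Toy aggregate-label learning: match a soft spike count to a count-only
feedback signal. The smooth surrogate is an assumption of this sketch."""
import numpy as np

rng = np.random.default_rng(1)
n_sym, clue, theta, T = 6, 2, 0.5, 0.25   # symbols, clue id, threshold, softness
w = np.zeros(n_sym)                        # one weight per input symbol

def make_trial():
    """A stream of distractor events plus 0-3 clue events; the aggregate
    label is just the clue count, with no timing information."""
    label = int(rng.integers(0, 4))
    events = [int(s) for s in rng.integers(0, n_sym, 8) if s != clue]
    return events + [clue] * label, label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(5000):
    seq, label = make_trial()
    if not seq:
        continue
    p = sigmoid((w[seq] - theta) / T)      # soft per-event spike probability
    err = p.sum() - label                  # aggregate spike-count mismatch
    grad = err * p * (1 - p) / T           # d(0.5*err^2)/dw at each event
    np.add.at(w, seq, -lr * grad)          # repeated symbols accumulate

print(np.round(w, 2))
```

    Although no trial ever says *which* events were clues, only the clue symbol's weight is consistently correlated with the count feedback, so it alone ends up above the spiking threshold.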

  2. Dictionary Learning Based on Nonnegative Matrix Factorization Using Parallel Coordinate Descent

    Directory of Open Access Journals (Sweden)

    Zunyi Tang

    2013-01-01

    Sparse representation of signals via an overcomplete dictionary has recently received much attention, as it has produced promising results in various applications. Since nonnegativity of the signals and the dictionary is required in some applications, for example multispectral data analysis, conventional dictionary learning methods with a simple nonnegativity constraint imposed may become inapplicable. In this paper, we propose a novel method for learning a nonnegative, overcomplete dictionary for such a case. This is accomplished by posing the sparse representation of nonnegative signals as a problem of nonnegative matrix factorization (NMF) with a sparsity constraint. By employing the coordinate descent strategy for optimization and extending it to the multivariable case for parallel processing, we develop a so-called parallel coordinate descent dictionary learning (PCDDL) algorithm, which is structured around iteratively solving two optimization problems: learning the dictionary and estimating the coefficients for reconstructing the signals. Numerical experiments demonstrate that the proposed algorithm performs better than the conventional nonnegative K-SVD (NN-KSVD) algorithm and several other algorithms used for comparison. Moreover, its computational cost is remarkably lower than that of the compared algorithms.
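
    The underlying problem — X ≈ DS with D, S ≥ 0 and S sparse, solved by alternating between a coding step and a dictionary step — can be sketched with classic multiplicative NMF updates carrying an l1 penalty. This is a simple stand-in for PCDDL, not the paper's algorithm: in PCDDL the coordinate updates themselves are parallelized, whereas here vectorized numpy plays that role.

```python
"""Sketch of nonnegative, sparse dictionary learning as penalized NMF.
Multiplicative updates stand in for the paper's parallel coordinate descent."""
import numpy as np

rng = np.random.default_rng(0)
m, n, k, lam, eps = 20, 100, 8, 0.1, 1e-9
# synthetic nonnegative signals with sparse ground-truth codes
X = rng.random((m, k)) @ (rng.random((k, n)) * (rng.random((k, n)) < 0.3))

D = rng.random((m, k)) + 0.1               # dictionary (nonnegative init)
S = rng.random((k, n)) + 0.1               # sparse codes
err0 = np.linalg.norm(X - D @ S)
for _ in range(200):
    S *= (D.T @ X) / (D.T @ D @ S + lam + eps)   # sparse coding step (l1-penalized)
    D *= (X @ S.T) / (D @ S @ S.T + eps)         # dictionary update step
    scale = D.sum(axis=0, keepdims=True) + eps   # renormalize atoms, and
    D /= scale                                   # rescale S to compensate,
    S *= scale.T                                 # so D @ S is unchanged
err1 = np.linalg.norm(X - D @ S)
print(round(err0, 2), round(err1, 2))
```

    The two alternating steps mirror the paper's structure (dictionary learning vs. coefficient estimation); the multiplicative form keeps both factors nonnegative by construction.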

  3. Fully Parallel Self-Learning Analog Support Vector Machine Employing Compact Gaussian Generation Circuits

    Science.gov (United States)

    Zhang, Renyuan; Shibata, Tadashi

    2012-04-01

    An analog support vector machine (SVM) processor employing fully parallel self-learning circuitry was developed for the classification of highly dimensional patterns. To implement a highly dimensional Gaussian function, which is the most powerful kernel function in classification algorithms but is computationally expensive, a compact analog Gaussian generation circuit was developed. By employing this proposed Gaussian generation circuit, a fully parallel self-learning processor based on an SVM algorithm was built for 64-dimensional pattern classification. The chip real estate occupied by the processor is very small. Object images from two classes were converted into 64-dimensional vectors using the algorithm developed in a previous work and fed into the processor. The learning process proceeded autonomously without any clock-based control and self-converged within a single clock cycle of the system (at 10 MHz). Some test object images were used to verify the learning performance. According to the circuit simulation results, all the test images were classified into correct classes in real time. A proof-of-concept chip was designed in a 0.18 µm complementary metal-oxide-semiconductor (CMOS) technology, and the performance of the proposed SVM processor was confirmed from measurement results of the fabricated chips.

  4. Autism and the mirror neuron system: insights from learning and teaching

    OpenAIRE

    2014-01-01

    Individuals with autism have difficulties in social learning domains which typically involve mirror neuron system (MNS) activation. However, the precise role of the MNS in the development of autism and its relevance to treatment remain unclear. In this paper, we argue that three distinct aspects of social learning are critical for advancing knowledge in this area: (i) the mechanisms that allow for the implicit mapping of and learning from others' behaviour, (ii) the motivation to attend to an...

  5. Learning dynamics of a single polar variable complex-valued neuron.

    Science.gov (United States)

    Nitta, Tohru

    2015-05-01

    This letter investigates the characteristics of the complex-valued neuron model with parameters represented by polar coordinates (called polar variable complex-valued neuron). The parameters of the polar variable complex-valued neuron are unidentifiable. The plateau phenomenon can occur during learning of the polar variable complex-valued neuron. Furthermore, computer simulations suggest that a single polar variable complex-valued neuron has the following characteristics in the case of using the steepest gradient-descent method with square error: (1) unidentifiable parameters (singular points) degrade the learning speed and (2) a plateau can occur during learning. When the weight is attracted to the singular point, the learning tends to become stuck. However, computer simulations also show that the steepest gradient-descent method with amplitude-phase error and the complex-valued natural gradient method could reduce the effects of the singular points. The learning dynamics near singular points depends on the error functions and the training algorithms used.
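The learning dynamics described above can be illustrated with steepest descent on the squared error of a single polar-parameter complex weight. This sketch uses a bare linear neuron y = r·e^{iθ}·x for simplicity (the letter's model and error functions are richer); note that at r = 0 the phase gradient vanishes, which is the singular point discussed above.

```python
import numpy as np

def polar_neuron_gd(x, t, r0, th0, eta=0.1, steps=200):
    """Steepest descent on E = 0.5*|r*exp(i*th)*x - t|^2 for a single
    polar-coordinate complex-valued weight (illustrative sketch only).
    At r = 0 the phase th is unidentifiable: dE/dth vanishes there."""
    r, th = r0, th0
    for _ in range(steps):
        u = np.exp(1j * th) * x
        err = r * u - t
        dr = (np.conj(err) * u).real              # dE/dr
        dth = (np.conj(err) * 1j * r * u).real    # dE/dth, ~0 as r -> 0
        r -= eta * dr
        th -= eta * dth
    return r, th
```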

  6. Neuron as an emotion-modulated combinatorial switch, and a model of human and animal learning behavior

    CERN Document Server

    Rvachev, Marat M

    2013-01-01

    This theoretical paper proposes a neuronal circuitry layout and synaptic plasticity principles that allow the (pyramidal) neuron to act as a combinatorial switch, whereby the neuron learns to be more prone to generate spikes given those combinations of firing input neurons for which a previous spiking of the neuron had been followed by positive emotional response; the emotional response, it is posited, is mediated by certain modulatory neurotransmitters or hormones. More generally, a trial-and-error learning paradigm is suggested in which the purpose of emotions is to trigger a mechanism of long-term enhancement or weakening of a neuron's spiking response to the preceding synaptic input firing pattern. Thus, emotions provide a feedback pathway that informs neurons whether their spiking was beneficial or detrimental given the combination of inputs. The neuron's ability to discern specific combinations of firing input neurons is achieved through random or predetermined spatial distribution of input synapses on ...

  7. Scalable Linear Visual Feature Learning via Online Parallel Nonnegative Matrix Factorization.

    Science.gov (United States)

    Zhao, Xueyi; Li, Xi; Zhang, Zhongfei; Shen, Chunhua; Zhuang, Yueting; Gao, Lixin; Li, Xuelong

    2016-12-01

    Visual feature learning, which aims to construct an effective feature representation for visual data, has a wide range of applications in computer vision. It is often posed as a problem of nonnegative matrix factorization (NMF), which constructs a linear representation for the data. Although NMF is typically parallelized for efficiency, traditional parallelization methods suffer from either an expensive computation or a high runtime memory usage. To alleviate this problem, we propose a parallel NMF method called alternating least square block decomposition (ALSD), which efficiently solves a set of conditionally independent optimization subproblems based on a highly parallelized fine-grained grid-based blockwise matrix decomposition. By assigning each block optimization subproblem to an individual computing node, ALSD can be effectively implemented in a MapReduce-based Hadoop framework. In order to cope with dynamically varying visual data, we further present an incremental version of ALSD, which is able to incrementally update the NMF solution with a low computational cost. Experimental results demonstrate the efficiency and scalability of the proposed methods as well as their applications to image clustering and image retrieval.

  8. Machine Learning Based Online Performance Prediction for Runtime Parallelization and Task Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Li, J; Ma, X; Singh, K; Schulz, M; de Supinski, B R; McKee, S A

    2008-10-09

    With the emerging many-core paradigm, parallel programming must extend beyond its traditional realm of scientific applications. Converting existing sequential applications as well as developing next-generation software requires assistance from hardware, compilers and runtime systems to exploit parallelism transparently within applications. These systems must decompose applications into tasks that can be executed in parallel and then schedule those tasks to minimize load imbalance. However, many systems lack a priori knowledge about the execution time of all tasks to perform effective load balancing with low scheduling overhead. In this paper, we approach this fundamental problem using machine learning techniques first to generate performance models for all tasks and then applying those models to perform automatic performance prediction across program executions. We also extend an existing scheduling algorithm to use generated task cost estimates for online task partitioning and scheduling. We implement the above techniques in the pR framework, which transparently parallelizes scripts in the popular R language, and evaluate their performance and overhead with both a real-world application and a large number of synthetic representative test scripts. Our experimental results show that our proposed approach significantly improves task partitioning and scheduling, with maximum improvements of 21.8%, 40.3% and 22.1% and average improvements of 15.9%, 16.9% and 4.2% for LMM (a real R application) and synthetic test cases with independent and dependent tasks, respectively.
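The two-stage idea, fit a per-task cost model, then use predicted costs for load-balanced scheduling, can be sketched compactly. This is a minimal illustration with a least-squares cost model and greedy longest-processing-time-first assignment; it is not the pR framework's actual model or scheduler, and the feature representation is hypothetical.

```python
import numpy as np

def fit_cost_model(features, runtimes):
    """Least-squares cost model (with intercept): a stand-in for the
    learned per-task performance models described above."""
    A = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(A, runtimes, rcond=None)
    return coef

def predict_cost(coef, f):
    """Predict a task's runtime from its feature vector."""
    return float(np.dot(coef[:-1], f) + coef[-1])

def schedule(costs, n_workers):
    """Greedy longest-processing-time-first assignment of tasks to
    workers using predicted costs, to reduce load imbalance."""
    loads = [0.0] * n_workers
    plan = [[] for _ in range(n_workers)]
    for i in sorted(range(len(costs)), key=lambda i: -costs[i]):
        w = loads.index(min(loads))   # least-loaded worker
        loads[w] += costs[i]
        plan[w].append(i)
    return plan, loads
```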

  9. Learning Precise Spike Train to Spike Train Transformations in Multilayer Feedforward Neuronal Networks

    OpenAIRE

    2014-01-01

    We derive a synaptic weight update rule for learning temporally precise spike train to spike train transformations in multilayer feedforward networks of spiking neurons. The framework, aimed at seamlessly generalizing error backpropagation to the deterministic spiking neuron setting, is based strictly on spike timing and avoids invoking concepts pertaining to spike rates or probabilistic models of spiking. The derivation is founded on two innovations. First, an error functional is proposed th...

  10. Coordinated activity of ventral tegmental neurons adapts to appetitive and aversive learning.

    Directory of Open Access Journals (Sweden)

    Yunbok Kim

Full Text Available Our understanding of how value-related information is encoded in the ventral tegmental area (VTA) is based mainly on the responses of individual putative dopamine neurons. In contrast to cortical areas, the nature of coordinated interactions between groups of VTA neurons during motivated behavior is largely unknown. These interactions can strongly affect information processing, highlighting the importance of investigating network-level activity. We recorded the activity of multiple single units and local field potentials (LFP) in the VTA during a task in which rats learned to associate novel stimuli with different outcomes. We found that coordinated activity of VTA units with either putative dopamine or GABA waveforms was influenced differently by rewarding versus aversive outcomes. Specifically, after learning, stimuli paired with a rewarding outcome increased the correlation in activity levels between unit pairs, whereas stimuli paired with an aversive outcome decreased the correlation. Paired single-unit responses also became more redundant after learning. These response patterns flexibly tracked the reversal of contingencies, suggesting that learning is associated with changing correlations and enhanced functional connectivity between VTA neurons. Analysis of LFP recorded simultaneously with unit activity showed an increase in the power of theta oscillations when stimuli predicted reward but not an aversive outcome. With learning, a higher proportion of putative GABA units than putative dopamine units were phase-locked to the theta oscillations. These patterns also adapted when task contingencies were changed. Taken together, these data demonstrate that VTA neurons organize flexibly as functional networks to support appetitive and aversive learning.

  11. Learning causes reorganization of neuronal firing patterns to represent related experiences within a hippocampal schema.

    Science.gov (United States)

    McKenzie, Sam; Robinson, Nick T M; Herrera, Lauren; Churchill, Jordana C; Eichenbaum, Howard

    2013-06-19

    According to schema theory as proposed by Piaget and Bartlett, learning involves the assimilation of new memories into networks of preexisting knowledge, as well as alteration of the original networks to accommodate the new information. Recent evidence has shown that rats form a schema of goal locations and that the hippocampus plays an essential role in adding new memories to the spatial schema. Here we examined the nature of hippocampal contributions to schema updating by monitoring firing patterns of multiple CA1 neurons as rats learned new goal locations in an environment in which there already were multiple goals. Before new learning, many neurons that fired on arrival at one goal location also fired at other goals, whereas ensemble activity patterns also distinguished different goal events, thus constituting a neural representation that linked distinct goals within a spatial schema. During new learning, some neurons began to fire as animals approached the new goals. These were primarily the same neurons that fired at original goals, the activity patterns at new goals were similar to those associated with the original goals, and new learning also produced changes in the preexisting goal-related firing patterns. After learning, activity patterns associated with the new and original goals gradually diverged, such that initial generalization was followed by a prolonged period in which new memories became distinguished within the ensemble representation. These findings support the view that consolidation involves assimilation of new memories into preexisting neural networks that accommodate relationships among new and existing memories.

  12. Learning Precise Spike Train-to-Spike Train Transformations in Multilayer Feedforward Neuronal Networks.

    Science.gov (United States)

    Banerjee, Arunava

    2016-05-01

    We derive a synaptic weight update rule for learning temporally precise spike train-to-spike train transformations in multilayer feedforward networks of spiking neurons. The framework, aimed at seamlessly generalizing error backpropagation to the deterministic spiking neuron setting, is based strictly on spike timing and avoids invoking concepts pertaining to spike rates or probabilistic models of spiking. The derivation is founded on two innovations. First, an error functional is proposed that compares the spike train emitted by the output neuron of the network to the desired spike train by way of their putative impact on a virtual postsynaptic neuron. This formulation sidesteps the need for spike alignment and leads to closed-form solutions for all quantities of interest. Second, virtual assignment of weights to spikes rather than synapses enables a perturbation analysis of individual spike times and synaptic weights of the output, as well as all intermediate neurons in the network, which yields the gradients of the error functional with respect to the said entities. Learning proceeds via a gradient descent mechanism that leverages these quantities. Simulation experiments demonstrate the efficacy of the proposed learning framework. The experiments also highlight asymmetries between synapses on excitatory and inhibitory neurons.
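The paper's error functional compares spike trains by their impact on a virtual postsynaptic neuron. A common concrete instance of this idea is to convolve each train with an exponential postsynaptic kernel and integrate the squared difference (a van Rossum-style distance); the sketch below shows that instance for illustration and is not the paper's exact functional.

```python
import numpy as np

def psp_trace(spikes, t_grid, tau=5.0):
    """Convolve a spike train with an exponential postsynaptic kernel."""
    trace = np.zeros_like(t_grid)
    for s in spikes:
        trace += np.where(t_grid >= s, np.exp(-(t_grid - s) / tau), 0.0)
    return trace

def spike_train_error(out_spikes, target_spikes, T=100.0, dt=0.1, tau=5.0):
    """Squared difference of the two trains' impact on a virtual
    postsynaptic neuron. Note that no spike alignment is needed:
    the error is a smooth function of each spike time."""
    t = np.arange(0.0, T, dt)
    diff = psp_trace(out_spikes, t, tau) - psp_trace(target_spikes, t, tau)
    return float(np.sum(diff ** 2) * dt)
```

Because the error varies smoothly with each output spike time, it supports the kind of perturbation analysis and gradient descent described above.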

  13. In search of intelligence: evolving a developmental neuron capable of learning

    Science.gov (United States)

    Khan, Gul Muhammad; Miller, Julian Francis

    2014-10-01

    A neuro-inspired multi-chromosomal genotype for a single developmental neuron capable of learning and developing memory is proposed. This genotype is evolved so that the phenotype which changes and develops during an agent's lifetime (while problem-solving) gives the agent the capacity for learning by experience. Seven important processes of signal processing and neural structure development are identified from biology and encoded using Cartesian Genetic Programming. These chromosomes represent the electrical and developmental aspects of dendrites, axonal branches, synapses and the neuron soma. The neural morphology that occurs by running these chromosomes is highly dynamic. The dendritic/axonal branches and synaptic connections form and change in response to situations encountered in the learning task. The approach has been evaluated in the context of maze-solving and the board game of checkers (draughts) demonstrating interesting learning capabilities. The motivation underlying this research is to, ab initio, evolve genotypes that build phenotypes with an ability to learn.

  14. Mlifdect: Android Malware Detection Based on Parallel Machine Learning and Information Fusion

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2017-01-01

Full Text Available In recent years, Android malware has continued to grow at an alarming rate. More recent malicious apps employ highly sophisticated detection-avoidance techniques, which make traditional machine-learning-based malware detection methods far less effective. More specifically, such methods cannot cope with the various types of Android malware and are limited by relying on a single classification algorithm. To address this limitation, we propose a novel approach in this paper that leverages parallel machine learning and information fusion techniques for better Android malware detection, named Mlifdect. To implement this approach, we first extract eight types of features from static analysis of Android apps and build two kinds of feature sets after feature selection. Then, a parallel machine learning detection model is developed to speed up the process of classification. Finally, we investigate probability-analysis-based and Dempster-Shafer-theory-based information fusion approaches, which can effectively obtain the detection results. To validate our method, other state-of-the-art detection works are selected for comparison on real-world Android apps. The experimental results demonstrate that Mlifdect achieves higher detection accuracy as well as remarkable run-time efficiency compared to existing malware detection solutions.

  15. Expressions of multiple neuronal dynamics during sensorimotor learning in the motor cortex of behaving monkeys.

    Directory of Open Access Journals (Sweden)

    Yael Mandelblat-Cerf

Full Text Available Previous studies support the notion that sensorimotor learning involves multiple processes. We investigated the neuronal basis of these processes by recording single-unit activity in the motor cortex of non-human primates (Macaca fascicularis) during adaptation to force-field perturbations. Perturbed trials (reaching in one direction) were practiced along with unperturbed trials (in other directions). The number of perturbed trials relative to unperturbed ones was either low or high, in two separate practice schedules. Unsurprisingly, practice under the high-rate schedule resulted in faster learning with more pronounced generalization, as compared to the low-rate practice. However, generalization and retention of behavioral and neuronal effects following high-rate practice were less stable; namely, the faster learning was forgotten faster. We examined two subgroups of cells and showed that, during learning, the changes in firing rate in one subgroup depended on the number of practiced trials, but not on time. In contrast, changes in the second subgroup depended on both time and practice; the changes in firing rate following the same number of perturbed trials were larger under high-rate than low-rate learning. After learning, the neuronal changes gradually decayed. In the first subgroup, the decay pace did not depend on the practice rate, whereas in the second subgroup, the decay pace was greater following high-rate practice. This second group shows a neuronal representation that mirrors the behavioral performance, evolving faster but also decaying faster when learning under the high rate, as compared to the low rate. The results suggest that the stability of a newly learned skill and its neuronal representation are affected by the acquisition schedule.

  16. Pathway-specific reorganization of projection neurons in somatosensory cortex during learning.

    Science.gov (United States)

    Chen, Jerry L; Margolis, David J; Stankov, Atanas; Sumanovski, Lazar T; Schneider, Bernard L; Helmchen, Fritjof

    2015-08-01

In the mammalian brain, sensory cortices exhibit plasticity during task learning, but how this alters information transferred between connected cortical areas remains unknown. We found that divergent subpopulations of cortico-cortical neurons in mouse whisker primary somatosensory cortex (S1) undergo functional changes reflecting learned behavior. We chronically imaged activity of S1 neurons projecting to secondary somatosensory (S2) or primary motor (M1) cortex in mice learning a texture discrimination task. Mice adopted an active whisking strategy that enhanced texture-related whisker kinematics, correlating with task performance. M1-projecting neurons reliably encoded basic kinematic features, and an additional subset of touch-related neurons was recruited that persisted past training. The number of S2-projecting touch neurons remained constant, but these neurons improved their discrimination of trial types through reorganization, developing activity patterns capable of discriminating the animal's decision. We propose that learning-related changes in S1 enhance sensory representations in a pathway-specific manner, providing downstream areas with task-relevant information for behavior.

  17. A Parallel and Incremental Approach for Data-Intensive Learning of Bayesian Networks.

    Science.gov (United States)

    Yue, Kun; Fang, Qiyu; Wang, Xiaoling; Li, Jin; Liu, Weiyi

    2015-12-01

    Bayesian network (BN) has been adopted as the underlying model for representing and inferring uncertain knowledge. As the basis of realistic applications centered on probabilistic inferences, learning a BN from data is a critical subject of machine learning, artificial intelligence, and big data paradigms. Currently, it is necessary to extend the classical methods for learning BNs with respect to data-intensive computing or in cloud environments. In this paper, we propose a parallel and incremental approach for data-intensive learning of BNs from massive, distributed, and dynamically changing data by extending the classical scoring and search algorithm and using MapReduce. First, we adopt the minimum description length as the scoring metric and give the two-pass MapReduce-based algorithms for computing the required marginal probabilities and scoring the candidate graphical model from sample data. Then, we give the corresponding strategy for extending the classical hill-climbing algorithm to obtain the optimal structure, as well as that for storing a BN by pairs. Further, in view of the dynamic characteristics of the changing data, we give the concept of influence degree to measure the coincidence of the current BN with new data, and then propose the corresponding two-pass MapReduce-based algorithms for BNs incremental learning. Experimental results show the efficiency, scalability, and effectiveness of our methods.
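The scoring step, minimum description length (MDL) for a candidate structure, can be sketched locally for binary variables: negative log-likelihood plus a (log N)/2 penalty per free parameter. This single-machine sketch shows only the metric the paper distributes over MapReduce; the Laplace smoothing is an implementation choice added here.

```python
import numpy as np
from itertools import product

def mdl_score(data, parents):
    """MDL score of a candidate structure over binary variables:
    negative log-likelihood plus (log N / 2) * free parameters.
    `parents` maps each variable index to a list of parent indices.
    Lower scores are better."""
    N, n = data.shape
    score = 0.0
    for v in range(n):
        ps = parents[v]
        n_params = 2 ** len(ps)            # one Bernoulli per parent config
        score += (np.log(N) / 2) * n_params
        for cfg in product((0, 1), repeat=len(ps)):
            mask = np.all(data[:, ps] == cfg, axis=1) if ps else np.ones(N, bool)
            cnt = mask.sum()
            if cnt == 0:
                continue
            ones = data[mask, v].sum()
            p1 = (ones + 1) / (cnt + 2)    # Laplace-smoothed estimate
            score -= ones * np.log(p1) + (cnt - ones) * np.log(1 - p1)
    return score
```

In the paper's setting, the marginal counts inside the inner loop are exactly what the two-pass MapReduce jobs compute from distributed sample data.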

  18. On the applicability of STDP-based learning mechanisms to spiking neuron network models

    Science.gov (United States)

    Sboev, A.; Vlasov, D.; Serenko, A.; Rybka, R.; Moloshnikov, I.

    2016-11-01

Ways of creating a practically effective method for spiking neuron network learning that would be appropriate for implementation in neuromorphic hardware while being based on biologically plausible plasticity rules, namely STDP, are discussed. The influence of the amount of correlation between input and output spike trains on learnability under different STDP rules is evaluated. The usability of alternative combined learning schemes, involving both artificial and spiking neuron models, is demonstrated on the Iris benchmark task and on the practical task of gender recognition.
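The classic pair-based STDP rule referred to above potentiates a synapse when the presynaptic spike precedes the postsynaptic one and depresses it otherwise, with exponentially decaying windows. A minimal all-to-all-pairs sketch (parameter values are illustrative, not from the paper):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when pre precedes post (dt > 0),
    depress otherwise. dt = t_post - t_pre, in ms."""
    return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

def train_weight(pre_spikes, post_spikes, w0=0.5, wmin=0.0, wmax=1.0):
    """Accumulate all-to-all pair STDP updates with hard weight bounds."""
    w = w0
    for tpost in post_spikes:
        for tpre in pre_spikes:
            w = min(wmax, max(wmin, w + stdp_dw(tpost - tpre)))
    return w
```

Correlated pre-before-post activity drives the weight up; the reverse ordering drives it down, which is the dependence on input-output correlation that the abstract evaluates.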

  19. A hierarchical model for structure learning based on the physiological characteristics of neurons

    Institute of Scientific and Technical Information of China (English)

    WEI Hui

    2007-01-01

Almost all applications of Artificial Neural Networks (ANNs) depend mainly on their memory ability. The characteristics of typical ANN models are fixed connections with evolved weights, globalized representations, and globalized optimizations, all based on a mathematical approach. This makes those models deficient in robustness, learning efficiency, capacity, resistance to interference between training sets, correlativity of samples, etc. In this paper, we attempt to address these problems by adopting the characteristics of biological neurons in morphology and signal processing. A hierarchical neural network was designed and realized to implement structure learning and representations based on connected structures. The basic characteristics of this model are localized and random connections, field limitations on neuron fan-in and fan-out, dynamic neuron behavior, and samples represented through different sub-circuits of neurons specialized into different response patterns. At the end of this paper, some important aspects of error correction, capacity, learning efficiency, and soundness of structural representation are analyzed theoretically. This paper demonstrates the feasibility and advantages of structure learning and representation. This model can serve as a fundamental element of cognitive systems such as perception and associative memory. Keywords: structure learning, representation, associative memory, computational neuroscience

  20. Learning alters theta amplitude, theta-gamma coupling and neuronal synchronization in inferotemporal cortex

    Directory of Open Access Journals (Sweden)

    Nicol Alister U

    2011-06-01

Full Text Available Abstract Background How oscillatory brain rhythms, alone or in combination, influence cortical information processing to support learning has yet to be fully established. Local field potential and multi-unit neuronal activity recordings were made from 64-electrode arrays in the inferotemporal cortex of conscious sheep during and after visual discrimination learning of face or object pairs. A neural network model has been developed to simulate and aid functional interpretation of learning-evoked changes. Results Following learning, the amplitude of theta (4-8 Hz), but not gamma (30-70 Hz), oscillations was increased, as was the ratio of theta to gamma. Over 75% of electrodes showed significant coupling between theta phase and gamma amplitude (theta-nested gamma). The strength of this coupling was also increased following learning, and this was not simply a consequence of increased theta amplitude. Actual discrimination performance was significantly correlated with theta and theta-gamma coupling changes. Neuronal activity was phase-locked with theta, but learning had no effect on firing rates or on the magnitude or latencies of visual evoked potentials during stimuli. The neural network model showed that a combination of fast and slow inhibitory interneurons could generate theta-nested gamma. Increasing N-methyl-D-aspartate receptor sensitivity in the model produced changes similar to those seen in inferotemporal cortex after learning. The model showed that these changes could potentiate the firing of downstream neurons by a temporal desynchronization of excitatory neuron output without increasing the firing frequencies of the latter. This desynchronization effect was confirmed in IT neuronal activity following learning, and its magnitude was correlated with discrimination performance. Conclusions Face discrimination learning produces significant increases in both theta amplitude and the strength of theta-gamma coupling in the inferotemporal cortex.
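Theta-phase/gamma-amplitude coupling of the kind measured above is commonly quantified by a mean-vector-length index: extract theta phase and gamma amplitude via the analytic signal, then measure how strongly the amplitude clusters at a preferred phase. The sketch below shows that one common index on synthetic signals; the paper may use a different coupling measure.

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (a numpy-only stand-in for a Hilbert
    transform): zero out negative frequencies, double positives."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1
    h[1:(len(x) + 1) // 2] = 2
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1
    return np.fft.ifft(X * h)

def coupling_strength(theta, gamma):
    """Mean-vector-length estimate of theta-phase / gamma-amplitude
    coupling, normalized by mean gamma amplitude (0 = none)."""
    phase = np.angle(analytic(theta))
    amp = np.abs(analytic(gamma))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
```

On a synthetic signal whose 50 Hz amplitude envelope follows a 6 Hz rhythm (theta-nested gamma), the index is substantially higher than for an unmodulated 50 Hz carrier.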

  1. Rhythmic Oscillations of Excitatory Bursting Hodgkin-Huxley Neuronal Network with Synaptic Learning.

    Science.gov (United States)

    Shi, Qi; Han, Fang; Wang, Zhijie; Li, Caiyun

    2016-01-01

Rhythmic oscillations of a neuronal network are actually a kind of synchronous behavior, which plays an important role in neural systems. In this paper, the excitement degree and oscillation frequency of an excitatory bursting Hodgkin-Huxley neuronal network incorporating a synaptic learning rule are studied. The effects of coupling strength, synaptic learning rate, and other parameters of the chemical synapses, such as synaptic delay and decay time constant, are explored, respectively. It is found that increasing the coupling strength weakens the extent of excitement, whereas increasing the synaptic learning rate makes the network more excited within a certain range; as the delay time and the decay time constant increase, the excitement degree first increases, then decreases, and finally stabilizes. It is also found that the oscillation frequency of the network decreases monotonically as the synaptic learning rate, the coupling strength, the delay time, and the decay time constant increase.

  2. CREB Selectively Controls Learning-Induced Structural Remodeling of Neurons

    Science.gov (United States)

    Middei, Silvia; Spalloni, Alida; Longone, Patrizia; Pittenger, Christopher; O'Mara, Shane M.; Marie, Helene; Ammassari-Teule, Martine

    2012-01-01

    The modulation of synaptic strength associated with learning is post-synaptically regulated by changes in density and shape of dendritic spines. The transcription factor CREB (cAMP response element binding protein) is required for memory formation and in vitro dendritic spine rearrangements, but its role in learning-induced remodeling of neurons…

  4. Synaptic potentiation onto habenula neurons in the learned helplessness model of depression

    Energy Technology Data Exchange (ETDEWEB)

    Li, B.; Piriz, J.; Mirrione, M.; Chung, C.H.; Proulx, C.D.; Schulz, D.; Henn, F.; Malinow, R.

    2011-02-24

    The cellular basis of depressive disorders is poorly understood. Recent studies in monkeys indicate that neurons in the lateral habenula (LHb), a nucleus that mediates communication between forebrain and midbrain structures, can increase their activity when an animal fails to receive an expected positive reward or receives a stimulus that predicts aversive conditions (that is, disappointment or anticipation of a negative outcome). LHb neurons project to, and modulate, dopamine-rich regions, such as the ventral tegmental area (VTA), that control reward-seeking behaviour and participate in depressive disorders. Here we show that in two learned helplessness models of depression, excitatory synapses onto LHb neurons projecting to the VTA are potentiated. Synaptic potentiation correlates with an animal's helplessness behaviour and is due to an enhanced presynaptic release probability. Depleting transmitter release by repeated electrical stimulation of LHb afferents, using a protocol that can be effective for patients who are depressed, markedly suppresses synaptic drive onto VTA-projecting LHb neurons in brain slices and can significantly reduce learned helplessness behaviour in rats. Our results indicate that increased presynaptic action onto LHb neurons contributes to the rodent learned helplessness model of depression.

  5. Machine Learning and Parallelism in the Reconstruction of LHCb and its Upgrade

    Science.gov (United States)

    De Cian, Michel

    2016-11-01

The LHCb detector at the LHC is a general purpose detector in the forward region with a focus on reconstructing decays of c- and b-hadrons. For Run II of the LHC, a new trigger strategy with a real-time reconstruction, alignment and calibration was employed. This was made possible by implementing an offline-like track reconstruction in the high level trigger. However, the ever increasing need for a higher throughput and the move to parallelism in CPU architectures in recent years necessitated the use of vectorization techniques to achieve the desired speed, and a more extensive use of machine learning to veto bad events early on. This document discusses selected improvements in computationally expensive parts of the track reconstruction, like the Kalman filter, as well as an improved approach to rejecting fake tracks using fast machine learning techniques. In the last part, a short overview of the track reconstruction challenges for the upgrade of LHCb is given. With a fully software-based trigger, a large gain in reconstruction speed has to be achieved to cope with the 40 MHz bunch-crossing rate. Two possible approaches for techniques exploiting massive parallelization are discussed.

  6. A learning-enabled neuron array IC based upon transistor channel models of biological phenomena.

    Science.gov (United States)

    Brink, S; Nease, S; Hasler, P; Ramakrishnan, S; Wunderlich, R; Basu, A; Degnan, B

    2013-02-01

We present a single-chip array of 100 biologically based electronic neuron models interconnected to each other and to the outside environment through 30,000 synapses. The chip was fabricated in a standard 350 nm CMOS IC process. Our approach used dense circuit models of synaptic behavior, including biological computation and learning, as well as transistor channel models. We use Address-Event Representation (AER) spike communication for inputs and outputs to this IC. We present the IC architecture and infrastructure, including the IC chip, configuration tools, and testing platform. We present measurements of a small network of neurons, of STDP neuron dynamics, and of a compiled spiking winner-take-all (WTA) topology, all realized on this IC.

  7. Histone Deacetylase (HDAC) Inhibitors - emerging roles in neuronal memory, learning, synaptic plasticity and neural regeneration.

    Science.gov (United States)

    Ganai, Shabir Ahmad; Ramadoss, Mahalakshmi; Mahadevan, Vijayalakshmi

    2016-01-01

    Epigenetic regulation of neuronal signalling through histone acetylation dictates transcription programs that govern neuronal memory, plasticity and learning paradigms. Histone Acetyl Transferases (HATs) and Histone Deacetylases (HDACs) are antagonistic enzymes that regulate gene expression through acetylation and deacetylation of histone proteins around which DNA is wrapped inside a eukaryotic cell nucleus. The epigenetic control of HDACs and the cellular imbalance between HATs and HDACs dictate disease states and have been implicated in muscular dystrophy, loss of memory, neurodegeneration and autistic disorders. Altering gene expression profiles through inhibition of HDACs is now emerging as a powerful technique in therapy. This review presents evolving applications of HDAC inhibitors as potential drugs in neurological research and therapy. Mechanisms that govern their expression profiles in neuronal signalling, plasticity and learning will be covered. Promising and exciting possibilities of HDAC inhibitors in memory formation, fear conditioning, ischemic stroke and neural regeneration have been detailed.

  8. Ventral tegmental area neurons in learned appetitive behavior and positive reinforcement.

    Science.gov (United States)

    Fields, Howard L; Hjelmstad, Gregory O; Margolis, Elyssa B; Nicola, Saleem M

    2007-01-01

    Ventral tegmental area (VTA) neuron firing precedes behaviors elicited by reward-predictive sensory cues and scales with the magnitude and unpredictability of received rewards. These patterns are consistent with roles in the performance of learned appetitive behaviors and in positive reinforcement, respectively. The VTA includes subpopulations of neurons with different afferent connections, neurotransmitter content, and projection targets. Because the VTA and substantia nigra pars compacta are the sole sources of striatal and limbic forebrain dopamine, measurements of dopamine release and manipulations of dopamine function have provided critical evidence supporting a VTA contribution to these functions. However, the VTA also sends GABAergic and glutamatergic projections to the nucleus accumbens and prefrontal cortex. Furthermore, VTA-mediated but dopamine-independent positive reinforcement has been demonstrated. Consequently, identifying the neurotransmitter content and projection target of VTA neurons recorded in vivo will be critical for determining their contribution to learned appetitive behaviors.

  9. Circuit Design of On-Chip BP Learning Neural Network with Programmable Neuron Characteristics

    Institute of Scientific and Technical Information of China (English)

    卢纯; 石秉学; 陈卢

    2000-01-01

    A circuit system of an on-chip BP (Back-Propagation) learning neural network with programmable neurons has been designed, comprising a feedforward network, an error back-propagation network and a weight-updating circuit. It has the merits of simplicity, programmability, speed, low power consumption and high density. A novel neuron circuit with programmable parameters has been proposed; it generates not only the sigmoidal function but also its derivative. HSPICE simulations were performed on the neuron circuit with level-47 transistor models for a standard 1.2 μm CMOS process. The results show that both functions match their respective ideal functions very well. A non-linear partition problem is used to verify the operation of the network, and the simulation result shows the superior performance of this BP neural network with on-chip learning.

  10. Assigning Function to Adult-Born Neurons: A Theoretical Framework for Characterizing Neural Manipulation of Learning

    Directory of Open Access Journals (Sweden)

    Sarah eHersman

    2016-01-01

    Full Text Available Neuroscientists are concerned with neural processes or computations, but these may not be directly observable. In the field of learning, a behavioral procedure is observed to lead to performance outcomes, but differing inferences on the underlying internal processes can lead to difficulties in interpreting conflicting results. An example of this challenge is how many functions have been attributed to adult-born granule cells in the dentate gyrus. Some of these functions were suggested by computational models of the properties of these neurons, while others were hypothesized after manipulations of adult-born neurons resulted in changes to behavioral metrics. This review seeks to provide a framework, grounded in learning theory's classification of behavioral procedures, for identifying the processes that may underlie behavioral results when a procedure is manipulated and performance is observed. We propose that this framework can serve to clarify experimental findings on adult-born neurons as well as other classes of neural manipulations and their effects on behavior.

  11. DL-ReSuMe: A Delay Learning-Based Remote Supervised Method for Spiking Neurons.

    Science.gov (United States)

    Taherkhani, Aboozar; Belatreche, Ammar; Li, Yuhua; Maguire, Liam P

    2015-12-01

    Recent research has shown the potential capability of spiking neural networks (SNNs) to model complex information processing in the brain. There is biological evidence that the precise timing of spikes is used for information coding. However, the exact mechanism by which a neuron is trained to fire at precise times remains an open problem. The majority of existing learning methods for SNNs are based on weight adjustment; however, there is also biological evidence that synaptic delay is not constant. In this paper, a learning method for spiking neurons, called the delay learning remote supervised method (DL-ReSuMe), is proposed, which merges the delay shift approach with ReSuMe-based weight adjustment to enhance the learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results have shown that the proposed DL-ReSuMe approach achieves improvements in learning accuracy and learning speed compared with ReSuMe.
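    The delay-plus-weight learning idea described above can be illustrated with a minimal sketch. This is an assumed simplification, not the paper's exact rule: the function name `resume_like_update`, the exponential learning window, and the one-step delay shift are all illustrative choices.

    ```python
    import numpy as np

    def resume_like_update(w, delays, pre_spikes, desired, actual,
                           lr=0.05, tau=5.0, delay_step=1.0, max_delay=10.0):
        """One ReSuMe-style pass combined with delay learning (sketch only).

        Strengthen inputs whose (delayed) spikes precede desired output spikes,
        weaken inputs that precede erroneous actual spikes, and nudge each
        input's delay toward the nearest desired spike time.
        """
        for i, t_pre in enumerate(pre_spikes):
            arrival = t_pre + delays[i]          # spike arrival after synaptic delay
            for t_d in desired:                  # potentiation toward desired spikes
                if t_d - arrival > 0:
                    w[i] += lr * np.exp(-(t_d - arrival) / tau)
            for t_a in actual:                   # depression for undesired spikes
                if t_a - arrival > 0:
                    w[i] -= lr * np.exp(-(t_a - arrival) / tau)
            if desired:                          # crude delay shift toward nearest target
                t_near = min(desired, key=lambda t: abs(t - arrival))
                delays[i] = float(np.clip(delays[i] + delay_step * np.sign(t_near - arrival),
                                          0.0, max_delay))
        return w, delays
    ```

    With a single input spike at t=0, delay 1.0, and a desired output spike at t=3.0, the weight is potentiated and the delay moves one step toward the target.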

  12. Context-dependent olfactory learning monitored by activities of salivary neurons in cockroaches.

    Science.gov (United States)

    Matsumoto, Chihiro Sato; Matsumoto, Yukihisa; Watanabe, Hidehiro; Nishino, Hiroshi; Mizunami, Makoto

    2012-01-01

    Context-dependent discrimination learning, a sophisticated form of nonelemental associative learning, has been found in many animals, including insects. The major purpose of this research is to establish a method for monitoring this form of nonelemental learning in rigidly restrained insects for investigation of the underlying neural mechanisms. We report context-dependent olfactory learning (occasion-setting problem solving) of salivation, which can be monitored as activity changes of salivary neurons in immobilized cockroaches, Periplaneta americana. A group of cockroaches was trained to associate peppermint odor (conditioned stimulus, CS) with sucrose solution reward (unconditioned stimulus, US) while vanilla odor was presented alone without pairing with the US under a flickering light condition (1.0 Hz), and also trained to associate vanilla odor with sucrose reward while peppermint odor was presented alone under a steady light condition. After training, the responses of salivary neurons to the rewarded peppermint odor were significantly greater than those to the unrewarded vanilla odor under steady illumination, and the responses to the rewarded vanilla odor were significantly greater than those to the unrewarded peppermint odor in the presence of flickering light. Similar context-dependent responses were observed in another group of cockroaches trained with the opposite stimulus arrangement. This study demonstrates context-dependent olfactory learning of salivation for the first time in any species, vertebrate or invertebrate, which can be monitored by activity changes of salivary neurons in restrained cockroaches. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Developmental Changes in Hippocampal CA1 Single Neuron Firing and Theta Activity during Associative Learning

    Science.gov (United States)

    Kim, Jangjin; Goldsberry, Mary E.; Harmon, Thomas C.; Freeman, John H.

    2016-01-01

    Hippocampal development is thought to play a crucial role in the emergence of many forms of learning and memory, but ontogenetic changes in hippocampal activity during learning have not been examined thoroughly. We examined the ontogeny of hippocampal function by recording theta and single neuron activity from the dorsal hippocampal CA1 area while rat pups were trained in associative learning. Three different age groups [postnatal days (P)17-19, P21-23, and P24-26] were trained over six sessions using a tone conditioned stimulus (CS) and a periorbital stimulation unconditioned stimulus (US). Learning increased as a function of age, with the P21-23 and P24-26 groups learning faster than the P17-19 group. Age- and learning-related changes in both theta and single neuron activity were observed. CA1 pyramidal cells in the older age groups showed greater task-related activity than the P17-19 group during CS-US paired sessions. The proportion of trials with a significant theta (4–10 Hz) power change, the theta/delta ratio, and theta peak frequency also increased in an age-dependent manner. Finally, spike/theta phase-locking during the CS showed an age-related increase. The findings indicate substantial developmental changes in dorsal hippocampal function that may play a role in the ontogeny of learning and memory. PMID:27764172
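    The theta/delta power ratio reported above is a standard spectral measurement; the following is a minimal numpy sketch of how such a ratio can be computed from a recorded trace. The function names and the simple FFT periodogram estimator are assumptions for illustration, not the authors' analysis pipeline.

    ```python
    import numpy as np

    def band_power(x, fs, lo, hi):
        """Mean spectral power of signal x in the [lo, hi] Hz band
        using a plain FFT periodogram."""
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        band = (freqs >= lo) & (freqs <= hi)
        return psd[band].mean()

    def theta_delta_ratio(x, fs):
        """Theta (4-10 Hz) power over delta (1-4 Hz) power,
        the bands used in the abstract."""
        return band_power(x, fs, 4.0, 10.0) / band_power(x, fs, 1.0, 4.0)
    ```

    A pure 6 Hz oscillation, for example, yields a ratio far above 1, while slow-wave-dominated activity pushes it below 1.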

  14. Biologically Predisposed Learning and Selective Associations in Amygdalar Neurons

    Science.gov (United States)

    Chung, Ain; Barot, Sabiha K.; Kim, Jeansok J.; Bernstein, Ilene L.

    2011-01-01

    Modern views on learning and memory accept the notion of biological constraints--that the formation of association is not uniform across all stimuli. Yet cellular evidence of the encoding of selective associations is lacking. Here, conditioned stimuli (CSs) and unconditioned stimuli (USs) commonly employed in two basic associative learning…

  16. Machine learning and parallelism in the reconstruction of LHCb and its upgrade

    CERN Document Server

    Stahl, Marian

    2017-01-01

    After a highly successful first data-taking period at the LHC, the LHCb experiment developed a new trigger strategy with real-time reconstruction, alignment and calibration for Run II. This strategy relies on offline-like track reconstruction in the high-level trigger, making a separate offline event reconstruction unnecessary. To enable such reconstruction, and additionally to keep up with a higher event rate due to the accelerator upgrade, the time used by the track reconstruction had to be decreased. Timing improvements have in part been achieved by utilizing parallel computing techniques, which are described in this document through two example applications. Despite the decreased computing time, the reconstruction quality in terms of reconstruction efficiency and fake rate could be improved in several places. Two applications of fast machine learning techniques are highlighted, refining track candidate selection at the early stages of the reconstruction.

  17. A Multi-Core Parallelization Strategy for Statistical Significance Testing in Learning Classifier Systems.

    Science.gov (United States)

    Rudd, James; Moore, Jason H; Urbanowicz, Ryan J

    2013-11-01

    Permutation-based statistics for evaluating the significance of class prediction, predictive attributes, and patterns of association have only appeared within the learning classifier system (LCS) literature since 2012. While still not widely utilized by the LCS research community, formal evaluations of test statistic confidence are imperative to large and complex real world applications such as genetic epidemiology where it is standard practice to quantify the likelihood that a seemingly meaningful statistic could have been obtained purely by chance. LCS algorithms are relatively computationally expensive on their own. The compounding requirements for generating permutation-based statistics may be a limiting factor for some researchers interested in applying LCS algorithms to real world problems. Technology has made LCS parallelization strategies more accessible and thus more popular in recent years. In the present study we examine the benefits of externally parallelizing a series of independent LCS runs such that permutation testing with cross validation becomes more feasible to complete on a single multi-core workstation. We test our python implementation of this strategy in the context of a simulated complex genetic epidemiological data mining problem. Our evaluations indicate that as long as the number of concurrent processes does not exceed the number of CPU cores, the speedup achieved is approximately linear.
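    The external parallelization strategy described above — running many independent permutation replicates concurrently, with the worker count capped at the number of CPU cores — can be sketched with the standard library's process pool. The toy learner below is a stand-in assumption (the study uses a full LCS run per replicate), but the pool structure and the permutation p-value calculation follow the scheme the abstract describes.

    ```python
    from multiprocessing import Pool, cpu_count
    import random

    def one_permutation_run(seed):
        """Stand-in for one full LCS run on label-permuted data:
        returns one statistic from the null distribution (a toy accuracy here)."""
        rng = random.Random(seed)
        labels = [rng.randint(0, 1) for _ in range(200)]
        preds = [rng.randint(0, 1) for _ in range(200)]
        return sum(p == y for p, y in zip(preds, labels)) / 200.0

    def permutation_p_value(observed, n_perms=100):
        # Keep concurrent processes at or below the core count,
        # since speedup is roughly linear only up to that point.
        workers = max(1, min(n_perms, cpu_count()))
        with Pool(workers) as pool:
            null = pool.map(one_permutation_run, range(n_perms))
        # p-value: fraction of permuted statistics at least as extreme
        # as the observed one (with the usual +1 correction).
        return (1 + sum(s >= observed for s in null)) / (1 + n_perms)

    if __name__ == "__main__":
        print(permutation_p_value(0.70))
    ```

    An observed accuracy of 0.70 against a chance-level (0.5) null distribution yields a small p-value, quantifying how unlikely the result is to arise purely by chance.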

  18. Learning-related plasticity in PE1 and other mushroom body-extrinsic neurons in the honeybee brain.

    Science.gov (United States)

    Okada, Ryuichi; Rybak, Jürgen; Manz, Gisela; Menzel, Randolf

    2007-10-24

    Extracellular recordings were performed from mushroom body-extrinsic neurons while the animal was exposed to differential conditioning to two odors, the forward-paired conditioned stimulus (CS+; the odor that will be or has been paired with sucrose reward) and the unpaired CS- (the odor that will be or has been specifically unpaired with sucrose reward). A single neuron, the pedunculus-extrinsic neuron number 1 (PE1), was identified on the basis of its firing pattern, and other neurons were grouped together as non-PE1 neurons. PE1 reduces its response to CS+ and does not change its response to CS- after learning. Most non-PE1 neurons do not change their responses during learning, but some decrease, and one neuron increases its response to CS+. PE1 receives inhibitory synaptic inputs, and neuroanatomical studies indicate closely attached GABA-immunoreactive profiles originating at least partially from neurons of the protocerebral-calycal tract (PCT). Thus, either the associative reduction of odor responses originates within the PE1 via a long-term depression (LTD)-like mechanism, or PE1 receives stronger inhibition for the learned odor from the PCT neurons or from Kenyon cells. In any event, as the decreased firing of PE1 correlates with the increased probability of behavioral responses, our data suggest that the mushroom bodies exert general inhibition over sensory-motor connections, which relaxes selectively for learned stimuli.

  19. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses.

    Science.gov (United States)

    Qiao, Ning; Mostafa, Hesham; Corradi, Federico; Osswald, Marc; Stefanini, Fabio; Sumislawska, Dora; Indiveri, Giacomo

    2015-01-01

    Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm², and consumes approximately 4 mW for typical experiments, for example involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.

  20. Machine learning and parallelism in the reconstruction of LHCb and its upgrade

    CERN Document Server

    De Cian, Michel

    2016-01-01

    The LHCb detector at the LHC is a general purpose detector in the forward region with a focus on reconstructing decays of c- and b-hadrons. For Run II of the LHC, a new trigger strategy with a real-time reconstruction, alignment and calibration was employed. This was made possible by implementing an offline-like track reconstruction in the high level trigger. However, the ever increasing need for a higher throughput and the move to parallelism in the CPU architectures in the last years necessitated the use of vectorization techniques to achieve the desired speed and a more extensive use of machine learning to veto bad events early on. This document discusses selected improvements in computationally expensive parts of the track reconstruction, like the Kalman filter, as well as an improved approach to get rid of fake tracks using fast machine learning techniques. In the last part, a short overview of the track reconstruction challenges for the upgrade of LHCb, is given. Running a fully software-based trigger, a l...

  1. Engineering Computer Games: A Parallel Learning Opportunity for Undergraduate Engineering and Primary (K-5) Students

    Directory of Open Access Journals (Sweden)

    Mark Michael Budnik

    2011-04-01

    Full Text Available In this paper, we present how our College of Engineering is developing a growing portfolio of engineering computer games as a parallel learning opportunity for undergraduate engineering and primary (grade K-5) students. Around the world, many schools provide secondary students (grade 6-12) with opportunities to pursue pre-engineering classes. However, by the time students reach this age, many of them have already determined their educational goals and preferred careers. Our College of Engineering is developing resources to provide primary students, still in their educational formative years, with opportunities to learn more about engineering. One of these resources is a library of engineering games targeted to the primary student population. The games are designed by sophomore students in our College of Engineering. During their Introduction to Computational Techniques course, the students use the LabVIEW environment to develop the games. This software provides a wealth of design resources for the novice programmer; using it to develop the games strengthens the undergraduates

  2. A Distributed Algorithm for Parallel Multi-task Allocation Based on Profit Sharing Learning

    Institute of Scientific and Technical Information of China (English)

    SU Zhao-Pin; JIANG Jian-Guo; LIANG Chang-Yong; ZHANG Guo-Fu

    2011-01-01

    Task allocation via coalition formation is a fundamental research challenge in several application domains of multi-agent systems (MAS), such as resource allocation, disaster response management, and so on. It mainly deals with how to allocate many unresolved tasks to groups of agents in a distributed manner. In this paper, we propose a distributed parallel multi-task allocation algorithm among self-organizing and self-learning agents. To tackle the situation, we disperse agents and tasks geographically in two-dimensional cells, and then introduce profit sharing learning (PSL) for a single agent to search its tasks by continual self-learning. We also present strategies for communication and negotiation among agents to allocate real workload to every tasked agent. Finally, to evaluate the effectiveness of the proposed algorithm, we compare it with Shehory and Kraus's distributed task allocation algorithm, which has been discussed by many researchers in recent years. Experimental results show that the proposed algorithm can quickly form a solving coalition for every task. Moreover, the proposed algorithm can specifically report the real workload of every tasked agent, and thus can provide a specific and significant reference for practical control tasks.
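    The profit sharing learning (PSL) mechanism the abstract relies on can be sketched in its textbook form: when an episode ends with a reward, every (state, action) pair fired along the way receives a share of the profit that decays geometrically with its distance from the reward. The function name and the geometric credit schedule are illustrative assumptions, not the paper's exact implementation.

    ```python
    def profit_sharing_update(weights, episode, reward, decay=0.5):
        """Profit-sharing credit assignment (sketch).

        weights: dict mapping (state, action) -> rule weight
        episode: list of (state, action) pairs in firing order
        reward:  profit obtained at the end of the episode
        Pairs fired closer to the reward receive a larger share.
        """
        T = len(episode)
        for t, (s, a) in enumerate(episode):
            share = reward * decay ** (T - 1 - t)   # geometric decay from the end
            weights[(s, a)] = weights.get((s, a), 0.0) + share
        return weights
    ```

    For a two-step episode with reward 1.0 and decay 0.5, the final rule is credited 1.0 and the earlier rule 0.5, so rules nearer the profit are reinforced most strongly.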

  3. Splicing factors control C. elegans behavioural learning in a single neuron by producing DAF-2c receptor

    Science.gov (United States)

    Tomioka, Masahiro; Naito, Yasuki; Kuroyanagi, Hidehito; Iino, Yuichi

    2016-01-01

    Alternative splicing generates protein diversity essential for neuronal properties. However, the precise mechanisms underlying this process and its relevance to physiological and behavioural functions are poorly understood. To address these issues, we focused on a cassette exon of the Caenorhabditis elegans insulin receptor gene daf-2, whose proper variant expression in the taste receptor neuron ASER is critical for taste-avoidance learning. We show that inclusion of daf-2 exon 11.5 is restricted to specific neuron types, including ASER, and is controlled by a combinatorial action of evolutionarily conserved alternative splicing factors, RBFOX, CELF and PTB families of proteins. Mutations of these factors cause a learning defect, and this defect is relieved by DAF-2c (exon 11.5+) isoform expression only in a single neuron ASER. Our results provide evidence that alternative splicing regulation of a single critical gene in a single critical neuron is essential for learning ability in an organism. PMID:27198602

  4. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.

    Science.gov (United States)

    Burbank, Kendra S

    2015-12-01

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
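    The central mechanism described above — a feedback connection trained with a temporally reversed STDP window, so that forward and backward weights receive identical updates and stay mirrored — can be sketched as follows. The exponential window parameters and function names are assumptions for illustration, not the paper's exact model.

    ```python
    import numpy as np

    def stdp_window(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Causal STDP: potentiation when pre leads post (dt = t_post - t_pre > 0),
        depression otherwise."""
        if dt > 0:
            return a_plus * np.exp(-dt / tau)
        return -a_minus * np.exp(dt / tau)

    def mstdp_updates(t_pre, t_post):
        """Mirrored STDP sketch for a reciprocally connected pair i <-> j.

        The feedforward synapse i->j sees dt = t_post - t_pre; the feedback
        synapse j->i sees the opposite ordering but uses the temporally
        reversed window, so both weights receive the same update.
        """
        dw_ff = stdp_window(t_post - t_pre)        # feedforward, standard window
        dw_fb = stdp_window(-(t_pre - t_post))     # feedback, reversed window
        return dw_ff, dw_fb
    ```

    Whatever the spike ordering, the two updates are equal, which is what keeps feedforward and feedback weights symmetric as the autoencoder-style learning proceeds.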

  5. Aging in Sensory and Motor Neurons Results in Learning Failure in Aplysia californica.

    Directory of Open Access Journals (Sweden)

    Andrew T Kempsell

    Full Text Available The physiological and molecular mechanisms of age-related memory loss are complicated by the complexity of vertebrate nervous systems. This study takes advantage of a simple neural model to investigate nervous system aging, focusing on changes in learning and memory in the form of behavioral sensitization in vivo and synaptic facilitation in vitro. The effect of aging on the tail withdrawal reflex (TWR) was studied in Aplysia californica at maturity and late in the annual lifecycle. We found that short-term sensitization of the TWR was absent in aged Aplysia. This implied that the neuronal machinery governing nonassociative learning was compromised during aging. Synaptic plasticity in the form of short-term facilitation between tail sensory and motor neurons decreased during aging, whether the sensitizing stimulus was tail shock or the heterosynaptic modulator serotonin (5-HT). Together, these results suggest that the cellular mechanisms governing behavioral sensitization are compromised during aging, thereby nearly eliminating sensitization in aged Aplysia.

  6. Neuronal and intestinal protein kinase d isoforms mediate Na+ (salt taste)-induced learning.

    Science.gov (United States)

    Fu, Ya; Ren, Min; Feng, Hui; Chen, Lu; Altun, Zeynep F; Rubin, Charles S

    2009-08-11

    Ubiquitously expressed protein kinase D (PKD) isoforms are poised to disseminate signals carried by diacylglycerol (DAG). However, the in vivo regulation and functions of PKDs are poorly understood. We show that the Caenorhabditis elegans gene, dkf-2, encodes not just DKF-2A, but also a second previously unknown isoform, DKF-2B. Whereas DKF-2A is present mainly in intestine, we show that DKF-2B is found in neurons. Characterization of dkf-2 null mutants and transgenic animals expressing DKF-2B, DKF-2A, or both isoforms revealed that PKDs couple DAG signals to regulation of sodium ion (Na+)-induced learning. EGL-8 (a phospholipase Cβ4 homolog) and TPA-1 (a protein kinase Cδ homolog) are upstream regulators of DKF-2 isoforms in vivo. Thus, pathways containing EGL-8-TPA-1-DKF-2 enable learning and behavioral plasticity by receiving, transmitting, and cooperatively integrating environmental signals targeted to both neurons and intestine.

  7. Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications.

    Science.gov (United States)

    D'Angelo, Gianni; Rampone, Salvatore

    2014-01-01

    The huge quantity of data produced in biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general-purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed iteratively by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), forming a probability distribution that guides the selection of the term literals. Its great versatility makes U-BRAIN applicable in many fields in which there are data to be analyzed. However, the memory and execution time required are of order O(n³) and O(n⁵), respectively, so the algorithm is unaffordable for huge data sets. We find mathematical and programming solutions able to lead us towards the implementation of U-BRAIN on parallel computers. First we give a Dynamic Programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are of great size we are forced to use mass memory, and depending on where the data are actually stored, the access times can be quite different. According to the evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the costs of

  8. Diverse Assessment and Active Student Engagement Sustain Deep Learning: A Comparative Study of Outcomes in Two Parallel Introductory Biochemistry Courses

    Science.gov (United States)

    Bevan, Samantha J.; Chan, Cecilia W. L.; Tanner, Julian A.

    2014-01-01

    Although there is increasing evidence for a relationship between courses that emphasize student engagement and achievement of student deep learning, there is a paucity of quantitative comparative studies in a biochemistry and molecular biology context. Here, we present a pedagogical study in two contrasting parallel biochemistry introductory…

  10. Bidirectional Regulation of Innate and Learned Behaviors That Rely on Frequency Discrimination by Cortical Inhibitory Neurons.

    Directory of Open Access Journals (Sweden)

    Mark Aizenberg

    2015-12-01

    Full Text Available The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination.

  11. Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.

    Directory of Open Access Journals (Sweden)

    George L Chadderdon

    Full Text Available Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.

  12. Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.

    Science.gov (United States)

    Chadderdon, George L; Neymotin, Samuel A; Kerr, Cliff C; Lytton, William W

    2012-01-01

    Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
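
The reinforcement rule described above can be sketched in a few lines. This is a minimal illustration (not the authors' code): a global signal in {-1, 0, +1}, derived from whether the hand-to-target distance decreased, gates weight changes stored in per-synapse eligibility traces. The network sizes, rates, and the crude spike-pairing tag are illustrative assumptions.

```python
N_IN, N_OUT = 4, 2            # toy network sizes (assumption)
LR, TRACE_DECAY = 0.05, 0.9   # learning rate, trace decay per step (assumptions)

w = [[0.0] * N_OUT for _ in range(N_IN)]       # feedforward weights
trace = [[0.0] * N_OUT for _ in range(N_IN)]   # eligibility traces

def step(pre_spikes, post_spikes, prev_dist, new_dist):
    """One plasticity step: decay traces, tag co-active pairs, gate by reward."""
    # Global 3-valued signal: +1 if hand-target distance shrank, -1 if it grew,
    # 0 if unchanged (bool arithmetic: True - False == 1).
    reward = (new_dist < prev_dist) - (new_dist > prev_dist)
    for i in range(N_IN):
        for j in range(N_OUT):
            trace[i][j] *= TRACE_DECAY
            if pre_spikes[i] and post_spikes[j]:  # crude spike-pairing tag
                trace[i][j] += 1.0
            w[i][j] += LR * reward * trace[i][j]  # reward gates the update
    return reward

r = step([1, 0, 1, 0], [1, 0], prev_dist=0.8, new_dist=0.5)
print(r)  # prints 1: the movement reduced the distance, so tagged synapses potentiate
```

Only synapses whose pre- and postsynaptic units were recently co-active carry a non-zero trace, so the single global scalar still produces synapse-specific credit or blame.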

  13. Adult-generated hippocampal neurons allow the flexible use of spatially precise learning strategies.

    Directory of Open Access Journals (Sweden)

    Alexander Garthe

    Full Text Available Despite enormous progress in the past few years the specific contribution of newly born granule cells to the function of the adult hippocampus is still not clear. We hypothesized that in order to solve this question particular attention has to be paid to the specific design, the analysis, and the interpretation of the learning test to be used. We thus designed a behavioral experiment along hypotheses derived from a computational model predicting that new neurons might be particularly relevant for learning conditions, in which novel aspects arise in familiar situations, thus putting high demands on the qualitative aspects of (re-)learning. In the reference memory version of the water maze task, suppression of adult neurogenesis with temozolomide (TMZ) caused a highly specific learning deficit. Mice were tested in the hidden platform version of the Morris water maze (6 trials per day for 5 days, with a reversal of the platform location on day 4). Testing was done at 4 weeks after the end of four cycles of treatment to minimize the number of potentially recruitable new neurons at the time of testing. The reduction of neurogenesis did not alter long-term potentiation in CA3 and the dentate gyrus but abolished the part of dentate gyrus LTP that is attributed to the new neurons. TMZ did not have any overt side effects at the time of testing, and both treated mice and controls learned to find the hidden platform. Qualitative analysis of search strategies, however, revealed that treated mice did not advance to spatially precise search strategies, in particular when learning a changed goal position (reversal). New neurons in the dentate gyrus thus seem to be necessary for adding flexibility to some hippocampus-dependent qualitative parameters of learning. Our finding that a lack of adult-generated granule cells specifically results in the animal's inability to precisely locate a hidden goal is also in accordance with a specialized role of the dentate gyrus in

  14. Dopamine receptor-mediated mechanisms involved in the expression of learned activity of primate striatal neurons.

    Science.gov (United States)

    Watanabe, K; Kimura, M

    1998-05-01

    To understand the mechanisms by which basal ganglia neurons express acquired activities during and after behavioral learning, selective dopamine (DA) receptor antagonists were applied while recording the activity of striatal neurons in monkeys performing behavioral tasks. In experiment 1, a monkey was trained to associate a click sound with a drop of reward water. DA receptor antagonists were administered by micropressure using a stainless steel injection cannula (300 microm ID) through which a Teflon-coated tungsten wire for recording neuronal activity had been threaded. Responses to sound by tonically active neurons (TANs), a class of neurons in the primate striatum, were recorded through a tungsten wire electrode during the application of either D1- or D2-class DA receptor antagonists, delivered by micropressure injection or by iontophoresis through one of the surrounding barrels. SCH23390 (10 mM, pH 4.5) and (-)-sulpiride (10 mM, pH 4.5) were used. The effects of iontophoresis of both D1- and D2-class antagonists were examined in 40 TANs. Of 40 TANs from which recordings were made, responses were suppressed exclusively by the D2-class antagonist in 19 TANs, exclusively by the D1-class antagonist in 3 TANs, and by both D1- and D2-class antagonists in 7 TANs. When 0.9% NaCl (saline) was applied by pressure (<1 microl) or by iontophoresis (<30 nA) as a control, neither the background discharge rates nor the responses of TANs were significantly influenced. Background discharge rate of TANs was also not affected by D1- or D2-class antagonists applied by either micropressure injection or iontophoresis. It was concluded that the nigrostriatal DA system enables TANs to express learned activity primarily through D2-class and partly through D1-class receptor-mediated mechanisms in the striatum.

  15. Using a multi-port architecture of neural-net associative memory based on the equivalency paradigm for parallel cluster image analysis and self-learning

    Science.gov (United States)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Grabovlyak, Sveta K.; Nikitovich, Diana V.

    2013-01-01

    We consider equivalency models, including matrix-matrix and matrix-tensor models with dual adaptive-weighted correlation, for multi-port neural-net auto-associative and hetero-associative memory (MP NN AAM and HAM), which embody the equivalency paradigm that forms the theoretical basis of our work. We give a brief overview of possible implementations of the MP NN AAM and of their architectures, proposed and investigated earlier by us. The main base unit of such architectures is a matrix-matrix or matrix-tensor equivalentor. We show that MP NN AAM based on the equivalency paradigm and on optoelectronic architectures with space-time integration and parallel-serial 2D image processing have advantages such as increased memory capacity (more than ten times the number of neurons), high performance in different modes (10^10-10^12 connections per second), and the ability to process, store, and associatively recognize highly correlated images. Next, we show that, with minor modifications, such MP NN AAM can be successfully used for high-performance parallel clustering of images. We present simulation results of using these modifications for clustering and learning, and algorithms for cluster analysis of specific images that divide them into categories. We show an example of dividing 32 images (40x32 pixels) of letters and graphics into 12 clusters, with simultaneous formation of the output-weighted space of allocated images for each cluster. We discuss algorithms for learning and self-learning in such structures, and their comparative evaluations based on Mathcad simulations are made. It is shown that, unlike traditional Kohonen self-organizing maps, the learning time in the proposed structures of the multi-port neural-net classifier/clusterizer (MP NN C) based on the equivalency paradigm, owing to their multi-port nature, decreases by orders of magnitude and can be, in some cases, just a few epochs. Estimates show that in the test clustering of 32 1280-element images into 12
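
The core operation of the memory described above is a pixel-wise equivalence comparison between a pattern and stored prototypes. The following is a loose sketch of that idea for binary images, not the authors' optoelectronic implementation; the prototype data and winner-take-all assignment are illustrative assumptions.

```python
def equivalence(a, b):
    """Normalized equivalence of two equal-length binary vectors, in [0, 1]:
    the fraction of positions where the two patterns agree."""
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

def assign_cluster(pattern, prototypes):
    """Index of the most-equivalent prototype (winner-take-all)."""
    scores = [equivalence(pattern, p) for p in prototypes]
    return scores.index(max(scores))

# Two toy 4-pixel prototypes standing in for cluster centers (assumption).
protos = [[1, 1, 0, 0], [0, 0, 1, 1]]
print(assign_cluster([1, 0, 0, 0], protos))  # prints 0: closer to the first prototype
```

Unlike a dot-product similarity, the equivalence score also rewards matching zeros, which is one reason such measures cope better with highly correlated patterns.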

  16. Reinforcement learning using a continuous time actor-critic framework with spiking neurons.

    Science.gov (United States)

    Frémaux, Nicolas; Sprekeler, Henning; Gerstner, Wulfram

    2013-04-01

    Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
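
The continuous-time TD error that this work extends can be written as delta(t) = r(t) - V(t)/tau + dV/dt (Doya, 2000). Below is a crude discretized sketch of that error and a tabular critic update; the two-state toy task and all constants are illustrative assumptions, not the paper's spiking implementation.

```python
DT, TAU = 0.1, 1.0    # integration time step and reward discount time constant
ALPHA = 0.1           # critic learning rate (assumption)

def td_error(r, v_now, v_next):
    """Continuous TD error: delta(t) = r(t) - V(t)/tau + dV/dt,
    with dV/dt approximated by the forward difference (v_next - v_now) / DT."""
    return r - v_now / TAU + (v_next - v_now) / DT

V = {0: 0.0, 1: 0.0}  # toy two-state value function
# One transition from state 0 to state 1 that delivers reward 1:
delta = td_error(r=1.0, v_now=V[0], v_next=V[1])
V[0] += ALPHA * delta * DT    # nudge the critic toward temporal consistency
print(round(delta, 3), round(V[0], 3))  # prints 1.0 0.01
```

In the paper's architecture the same delta signal is broadcast, like a neuromodulator, both to the critic itself and to the actor that selects actions.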

  17. Reinforcement learning using a continuous time actor-critic framework with spiking neurons.

    Directory of Open Access Journals (Sweden)

    Nicolas Frémaux

    2013-04-01

    Full Text Available Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.

  18. Gait simulation via a 6-DOF parallel robot with iterative learning control.

    Science.gov (United States)

    Aubin, Patrick M; Cowley, Matthew S; Ledoux, William R

    2008-03-01

    We have developed a robotic gait simulator (RGS) by leveraging a 6-degree-of-freedom parallel robot, with the goal of overcoming three significant challenges of gait simulation, including: 1) operating at near physiologically correct velocities; 2) inputting full scale ground reaction forces; and 3) simulating motion in all three planes (sagittal, coronal and transverse). The robot will eventually be employed with cadaveric specimens, but as a means of exploring the capability of the system, we have first used it with a prosthetic foot. Gait data were recorded from one transtibial amputee using a motion analysis system and force plate. Using the same prosthetic foot as the subject, the RGS accurately reproduced the recorded kinematics and kinetics, and the appropriate vertical ground reaction force was realized with a proportional iterative learning controller. After six gait iterations the controller reduced the root mean square (RMS) error between the simulated and in situ vertical ground reaction force to 35 N during a 1.5 s simulation of the stance phase of gait with a prosthetic foot. This paper addresses the design, methodology and validation of the novel RGS.
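
The proportional iterative learning controller mentioned above works trial-to-trial: after each gait iteration the entire input trajectory is corrected by a gain times the tracking error from the previous iteration. A minimal sketch, with a toy static plant, an assumed gain, and made-up force samples (none of these are the paper's values):

```python
KP = 0.5  # proportional learning gain (assumption)

def ilc_iteration(u, reference, plant):
    """P-type ILC update: u_{k+1}(t) = u_k(t) + KP * (ref(t) - y_k(t)),
    applied sample-by-sample over the whole trajectory."""
    y = [plant(ui) for ui in u]                      # run one trial
    error = [rf - yf for rf, yf in zip(reference, y)]
    u_next = [ui + KP * ei for ui, ei in zip(u, error)]
    rms = (sum(e * e for e in error) / len(error)) ** 0.5
    return u_next, rms

plant = lambda u: 0.8 * u                 # toy plant: actuator gain of 0.8 (assumption)
ref = [100.0, 200.0, 150.0]               # target vertical force samples, N (made up)
u = [0.0, 0.0, 0.0]
for k in range(6):                        # six iterations, as in the study
    u, rms = ilc_iteration(u, ref, plant)
print(round(rms, 1))
```

Because the correction is applied to the stored trajectory rather than computed in real time, ILC can cancel repeatable errors that a feedback controller alone would chase every trial.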

  19. Effect of cholecystokinin on learning and memory, neuronal proliferation and apoptosis in the rat hippocampus

    Science.gov (United States)

    Reisi, Parham; Ghaedamini, Ali Reza; Golbidi, Mohammad; Shabrang, Moloud; Arabpoor, Zohreh; Rashidi, Bahman

    2015-01-01

    Background: Cholecystokinin (CCK) has roles in learning and memory, but the cellular mechanism is poorly understood. This study investigated the effect of CCK on spatial learning and memory, neuronal proliferation and apoptosis in the hippocampus in rats. Materials and Methods: Experimental groups were control and CCK. The rats received CCK octapeptide sulfated (CCK-8S, 1.6 μg/kg, i.p.) for 14 days. Spatial learning and memory were tested by Morris water maze, and finally an immunohistochemical study was performed: neurogenesis by the Ki-67 method and apoptosis by the Terminal deoxynucleotidyl transferase mediated dUTP Nick End Labeling (TUNEL) assay in the hippocampal dentate gyrus (DG). Results: Cholecystokinin increased Ki-67 positive cells and reduced TUNEL positive cells in the granular layer of the hippocampal DG. CCK failed to have a significant effect on spatial learning and memory. Conclusion: Results indicate neuroprotective and proliferative effects of CCK in the hippocampus; however, other factors are probably involved until the newly born neurons achieve the necessary integrity for behavioral changes. PMID:26623402

  20. Autism and the mirror neuron system: insights from learning and teaching.

    Science.gov (United States)

    Vivanti, Giacomo; Rogers, Sally J

    2014-01-01

    Individuals with autism have difficulties in social learning domains which typically involve mirror neuron system (MNS) activation. However, the precise role of the MNS in the development of autism and its relevance to treatment remain unclear. In this paper, we argue that three distinct aspects of social learning are critical for advancing knowledge in this area: (i) the mechanisms that allow for the implicit mapping of and learning from others' behaviour, (ii) the motivation to attend to and model conspecifics and (iii) the flexible and selective use of social learning. These factors are key targets of the Early Start Denver Model, an autism treatment approach which emphasizes social imitation, dyadic engagement, verbal and non-verbal communication and affect sharing. Analysis of the developmental processes and treatment-related changes in these different aspects of social learning in autism can shed light on the nature of the neuropsychological mechanisms underlying social learning and positive treatment outcomes in autism. This knowledge in turn may assist in developing more successful pedagogic approaches to autism spectrum disorder. Thus, intervention research can inform the debate on relations among neuropsychology of social learning, the role of the MNS, and educational practice in autism.

  1. Autism and the mirror neuron system: insights from learning and teaching

    Science.gov (United States)

    Vivanti, Giacomo; Rogers, Sally J.

    2014-01-01

    Individuals with autism have difficulties in social learning domains which typically involve mirror neuron system (MNS) activation. However, the precise role of the MNS in the development of autism and its relevance to treatment remain unclear. In this paper, we argue that three distinct aspects of social learning are critical for advancing knowledge in this area: (i) the mechanisms that allow for the implicit mapping of and learning from others' behaviour, (ii) the motivation to attend to and model conspecifics and (iii) the flexible and selective use of social learning. These factors are key targets of the Early Start Denver Model, an autism treatment approach which emphasizes social imitation, dyadic engagement, verbal and non-verbal communication and affect sharing. Analysis of the developmental processes and treatment-related changes in these different aspects of social learning in autism can shed light on the nature of the neuropsychological mechanisms underlying social learning and positive treatment outcomes in autism. This knowledge in turn may assist in developing more successful pedagogic approaches to autism spectrum disorder. Thus, intervention research can inform the debate on relations among neuropsychology of social learning, the role of the MNS, and educational practice in autism. PMID:24778379

  2. Bidirectional coupling between astrocytes and neurons mediates learning and dynamic coordination in the brain: a multiple modeling approach.

    Directory of Open Access Journals (Sweden)

    John J Wade

    Full Text Available In recent years, research suggests that astrocyte networks, in addition to nutrient and waste processing functions, regulate both structural and synaptic plasticity. To understand the biological mechanisms that underpin such plasticity requires the development of cell level models that capture the mutual interaction between astrocytes and neurons. This paper presents a detailed model of bidirectional signaling between astrocytes and neurons (the astrocyte-neuron model, or AN model), which yields new insights into the computational role of astrocyte-neuronal coupling. From a set of modeling studies we demonstrate two significant findings. Firstly, that spatial signaling via astrocytes can relay a "learning signal" to remote synaptic sites. Results show that slow inward currents cause synchronized postsynaptic activity in remote neurons and subsequently allow Spike-Timing-Dependent Plasticity based learning to occur at the associated synapses. Secondly, that bidirectional communication between neurons and astrocytes underpins dynamic coordination between neuron clusters. Although our composite AN model is presently applied to simplified neural structures and limited to coordination between localized neurons, the principle (which embodies structural, functional and dynamic complexity), and the modeling strategy may be extended to coordination among remote neuron clusters.
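
The spike-timing-dependent plasticity rule that the astrocyte-relayed "learning signal" enables is conventionally modeled with exponential windows: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. A standard pair-based sketch (amplitudes and time constants are common textbook values, not the paper's):

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012     # LTP/LTD amplitudes (assumptions)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # window time constants in ms (assumptions)

def stdp_dw(dt_ms):
    """Weight change for a spike pair with post-minus-pre timing dt_ms."""
    if dt_ms > 0:    # pre fired before post -> potentiation
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    elif dt_ms < 0:  # post fired before pre -> depression
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
    return 0.0

print(round(stdp_dw(10.0), 5), round(stdp_dw(-10.0), 5))
```

Under this rule, the synchronized postsynaptic activity produced by astrocytic slow inward currents places remote synapses inside the potentiation window, which is how the model's distributed learning signal takes effect.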

  3. Adaptive, fast walking in a biped robot under neuronal control and learning.

    Directory of Open Access Journals (Sweden)

    Poramate Manoonpong

    2007-07-01

    Full Text Available Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested, sensori-motor loops where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot, which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk with a high speed (>3.0 leg lengths/s), self-adapting to minor disturbances, and reacting in a robust way to abruptly induced gait changes. At the same time, it can learn walking on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself, combined with synaptic learning may be a way forward to better understand and solve coordination problems in other complex motor tasks.

  4. Possible Signaling Pathways Mediating Neuronal Calcium Sensor-1-Dependent Spatial Learning and Memory in Mice

    Science.gov (United States)

    Nakamura, Tomoe Y.; Nakao, Shu; Nakajo, Yukako; Takahashi, Jun C.; Wakabayashi, Shigeo; Yanamoto, Hiroji

    2017-01-01

    Intracellular Ca2+ signaling regulates diverse functions of the nervous system. Many of these neuronal functions, including learning and memory, are regulated by neuronal calcium sensor-1 (NCS-1). However, the pathways by which NCS-1 regulates these functions remain poorly understood. Consistent with the findings of previous reports, we revealed that NCS-1 deficient (Ncs1-/-) mice exhibit impaired spatial learning and memory function in the Morris water maze test, although there was little change in their exercise activity, as determined via treadmill analysis. Expression of brain-derived neurotrophic factor (BDNF; a key regulator of memory function) and dopamine was significantly reduced in the Ncs1-/- mouse brain, without changes in the levels of glial cell line-derived neurotrophic factor or nerve growth factor. Although there were no gross structural abnormalities in the hippocampi of Ncs1-/- mice, electron microscopy analysis revealed that the density of large dense core vesicles in CA1 presynaptic neurons, which release BDNF and dopamine, was decreased. Phosphorylation of Ca2+/calmodulin-dependent protein kinase II-α (CaMKII-α), which is known to trigger long-term potentiation and increase BDNF levels, was significantly reduced in the Ncs1-/- mouse brain. Furthermore, high voltage electric potential stimulation, which increases the levels of BDNF and promotes spatial learning, significantly increased the levels of NCS-1 concomitant with phosphorylated CaMKII-α in the hippocampus, suggesting a close relationship between NCS-1 and CaMKII-α. Our findings indicate that NCS-1 may regulate spatial learning and memory function at least in part through activation of CaMKII-α signaling, which may directly or indirectly increase BDNF production. PMID:28122057

  5. Transcranial direct current stimulation modulates neuronal activity and learning in pilot training

    Directory of Open Access Journals (Sweden)

    Jaehoon eChoe

    2016-02-01

    Full Text Available Skill acquisition requires distributed learning both within (online) and across (offline) days to consolidate experiences into newly learned abilities. In particular, piloting an aircraft requires skills developed from extensive training and practice. Here, we tested the hypothesis that transcranial direct current stimulation (tDCS) can modulate neuronal function to improve skill learning and performance during flight simulator training of aircraft landing procedures. Thirty-two right-handed participants consented to participate in four consecutive daily sessions of flight simulation training and received sham or anodal high-definition tDCS to the right dorsolateral prefrontal cortex (DLPFC) or left motor cortex (M1) in a randomized, double-blind experiment. Continuous electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were collected during flight simulation, n-back working memory, and resting-state assessments. tDCS of the right DLPFC increased midline-frontal theta-band activity in flight and n-back working memory training, confirming tDCS-related modulation of brain processes involved in executive function. This modulation corresponded to significantly different online and offline learning rates for working memory accuracy and decreased inter-subject behavioral variability in flight and n-back tasks in the DLPFC stimulation group. Additionally, tDCS of left M1 increased parietal alpha power during flight tasks, and tDCS to the right DLPFC increased midline frontal theta-band power during n-back and flight tasks. These results demonstrate a modulation of group variance in skill acquisition through an increase in learned skill consistency in cognitive and real-world tasks with tDCS. Further, tDCS performance improvements corresponded to changes in electrophysiological and blood-oxygenation activity of the DLPFC and motor cortices, providing a stronger link between modulated neuronal function and behavior.

  6. Chlorogenic acid protection of neuronal nitric oxide synthase-positive neurons in the hippocampus of mice with impaired learning and memory

    Institute of Scientific and Technical Information of China (English)

    Qiuyun Tu; Xiangqi Tang; Zhiping Hu

    2008-01-01

    BACKGROUND: Clinical practice and modern pharmacology have confirmed that chlorogenic acid can ameliorate learning and memory impairments. OBJECTIVE: To observe the effects of chlorogenic acid on neuronal nitric oxide synthase (nNOS)-positive neurons in the mouse hippocampus, and to investigate the mechanisms underlying the beneficial effects of chlorogenic acid on learning and memory. DESIGN, TIME AND SETTING: The present randomized, controlled, neural cell morphological observation was performed at the Institute of Neurobiology, Central South University between January and May 2005. MATERIALS: Forty-eight female, healthy, adult, Kunming mice were included in this study. Learning and memory impairment was induced with an injection of 0.5 μL kainic acid (0.4 mg/mL) into the hippocampus. METHODS: The mice were randomized into three groups (n = 16): model, control, and chlorogenic acid-treated. At 2 days following learning and memory impairment induction, intragastric administration of physiological saline or chlorogenic acid was performed in the model and chlorogenic acid-treated groups, respectively. The control mice were administered 0.5 μL physiological saline into the hippocampus, and 2 days later, they received an intragastric administration of physiological saline. Each mouse received two intragastric administrations (1 mL per administration) per day, for a total of 35 days. MAIN OUTCOME MEASURES: Detection of changes in hippocampal and cerebral cortical nNOS neurons by immunohistochemistry; determination of spatial learning and memory utilizing the Y-maze device. RESULTS: At day 7 and 35 after intervention, there was no significant difference in the number of nNOS-positive neurons in the cerebral cortex between the model, chlorogenic acid, and control groups (P > 0.05). Compared with the control group, the number of nNOS-positive neurons in the hippocampal CA1-4 region was significantly less in the model group (P < 0.05). At day 7 following intervention, the number

  7. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    Science.gov (United States)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three degree of freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.

  8. A comparison of parallelism in interface designs for computer-based learning environments

    NARCIS (Netherlands)

    Min, Rik; Yu, Tao; Spenkelink, Gerd; Vos, Hans

    2004-01-01

    In this paper we discuss an experiment that was carried out with a prototype, designed in conformity with the concept of parallelism and the Parallel Instruction theory (the PI theory). We designed this prototype with five different interfaces, and ran an empirical study in which 18 participants com

  9. TGF-β Signaling in Dopaminergic Neurons Regulates Dendritic Growth, Excitatory-Inhibitory Synaptic Balance, and Reversal Learning

    Directory of Open Access Journals (Sweden)

    Sarah X. Luo

    2016-12-01

    Full Text Available Neural circuits involving midbrain dopaminergic (DA) neurons regulate reward and goal-directed behaviors. Although local GABAergic input is known to modulate DA circuits, the mechanism that controls excitatory/inhibitory synaptic balance in DA neurons remains unclear. Here, we show that DA neurons use autocrine transforming growth factor β (TGF-β) signaling to promote the growth of axons and dendrites. Surprisingly, removing TGF-β type II receptor in DA neurons also disrupts the balance in TGF-β1 expression in DA neurons and neighboring GABAergic neurons, which increases inhibitory input, reduces excitatory synaptic input, and alters phasic firing patterns in DA neurons. Mice lacking TGF-β signaling in DA neurons are hyperactive and exhibit inflexibility in relinquishing learned behaviors and re-establishing new stimulus-reward associations. These results support a role for TGF-β in regulating the delicate balance of excitatory/inhibitory synaptic input in local microcircuits involving DA and GABAergic neurons and its potential contributions to neuropsychiatric disorders.

  10. QSpike Tools: a Generic Framework for Parallel Batch Preprocessing of Extracellular Neuronal Signals Recorded by Substrate Microelectrode Arrays

    Directory of Open Access Journals (Sweden)

    Mufti eMahmud

    2014-03-01

    Full Text Available Micro-Electrode Arrays (MEAs) have emerged as a mature technique to investigate brain (dys)functions in vivo and in in vitro animal models. Often referred to as "smart" Petri dishes, MEAs have demonstrated a great potential particularly for medium-throughput studies in vitro, both in academic and pharmaceutical industrial contexts. Enabling rapid comparison of ionic/pharmacological/genetic manipulations with control conditions, MEAs are often employed to screen compounds by monitoring non-invasively the spontaneous and evoked neuronal electrical activity in longitudinal studies, with relatively inexpensive equipment. However, in order to acquire sufficient statistical significance, recordings last up to tens of minutes and generate a large amount of raw data (e.g., 60 channels/MEA, 16 bits A/D conversion, 20 kHz sampling rate: ~8 GB/MEA/h uncompressed). Thus, when the experimental conditions to be tested are numerous, the availability of fast, standardized, and automated signal preprocessing becomes pivotal for any subsequent analysis and data archiving. To this aim, we developed an in-house cloud-computing system, named QSpike Tools, where CPU-intensive operations, required for preprocessing of each recorded channel (e.g., filtering, multi-unit activity detection, spike-sorting, etc.), are decomposed and batch-queued to a multi-core architecture or to a computer cluster. With the commercial availability of new and inexpensive high-density MEAs, we believe that disseminating QSpike Tools might facilitate its wide adoption and customization, and possibly inspire the creation of community-supported cloud-computing facilities for MEAs users.

  11. QSpike tools: a generic framework for parallel batch preprocessing of extracellular neuronal signals recorded by substrate microelectrode arrays.

    Science.gov (United States)

    Mahmud, Mufti; Pulizzi, Rocco; Vasilaki, Eleni; Giugliano, Michele

    2014-01-01

Micro-Electrode Arrays (MEAs) have emerged as a mature technique to investigate brain (dys)functions in vivo and in in vitro animal models. Often referred to as "smart" Petri dishes, MEAs have demonstrated a great potential particularly for medium-throughput studies in vitro, both in academic and pharmaceutical industrial contexts. Enabling rapid comparison of ionic/pharmacological/genetic manipulations with control conditions, MEAs are employed to screen compounds by monitoring non-invasively the spontaneous and evoked neuronal electrical activity in longitudinal studies, with relatively inexpensive equipment. However, in order to acquire sufficient statistical significance, recordings last up to tens of minutes and generate large amounts of raw data (e.g., 60 channels/MEA, 16-bit A/D conversion, 20 kHz sampling rate: approximately 8 GB/MEA/h uncompressed). Thus, when the experimental conditions to be tested are numerous, the availability of fast, standardized, and automated signal preprocessing becomes pivotal for any subsequent analysis and data archiving. To this aim, we developed an in-house cloud-computing system, named QSpike Tools, where CPU-intensive operations, required for preprocessing of each recorded channel (e.g., filtering, multi-unit activity detection, spike-sorting, etc.), are decomposed and batch-queued to a multi-core architecture or to a computer cluster. With the commercial availability of new and inexpensive high-density MEAs, we believe that disseminating QSpike Tools might facilitate its wide adoption and customization, and inspire the creation of community-supported cloud-computing facilities for MEA users.
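The per-channel decomposition described above is embarrassingly parallel: each electrode trace can be preprocessed independently, so channels can be batch-queued to a pool of worker cores. A minimal sketch of that pattern, with a deliberately crude filter and threshold rule (all names and parameters are our illustrative assumptions, not QSpike Tools code):

```python
import numpy as np
from multiprocessing import Pool

FS = 20_000  # Hz sampling rate, as in the abstract

def preprocess_channel(trace):
    """Per-channel pipeline: crude high-pass + robust threshold detection."""
    kernel = np.ones(64) / 64
    hp = trace - np.convolve(trace, kernel, mode="same")   # remove slow drift
    thr = -4.5 * np.median(np.abs(hp)) / 0.6745            # robust noise estimate
    crossings = np.flatnonzero((hp[1:] < thr) & (hp[:-1] >= thr))
    return crossings / FS                                  # spike times (s)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    channels = [rng.normal(size=FS) for _ in range(8)]     # 1 s of noise/channel
    with Pool(4) as pool:                                  # batch-queue channels
        spike_times = pool.map(preprocess_channel, channels)
```

Because each worker touches only its own trace, the same map-over-channels call scales from a multi-core workstation to a cluster scheduler without changing the per-channel code.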

  12. Whole-brain mapping of neuronal activity in the learned helplessness model of depression

    Directory of Open Access Journals (Sweden)

Yongsoo Kim

    2016-02-01

Full Text Available Some individuals are resilient, whereas others succumb to despair in repeated stressful situations. The neurobiological mechanisms underlying such divergent behavioral responses remain unclear. Here, we employed an automated method for mapping neuronal activity in search of signatures of stress responses in the entire mouse brain. We used serial two-photon tomography to detect expression of c-FosGFP, a marker of neuronal activation, in c-fosGFP transgenic mice subjected to the learned helplessness (LH) procedure, a widely used model of stress-induced depression-like phenotype in laboratory animals. We found that mice showing helpless behavior had an overall brain-wide reduction in the level of neuronal activation compared with mice showing resilient behavior, with the exception of a few brain areas, including the locus coeruleus, that were more activated in the helpless mice. In addition, the helpless mice showed a strong trend of having higher similarity in whole-brain activity profile among individuals, suggesting that helplessness is represented by a more stereotypic brain-wide activation pattern. This latter effect was confirmed in rats subjected to the LH procedure, using 2-deoxy-2[18F]fluoro-D-glucose positron emission tomography to assess neural activity. Our findings reveal distinct brain activity markings that correlate with adaptive and maladaptive behavioral responses to stress, and provide a framework for further studies investigating the contribution of specific brain regions to maladaptive stress responses.

  13. NEURONAS ESPEJO Y EL APRENDIZAJE EN ANESTESIA Learning anaesthesia and mirror neurons

    Directory of Open Access Journals (Sweden)

    John Bautista

    2011-12-01

Full Text Available Mirror neurons were first described around 1990 in primates of the species Macaca nemestrina by the neurophysiologist Giacomo Rizzolatti and his group at the University of Parma, Italy. They are motor neurons that become activated when an individual observes the specific action for which they are predetermined, without generating any motor activity. These neurons are currently thought to participate in adaptation to the social environment, since they allow an observer to understand not only the actions but also the intentions of other individuals. They have been ascribed a role in simple learning through observation and imitation, which can be exploited in the teaching of anesthesiology.

  14. Increased entorhinal-prefrontal theta synchronization parallels decreased entorhinal-hippocampal theta synchronization during learning and consolidation of associative memory.

    Directory of Open Access Journals (Sweden)

Kaori Takehara-Nishiuchi

    2012-01-01

Full Text Available Memories are thought to be encoded as a distributed representation in the neocortex. The medial prefrontal cortex (mPFC) has been shown to support the expression of memories that initially depend on the hippocampus (HPC), yet the mechanisms by which the HPC and mPFC access the distributed representations in the neocortex are unknown. By measuring phase synchronization of local field potential (LFP) oscillations, we found that learning initiated changes in neuronal communication of the HPC and mPFC with the lateral entorhinal cortex (LEC), an area that is connected with many other neocortical regions. LFPs were recorded simultaneously from the three brain regions while rats formed an association between an auditory stimulus (CS) and eyelid stimulation (US) in a trace eyeblink conditioning paradigm, as well as during retention one month following learning. Over the course of learning, theta oscillations in the LEC and mPFC became strongly synchronized following the presentation of the CS on trials in which rats exhibited a conditioned response (CR), and this strengthened synchronization was also observed during retention one month after learning. In contrast, CS-evoked theta synchronization between the LEC and HPC decreased with learning. Our results indicate that communication between the LEC and mPFC is strengthened with learning whereas communication between the LEC and HPC is concomitantly weakened; enhanced LEC-mPFC communication may thus be a key process in the theoretically proposed neocortical reorganization that accompanies encoding and consolidation of a memory.
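A phase-synchronization measure of the kind used here can be sketched in a few lines: extract the instantaneous phase of each band-limited signal from its analytic signal, then compute the phase-locking value (PLV) across time. This is our own numpy-only illustration, not the study's analysis pipeline; signal names and frequencies are assumed.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (same construction as scipy.signal.hilbert)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spectrum * h)

def plv(x, y):
    """Phase-locking value: 1 = constant phase lag, near 0 = no phase relation."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

fs, f_theta = 1000, 7                          # Hz (illustrative values)
t = np.arange(0, 2, 1 / fs)
lfp_a = np.sin(2 * np.pi * f_theta * t)        # e.g. an "LEC" theta oscillation
lfp_b = np.sin(2 * np.pi * f_theta * t + 0.8)  # e.g. "mPFC", with a fixed lag
```

Two signals with a constant theta-band phase lag, as in the synchronized LEC-mPFC case, give a PLV near 1; uncorrelated phases drive it toward 0.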

  15. Sensory representation and learning-related plasticity in mushroom body extrinsic feedback neurons of the protocerebral tract.

    Science.gov (United States)

    Haehnel, Melanie; Menzel, Randolf

    2010-01-01

Gamma-aminobutyric acid immunoreactive feedback neurons of the protocerebral tract are a major component of the honeybee mushroom body. They have been shown to be subject to learning-related plasticity and provide putative inhibitory input to Kenyon cells and the pedunculus extrinsic neuron, PE1. We hypothesize that learning-related modulation in these neurons is mediated by varying the amount of inhibition provided by feedback neurons. We performed Ca(2+) imaging recordings of populations of neurons of the protocerebral-calycal tract (PCT) while the bees were conditioned in an appetitive olfactory paradigm and their behavioral responses were quantified using electromyographic recordings from M17, the muscle that controls the proboscis extension response. The results corroborate findings from electrophysiological studies showing that PCT neurons respond to sucrose and odor stimuli. The odor responses are concentration dependent. Odor and sucrose responses are modulated by repeated stimulus presentations. Furthermore, animals that learned to associate an odor with sucrose reward responded to the repeated presentations of the rewarded odor with less depression than they did to an unrewarded and a control odor.

  16. Hippocampal neurons exposed to the environmental contaminants methylmercury and polychlorinated biphenyls undergo cell death via parallel activation of calpains and lysosomal proteases.

    Science.gov (United States)

    Tofighi, Roshan; Johansson, Carolina; Goldoni, Matteo; Ibrahim, Wan Norhamidah Wan; Gogvadze, Vladimir; Mutti, Antonio; Ceccatelli, Sandra

    2011-01-01

    Methylmercury (MeHg) and polychlorinated biphenyls (PCBs) are widespread environmental pollutants commonly found as contaminants in the same food sources. Even though their neurotoxic effects are established, the mechanisms of action are not fully understood. In the present study, we have used the mouse hippocampal neuronal cell line HT22 to investigate the mechanisms of neuronal death induced by MeHg, PCB 153, and PCB 126, alone or in combination. All chemicals induced cell death with morphological changes compatible with either apoptosis or necrosis. Mitochondrial functions were impaired as shown by the significant decrease in mitochondrial Ca²+ uptake capacity and ATP levels. MeHg, but not the PCBs, induced loss of mitochondrial membrane potential and release of cytochrome c into the cytosol. Also, pre-treatment with the antioxidant MnTBAP was protective only against cell death induced by MeHg. While caspase activation was absent, the Ca²+-dependent proteases calpains were activated after exposure to MeHg or the selected PCBs. Furthermore, lysosomal disruption was observed in the exposed cells. Accordingly, pre-treatment with the calpain specific inhibitor PD150606 and/or the cathepsin D inhibitor Pepstatin protected against the cytotoxicity of MeHg and PCBs, and the protection was significantly enhanced when the two inhibitors were combined. Simultaneous exposures to lower doses of MeHg and PCBs suggested mostly antagonistic interactions. Taken together, these data indicate that MeHg and PCBs induce caspase-independent cell death via parallel activation of calpains and lysosomal proteases, and that in this model oxidative stress does not play a major role in PCB toxicity.

  17. Pontomesencephalic Tegmental Afferents to VTA Non-dopamine Neurons Are Necessary for Appetitive Pavlovian Learning

    Directory of Open Access Journals (Sweden)

    Hau-Jie Yau

    2016-09-01

Full Text Available The ventral tegmental area (VTA) receives phenotypically distinct innervations from the pedunculopontine tegmental nucleus (PPTg). While PPTg-to-VTA inputs are thought to play a critical role in stimulus-reward learning, direct evidence linking PPTg-to-VTA phenotypically distinct inputs in the learning process remains lacking. Here, we used optogenetic approaches to investigate the functional contribution of PPTg excitatory and inhibitory inputs to the VTA in appetitive Pavlovian conditioning. We show that photoinhibition of PPTg-to-VTA cholinergic or glutamatergic inputs during cue presentation dampens the development of anticipatory approach responding to the food receptacle during the cue. Furthermore, we employed in vivo optetrode recordings to show that photoinhibition of PPTg cholinergic or glutamatergic inputs significantly decreases VTA non-dopamine (non-DA) neural activity. Consistently, photoinhibition of VTA non-DA neurons disrupts the development of cue-elicited anticipatory approach responding. Taken together, our study reveals a crucial regulatory mechanism by PPTg excitatory inputs onto VTA non-DA neurons during appetitive Pavlovian conditioning.

  18. Pontomesencephalic Tegmental Afferents to VTA Non-dopamine Neurons Are Necessary for Appetitive Pavlovian Learning.

    Science.gov (United States)

    Yau, Hau-Jie; Wang, Dong V; Tsou, Jen-Hui; Chuang, Yi-Fang; Chen, Billy T; Deisseroth, Karl; Ikemoto, Satoshi; Bonci, Antonello

    2016-09-01

    The ventral tegmental area (VTA) receives phenotypically distinct innervations from the pedunculopontine tegmental nucleus (PPTg). While PPTg-to-VTA inputs are thought to play a critical role in stimulus-reward learning, direct evidence linking PPTg-to-VTA phenotypically distinct inputs in the learning process remains lacking. Here, we used optogenetic approaches to investigate the functional contribution of PPTg excitatory and inhibitory inputs to the VTA in appetitive Pavlovian conditioning. We show that photoinhibition of PPTg-to-VTA cholinergic or glutamatergic inputs during cue presentation dampens the development of anticipatory approach responding to the food receptacle during the cue. Furthermore, we employed in vivo optetrode recordings to show that photoinhibition of PPTg cholinergic or glutamatergic inputs significantly decreases VTA non-dopamine (non-DA) neural activity. Consistently, photoinhibition of VTA non-DA neurons disrupts the development of cue-elicited anticipatory approach responding. Taken together, our study reveals a crucial regulatory mechanism by PPTg excitatory inputs onto VTA non-DA neurons during appetitive Pavlovian conditioning.

  19. Scalable, incremental learning with MapReduce parallelization for cell detection in high-resolution 3D microscopy data

    KAUST Repository

    Sung, Chul

    2013-08-01

    Accurate estimation of neuronal count and distribution is central to the understanding of the organization and layout of cortical maps in the brain, and changes in the cell population induced by brain disorders. High-throughput 3D microscopy techniques such as Knife-Edge Scanning Microscopy (KESM) are enabling whole-brain survey of neuronal distributions. Data from such techniques pose serious challenges to quantitative analysis due to the massive, growing, and sparsely labeled nature of the data. In this paper, we present a scalable, incremental learning algorithm for cell body detection that can address these issues. Our algorithm is computationally efficient (linear mapping, non-iterative) and does not require retraining (unlike gradient-based approaches) or retention of old raw data (unlike instance-based learning). We tested our algorithm on our rat brain Nissl data set, showing superior performance compared to an artificial neural network-based benchmark, and also demonstrated robust performance in a scenario where the data set is rapidly growing in size. Our algorithm is also highly parallelizable due to its incremental nature, and we demonstrated this empirically using a MapReduce-based implementation of the algorithm. We expect our scalable, incremental learning approach to be widely applicable to medical imaging domains where there is a constant flux of new data. © 2013 IEEE.
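The properties the abstract emphasizes (linear mapping, non-iterative, no retraining, no retained raw data, MapReduce-parallelizable) are exactly those of learners built on additive sufficient statistics. The following is a hypothetical sketch of that idea, not the paper's actual algorithm: mappers compress each data chunk into fixed-size summaries, the reducer sums them, and a single closed-form solve produces the model; a newly arrived chunk only adds to the sums.

```python
import numpy as np

def map_chunk(X, y):
    """Mapper: compress a data chunk into fixed-size sufficient statistics."""
    return X.T @ X, X.T @ y

def reduce_stats(stats):
    """Reducer: sufficient statistics are additive across chunks."""
    XtX = sum(s[0] for s in stats)
    Xty = sum(s[1] for s in stats)
    return XtX, Xty

def solve(XtX, Xty, ridge=1e-6):
    """Closed-form (non-iterative) solve; no raw data or retraining needed."""
    return np.linalg.solve(XtX + ridge * np.eye(XtX.shape[0]), Xty)

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])   # hypothetical detector weights
chunks = []
for _ in range(4):                    # a growing stream of data chunks
    X = rng.normal(size=(100, 3))
    chunks.append(map_chunk(X, X @ w_true))

w = solve(*reduce_stats(chunks))      # recovers w_true
```

When the data set grows, only the new chunk is mapped and its statistics added; old raw data never needs to be revisited, which is what makes the scheme incremental and cluster-friendly.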

  20. Synaptic Ensemble Underlying the Selection and Consolidation of Neuronal Circuits during Learning

    Science.gov (United States)

    Hoshiba, Yoshio; Wada, Takeyoshi; Hayashi-Takagi, Akiko

    2017-01-01

Memories are crucial to the cognitive essence of who we are as human beings. Accumulating evidence has suggested that memories are stored as a subset of neurons that probably fire together in the same ensemble. Such formation of cell ensembles must meet contradictory requirements of being plastic and responsive during learning, but also stable in order to maintain the memory. Although synaptic potentiation is presumed to be the cellular substrate for this process, the link between the two remains correlational. With the application of the latest optogenetic tools, it has been possible to collect direct evidence of the contributions of synaptic potentiation to the formation and consolidation of cell ensembles in a learning-task-specific manner. In this review, we summarize the current view of the causative role of synaptic plasticity as the cellular mechanism underlying the encoding of memory and recalling of learned memories. In particular, we will be focusing on the latest optoprobe developed for the visualization of such “synaptic ensembles.” We further discuss how a new synaptic ensemble could contribute to the formation of cell ensembles during learning and memory. With the development and application of novel research tools in the future, studies on synaptic ensembles will pioneer new discoveries, eventually leading to a comprehensive understanding of how the brain works. PMID:28303092

  1. Neuron self-learning PSD control for backside width of weld pool in pulsed GTAW with wire filler

    Institute of Scientific and Technical Information of China (English)

    张广军; 陈善本; 吴林

    2003-01-01

In this paper, intelligent control of the weld pool shape was studied. A neuron self-learning PSD controller for the backside width of the weld pool in pulsed GTAW with wire filler was designed. The PSD control algorithm was analyzed, simulation experiments were carried out in MATLAB, and validation experiments on workpieces with varied heat sinking and varied gaps were successfully implemented. The results show that the neuron self-learning PSD control method achieves good control performance under different set values and conditions, and is suitable for welding processes in which the structure and coefficients of the control model vary.
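A single-neuron self-learning PSD controller of this general family can be sketched as an incremental controller whose proportional, sum, and difference error inputs are combined through weights that adapt online. The plant model, gain, and learning rates below are our own assumptions for illustration, not the paper's identified welding model:

```python
# Single-neuron self-learning PSD sketch. K is the neuron gain; ETA holds the
# learning rates for the P, S (integral-acting), and D inputs. All values are
# illustrative assumptions.
K, ETA = 0.4, (0.4, 0.35, 0.3)

def neuron_psd(setpoint, steps=400):
    w = [0.3, 0.3, 0.3]                 # initial input weights
    y = u = 0.0
    e = [0.0, 0.0, 0.0]                 # e(k), e(k-1), e(k-2)
    for _ in range(steps):
        e = [setpoint - y, e[0], e[1]]
        x = [e[0] - e[1],               # proportional input (change of error)
             e[0],                      # sum/integral-acting input
             e[0] - 2 * e[1] + e[2]]    # difference input (second order)
        s = sum(abs(wi) for wi in w)    # normalize weights to bound the gains
        u += K * sum(wi / s * xi for wi, xi in zip(w, x))
        # Hebbian-style self-learning: adapt each weight from the error, the
        # control output, and that weight's own input signal.
        w = [wi + eta * e[0] * u * xi for wi, eta, xi in zip(w, ETA, x)]
        y = 0.8 * y + 0.2 * u           # first-order plant stand-in (DC gain 1)
    return y
```

Because the weights are normalized before use, adaptation reshapes the P/S/D balance without letting any effective gain exceed K, which is what lets the controller retune itself across different set values and plant conditions.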

  2. Striatal and Tegmental Neurons Code Critical Signals for Temporal-Difference Learning of State Value in Domestic Chicks

    Science.gov (United States)

    Wen, Chentao; Ogura, Yukiko; Matsushima, Toshiya

    2016-01-01

    To ensure survival, animals must update the internal representations of their environment in a trial-and-error fashion. Psychological studies of associative learning and neurophysiological analyses of dopaminergic neurons have suggested that this updating process involves the temporal-difference (TD) method in the basal ganglia network. However, the way in which the component variables of the TD method are implemented at the neuronal level is unclear. To investigate the underlying neural mechanisms, we trained domestic chicks to associate color cues with food rewards. We recorded neuronal activities from the medial striatum or tegmentum in a freely behaving condition and examined how reward omission changed neuronal firing. To compare neuronal activities with the signals assumed in the TD method, we simulated the behavioral task in the form of a finite sequence composed of discrete steps of time. The three signals assumed in the simulated task were the prediction signal, the target signal for updating, and the TD-error signal. In both the medial striatum and tegmentum, the majority of recorded neurons were categorized into three types according to their fitness for three models, though these neurons tended to form a continuum spectrum without distinct differences in the firing rate. Specifically, two types of striatal neurons successfully mimicked the target signal and the prediction signal. A linear summation of these two types of striatum neurons was a good fit for the activity of one type of tegmental neurons mimicking the TD-error signal. The present study thus demonstrates that the striatum and tegmentum can convey the signals critically required for the TD method. Based on the theoretical and neurophysiological studies, together with tract-tracing data, we propose a novel model to explain how the convergence of signals represented in the striatum could lead to the computation of TD error in tegmental dopaminergic neurons. PMID:27877100

  3. Striatal and Tegmental Neurons Code Critical Signals for Temporal-Difference Learning of State Value in Domestic Chicks

    Directory of Open Access Journals (Sweden)

    Chentao Wen

    2016-11-01

Full Text Available To ensure survival, animals must update the internal representations of their environment in a trial-and-error fashion. Psychological studies of associative learning and neurophysiological analyses of dopaminergic neurons have suggested that this updating process involves the temporal-difference (TD) method in the basal ganglia network. However, the way in which the component variables of the TD method are implemented at the neuronal level is unclear. To investigate the underlying neural mechanisms, we trained domestic chicks to associate color cues with food rewards. We recorded neuronal activities from the medial striatum or tegmentum in a freely behaving condition and examined how reward omission changed neuronal firing. To compare neuronal activities with the signals assumed in the TD method, we simulated the behavioral task in the form of a finite sequence composed of discrete steps of time. The three signals assumed in the simulated task were the prediction signal, the target signal for updating, and the TD-error signal. In both the medial striatum and tegmentum, the majority of recorded neurons were categorized into three types according to their fitness for three models, though these neurons tended to form a continuum spectrum without distinct differences in the firing rate. Specifically, two types of striatal neurons successfully mimicked the target signal and the prediction signal. A linear summation of these two types of striatum neurons was a good fit for the activity of one type of tegmental neurons mimicking the TD-error signal. The present study thus demonstrates that the striatum and tegmentum can convey the signals critically required for the TD method. Based on the theoretical and neurophysiological studies, together with tract-tracing data, we propose a novel model to explain how the convergence of signals represented in the striatum could lead to the computation of TD error in tegmental dopaminergic neurons.
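The three signals named in this abstract map directly onto tabular TD(0). A small illustrative simulation (task layout and parameters are ours, not the study's) of a cue-to-reward sequence of discrete time steps:

```python
# Tabular TD(0) sketch of the three signals above: prediction, update target,
# and TD error. Cue at step 0, reward delivered at the last step.
ALPHA, GAMMA = 0.1, 0.9          # learning rate, discount factor
N_STEPS = 5

values = [0.0] * (N_STEPS + 1)   # V(s); values[N_STEPS] is the terminal state

def run_trial(rewarded):
    for t in range(N_STEPS):
        reward = 1.0 if (rewarded and t == N_STEPS - 1) else 0.0
        prediction = values[t]                    # "prediction signal"
        target = reward + GAMMA * values[t + 1]   # "target signal for updating"
        td_error = target - prediction            # "TD-error signal"
        values[t] += ALPHA * td_error

for _ in range(200):
    run_trial(rewarded=True)
# values[0] approaches GAMMA**4, the discounted reward prediction at cue onset.
```

Omitting the reward on a trial makes the final-step target drop to GAMMA * 0, producing the negative TD error that the recorded reward-omission responses are compared against.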

  4. New Hippocampal Neurons Are Not Obligatory for Memory Formation; Cyclin D2 Knockout Mice with No Adult Brain Neurogenesis Show Learning

    Science.gov (United States)

    Jaholkowski, Piotr; Kiryk, Anna; Jedynak, Paulina; Abdallah, Nada M. Ben; Knapska, Ewelina; Kowalczyk, Anna; Piechal, Agnieszka; Blecharz-Klin, Kamilla; Figiel, Izabela; Lioudyno, Victoria; Widy-Tyszkiewicz, Ewa; Wilczynski, Grzegorz M.; Lipp, Hans-Peter; Kaczmarek, Leszek; Filipkowski, Robert K.

    2009-01-01

    The role of adult brain neurogenesis (generating new neurons) in learning and memory appears to be quite firmly established in spite of some criticism and lack of understanding of what the new neurons serve the brain for. Also, the few experiments showing that blocking adult neurogenesis causes learning deficits used irradiation and various drugs…

  5. Diverse assessment and active student engagement sustain deep learning: A comparative study of outcomes in two parallel introductory biochemistry courses.

    Science.gov (United States)

    Bevan, Samantha J; Chan, Cecilia W L; Tanner, Julian A

    2014-01-01

    Although there is increasing evidence for a relationship between courses that emphasize student engagement and achievement of student deep learning, there is a paucity of quantitative comparative studies in a biochemistry and molecular biology context. Here, we present a pedagogical study in two contrasting parallel biochemistry introductory courses to compare student surface and deep learning. Surface and deep learning were measured quantitatively by a study process questionnaire at the start and end of the semester, and qualitatively by questionnaires and interviews with students. In the traditional lecture/examination based course, there was a dramatic shift to surface learning approaches through the semester. In the course that emphasized student engagement and adopted multiple forms of assessment, a preference for deep learning was sustained with only a small reduction through the semester. Such evidence for the benefits of implementing student engagement and more diverse non-examination based assessment has important implications for the design, delivery, and renewal of introductory courses in biochemistry and molecular biology.

  6. Neuroserpin, a brain-associated inhibitor of tissue plasminogen activator is localized primarily in neurons. Implications for the regulation of motor learning and neuronal survival.

    Science.gov (United States)

    Hastings, G A; Coleman, T A; Haudenschild, C C; Stefansson, S; Smith, E P; Barthlow, R; Cherry, S; Sandkvist, M; Lawrence, D A

    1997-12-26

A cDNA clone for the serine proteinase inhibitor (serpin), neuroserpin, was isolated from a human whole brain cDNA library, and recombinant protein was expressed in insect cells. The purified protein is an efficient inhibitor of tissue type plasminogen activator (tPA), having an apparent second-order rate constant of 6.2 x 10(5) M-1 s-1 for the two-chain form. However, unlike other known plasminogen activator inhibitors, neuroserpin is a more effective inactivator of tPA than of urokinase-type plasminogen activator. Neuroserpin also effectively inhibited trypsin and nerve growth factor-gamma but reacted only slowly with plasmin and thrombin. Northern blot analysis showed a 1.8 kilobase messenger RNA expressed predominantly in adult human brain and spinal cord, and immunohistochemical studies of normal mouse tissue detected strong staining primarily in neuronal cells with occasionally positive microglial cells. Staining was most prominent in the ependymal cells of the choroid plexus, Purkinje cells of the cerebellum, select neurons of the hypothalamus and hippocampus, and in the myelinated axons of the commissura. Expression of tPA within these regions is reported to be high and has previously been correlated with both motor learning and neuronal survival. Taken together, these data suggest that neuroserpin is likely to be a critical regulator of tPA activity in the central nervous system, and as such may play an important role in neuronal plasticity and/or maintenance.

  7. Do premotor interneurons act in parallel on spinal motoneurons and on dorsal horn spinocerebellar and spinocervical tract neurons in the cat?

    Science.gov (United States)

    Krutki, Piotr; Jelen, Sabina; Jankowska, Elzbieta

    2011-04-01

    It has previously been established that ventral spinocerebellar tract (VSCT) neurons and dorsal spinocerebellar tract neurons located in Clarke's column (CC DSCT neurons) forward information on actions of premotor interneurons in reflex pathways from muscle afferents on α-motoneurons. Whether DSCT neurons located in the dorsal horn (dh DSCT neurons) and spinocervical tract (SCT) neurons are involved in forwarding similar feedback information has not yet been investigated. The aim of the present study was therefore to examine the input from premotor interneurons to these neurons. Electrical stimuli were applied within major hindlimb motor nuclei to activate axon-collaterals of interneurons projecting to these nuclei, and intracellular records were obtained from dh DSCT and SCT neurons. Direct actions of the stimulated interneurons were differentiated from indirect actions by latencies of postsynaptic potentials evoked by intraspinal stimuli and by the absence or presence of temporal facilitation. Direct actions of premotor interneurons were found in a smaller proportion of dh DSCT than of CC DSCT neurons. However, they were evoked by both excitatory and inhibitory interneurons, whereas only inhibitory premotor interneurons were previously found to affect CC DSCT neurons [as indicated by monosynaptic excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs) in dh DSCT and only IPSPs in CC DSCT neurons]. No effects of premotor interneurons were found in SCT neurons, since monosynaptic EPSPs or IPSPs were only evoked in them by stimuli applied outside motor nuclei. The study thus reveals a considerable differentiation of feedback information provided by different populations of ascending tract neurons.

  8. Neuronal Nitric-Oxide Synthase Deficiency Impairs the Long-Term Memory of Olfactory Fear Learning and Increases Odor Generalization

    Science.gov (United States)

    Pavesi, Eloisa; Heldt, Scott A.; Fletcher, Max L.

    2013-01-01

    Experience-induced changes associated with odor learning are mediated by a number of signaling molecules, including nitric oxide (NO), which is predominantly synthesized by neuronal nitric oxide synthase (nNOS) in the brain. In the current study, we investigated the role of nNOS in the acquisition and retention of conditioned olfactory fear. Mice…

  9. The Growth of m-Learning and the Growth of Mobile Computing: Parallel Developments

    Science.gov (United States)

    Caudill, Jason G.

    2007-01-01

    m-Learning is made possible by the existence and application of mobile hardware and networking technology. By exploring the capabilities of these technologies, it is possible to construct a picture of how different components of m-Learning can be implemented. This paper will explore the major technologies currently in use: portable digital…

  10. The Value of Aesthetic Teacher Learning: Drawing a Parallel between the Teaching and Writing Process

    Science.gov (United States)

    Yoo, Joann

    2014-01-01

    Although teacher learning has often been overlooked in discussions surrounding classroom practice, it is believed that learning cultivates the resilience and vitality needed for teachers to thrive. Teachers have often been required to demonstrate a high level of skill and professionalism as they orchestrate tasks that maximise student engagement.…

  11. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models.

    Science.gov (United States)

    Hanuschkin, A; Ganguli, S; Hahnloser, R H R

    2013-01-01

Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including necessity of sensorimotor gating and selectivity of auditory responses to bird's own song (BOS) stimuli.
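The core claim, that random motor exploration plus a Hebbian rule with competitive stabilization yields a causal inverse, can be illustrated with a toy construction of our own (a simplification of the paper's eligibility-weighted rule: a Hebbian correlation term plus a heterosynaptic-style decay, which together amount to an LMS update):

```python
import numpy as np

rng = np.random.default_rng(0)
N_MOTOR, N_SENSORY = 4, 8
F = rng.normal(size=(N_SENSORY, N_MOTOR)) / np.sqrt(N_MOTOR)  # fixed forward map
W = np.zeros((N_MOTOR, N_SENSORY))                            # learned inverse

eta = 0.01
for _ in range(5000):
    m = rng.normal(size=N_MOTOR)   # random motor exploration ("motor babbling")
    s = F @ m                      # sensory consequence of the motor act
    # Hebbian term correlates the motor command with its sensory outcome;
    # the decay term plays the role of heterosynaptic competition.
    W += eta * (np.outer(m, s) - W @ np.outer(s, s))

# W now approximates a causal inverse: it maps a sensory pattern back to the
# motor command that caused it, i.e. W @ F is close to the identity.
```

With a random (uncorrelated) motor code the learned weights recover the motor causes of arbitrary sensory targets; a stereotyped (low-rank) exploration code would leave W determined only on the explored subspace, mirroring the paper's contrast between causal and predictive inverses.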

  12. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models

    Directory of Open Access Journals (Sweden)

    Alexander Hanuschkin

    2013-06-01

    Full Text Available Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, as they allow the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including the necessity of sensorimotor gating and the selectivity of auditory responses to bird's own song (BOS) stimuli.
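
    The eligibility-weighted Hebbian rule described above can be illustrated with a toy sketch. Everything here is an illustrative assumption, not taken from the paper: the "world" is reduced to an unknown random linear forward map, dimensions and learning rate are arbitrary, and heterosynaptic competition is approximated by bounding each motor neuron's incoming weight norm. Random motor exploration plus an outer-product Hebbian update then yields sensory-to-motor weights that act as a causal inverse model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_motor, n_sensory, T = 20, 20, 5000

# Hypothetical stand-in for the "world": motor commands cause sensory
# activity through an unknown linear forward map F.
F = rng.normal(size=(n_sensory, n_motor)) / np.sqrt(n_motor)

W = np.zeros((n_motor, n_sensory))  # sensory-to-motor weights: the inverse model
eta = 0.01

for _ in range(T):
    m = rng.normal(size=n_motor)     # random (non-stereotyped) motor exploration
    s = F @ m                        # sensory consequence of the motor act
    W += eta * np.outer(m, s)        # Hebbian: postsynaptic motor x presynaptic sensory
    # heterosynaptic competition: bound each motor neuron's incoming weights
    W /= np.maximum(1.0, np.linalg.norm(W, axis=1, keepdims=True))

# Causal inverse: map a desired sensory target back to a motor command,
# then check that executing that command reproduces the target.
m_true = rng.normal(size=n_motor)
s_target = F @ m_true
m_hat = W @ s_target
corr = float(np.corrcoef(s_target, F @ m_hat)[0, 1])
print(round(corr, 2))
```

    Because exploration is random (white), the learned W aligns with the transpose of the forward map, so imitation of arbitrary sensory targets succeeds up to noise; a stereotyped motor code would break this alignment, as the abstract argues.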

  13. Learning tasks as a possible treatment for DNA lesions induced by oxidative stress in hippocampal neurons

    Institute of Scientific and Technical Information of China (English)

    Drago Crneci; Radu Silaghi-Dumitrescu

    2013-01-01

    Reactive oxygen species have been implicated in conditions ranging from cardiovascular dysfunction, arthritis, and cancer to aging and age-related disorders. Organisms have developed several pathways to counteract these effects, with base excision repair being responsible for repairing one of the major base lesions (8-oxoG) in all organisms. Epidemiological evidence suggests that cognitive stimulation makes the brain more resilient to damage or degeneration. Recent studies have linked enriched environments to a reduction of oxidative stress in neurons of mice with Alzheimer's disease-like pathology, but given its complexity it is not clear which specific aspect of the enriched environment has therapeutic effects. Studies in molecular biology have shown that the protein p300, a transcription co-activator required for the consolidation of memories during specific learning tasks, is also involved in DNA replication and repair, playing a central role in the long-patch pathway of base excision repair. Based on this evidence, we propose that learning tasks such as novel object recognition could be tested as possible methods of facilitating base excision repair, hence inducing DNA repair in hippocampal neurons. If this method proves effective, it could be the starting point for designing similar tasks for humans, as a behavioral therapeutic complement to classical drug-based therapy in treating neurodegenerative disorders. This review presents the current status of therapeutic methods used in treating neurodegenerative diseases induced by reactive oxygen species and proposes a new approach based on existing data.

  14. Synaptic Neurotransmission Depression in Ventral Tegmental Dopamine Neurons and Cannabinoid-Associated Addictive Learning

    Science.gov (United States)

    Liu, Zhiqiang; Han, Jing; Jia, Lintao; Maillet, Jean-Christian; Bai, Guang; Xu, Lin; Jia, Zhengping; Zheng, Qiaohua; Zhang, Wandong; Monette, Robert; Merali, Zul; Zhu, Zhou; Wang, Wei; Ren, Wei; Zhang, Xia

    2010-01-01

    Drug addiction is an association of compulsive drug use with long-term associative learning/memory. Multiple forms of learning/memory are primarily subserved by activity- or experience-dependent synaptic long-term potentiation (LTP) and long-term depression (LTD). Recent studies suggest LTP expression in locally activated glutamate synapses onto dopamine neurons (local Glu-DA synapses) of the midbrain ventral tegmental area (VTA) following a single or chronic exposure to many drugs of abuse, whereas a single exposure to cannabinoid did not significantly affect synaptic plasticity at these synapses. It is unknown whether chronic exposure to cannabis (marijuana or cannabinoids), the most commonly used illicit drug worldwide, induces LTP or LTD at these synapses. More importantly, whether such alterations in VTA synaptic plasticity causatively contribute to drug addictive behavior has not previously been addressed. Here we show in rats that chronic cannabinoid exposure activates VTA cannabinoid CB1 receptors to induce transient neurotransmission depression at VTA local Glu-DA synapses through activation of NMDA receptors and subsequent endocytosis of AMPA receptor GluR2 subunits. A GluR2-derived peptide blocks cannabinoid-induced VTA synaptic depression and conditioned place preference, i.e., learning to associate drug exposure with environmental cues. These data not only provide the first evidence, to our knowledge, that NMDA receptor-dependent synaptic depression at VTA dopamine circuitry requires GluR2 endocytosis, but also suggest an essential contribution of such synaptic depression to cannabinoid-associated addictive learning, in addition to pointing to novel pharmacological strategies for the treatment of cannabis addiction. PMID:21187978

  15. Synaptic neurotransmission depression in ventral tegmental dopamine neurons and cannabinoid-associated addictive learning.

    Directory of Open Access Journals (Sweden)

    Zhiqiang Liu

    Full Text Available Drug addiction is an association of compulsive drug use with long-term associative learning/memory. Multiple forms of learning/memory are primarily subserved by activity- or experience-dependent synaptic long-term potentiation (LTP) and long-term depression (LTD). Recent studies suggest LTP expression in locally activated glutamate synapses onto dopamine neurons (local Glu-DA synapses) of the midbrain ventral tegmental area (VTA) following a single or chronic exposure to many drugs of abuse, whereas a single exposure to cannabinoid did not significantly affect synaptic plasticity at these synapses. It is unknown whether chronic exposure to cannabis (marijuana or cannabinoids), the most commonly used illicit drug worldwide, induces LTP or LTD at these synapses. More importantly, whether such alterations in VTA synaptic plasticity causatively contribute to drug addictive behavior has not previously been addressed. Here we show in rats that chronic cannabinoid exposure activates VTA cannabinoid CB1 receptors to induce transient neurotransmission depression at VTA local Glu-DA synapses through activation of NMDA receptors and subsequent endocytosis of AMPA receptor GluR2 subunits. A GluR2-derived peptide blocks cannabinoid-induced VTA synaptic depression and conditioned place preference, i.e., learning to associate drug exposure with environmental cues. These data not only provide the first evidence, to our knowledge, that NMDA receptor-dependent synaptic depression at VTA dopamine circuitry requires GluR2 endocytosis, but also suggest an essential contribution of such synaptic depression to cannabinoid-associated addictive learning, in addition to pointing to novel pharmacological strategies for the treatment of cannabis addiction.

  16. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    OpenAIRE

    Yang Liu; Jie Yang; Yuan Huang; Lixiong Xu; Siguang Li; Man Qi

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing mo...
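
    The map/reduce split the abstract describes can be sketched in miniature: a map step computes per-shard gradients, and a reduce step aggregates them before a single parameter update. This toy version is only an illustration under invented assumptions, not the paper's system: it uses a one-layer logistic unit, synthetic separable data, and simulates the four "workers" serially in one process.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)

# Synthetic, linearly separable binary data for a single logistic unit.
X = rng.normal(size=(1000, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)

def grad_map(shard, w):
    """Map step: one worker computes the logistic-loss gradient on its shard."""
    Xs, ys = shard
    p = 1.0 / (1.0 + np.exp(-(Xs @ w)))
    return Xs.T @ (p - ys)

def grad_reduce(a, b):
    """Reduce step: sum the partial gradients from all workers."""
    return a + b

shards = [(X[i::4], y[i::4]) for i in range(4)]  # 4 simulated workers
w = np.zeros(5)
for _ in range(200):
    g = reduce(grad_reduce, (grad_map(s, w) for s in shards)) / len(X)
    w -= 1.0 * g                                 # update on the aggregated gradient

acc = float(np.mean(((X @ w) > 0) == (y > 0.5)))
print(acc)
```

    The same map and reduce functions could be handed to an actual MapReduce runtime; the per-shard gradient computation is embarrassingly parallel, which is the source of the speed-up the paper targets.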

  17. Development of Parallel Learning Strategies Curricula Using Videodisc and Standard Off-Line Formats.

    Science.gov (United States)

    1985-12-01

    Development of Parallel Learning Strategies Curricula Using Videodisc and... (U) Human Resources Research Organization, Alexandria VA. ...research and development to produce and evaluate applications of an advanced multimedia, computer-based technology for basic skills education, which included... evaluations; the second task was the preparation of printed, off-line materials for all of the lessons. The last task was the production of two videotapes.

  18. The Growth of m-Learning and the Growth of Mobile Computing: Parallel developments

    Directory of Open Access Journals (Sweden)

    Jason G. Caudill

    2007-06-01

    Full Text Available m-Learning is made possible by the existence and application of mobile hardware and networking technology. By exploring the capabilities of these technologies, it is possible to construct a picture of how different components of m-Learning can be implemented. This paper will explore the major technologies currently in use: portable digital assistants (PDAs), Short Message Service (SMS) messaging via mobile phone, and podcasts via MP3 players.

  19. The Role of Emotion, Vision and Touch in Movement Learning Neuroplasticity and the Mirror Neuron System

    Directory of Open Access Journals (Sweden)

    Jeanne Masterson

    2015-09-01

    Full Text Available The brain's ability to restructure, linked with positive emotional states and techniques that enhance the activation of the mirror neuron system, allows for increased motor learning. This study examines the effects of enhancing goal-oriented movement and examines various techniques used to strengthen neural networks during such movement practices. Goal-oriented movement practices are commonly used when engaging in Pilates. Participants will be divided between three groups to help identify potential differences between individuals who receive sensory cueing and those who do not. It is hypothesized that there will be a difference between participants who receive emotional priming and the visual enhancement of touch versus those who are not primed. Specifically, it is predicted that primed participants will demonstrate enhanced skill acquisition during Pilates as opposed to participants assigned to either the regular Pilates condition or the control condition. Such studies are important in that they can potentially be applied in many motor learning areas, such as physical therapy, sports, general physical activity for health, and possibly even psychological interventions. Results of the current study supported the main hypothesis in that increased physical task skill was obtained in the group receiving positive emotional cueing and visual enhancement of touch. Likewise, there was a statistically significant increase in physical self-efficacy scores in the group receiving these independent variables over that of the group receiving regular Pilates only. The results of the current study show that a physical task environment such as that in a Pilates class can be improved by adding techniques that enhance sensory input to the regions of the brain that recognize and process motor activity, and that with an increase in physical skill comes a heightened physical self-efficacy. In theory this was due to an

  20. A Re-configurable On-line Learning Spiking Neuromorphic Processor comprising 256 neurons and 128K synapses

    Directory of Open Access Journals (Sweden)

    Ning Qiao

    2015-04-01

    Full Text Available Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks with short-term and long-term plasticity. The device comprises 128K analog synapse and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm², and consumes approximately 4 mW for typical experiments, for example involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.

  1. Early-Age Running Enhances Activity of Adult-Born Dentate Granule Neurons Following Learning in Rats.

    Science.gov (United States)

    Shevtsova, Olga; Tan, Yao-Fang; Merkley, Christina M; Winocur, Gordon; Wojtowicz, J Martin

    2017-01-01

    Cognitive reserve, the brain's capacity to draw on enriching experiences during youth, is believed to protect against memory loss associated with a decline in hippocampal function, as seen in normal aging and neurodegenerative disease. Adult neurogenesis has been suggested as a specific mechanism involved in cognitive (or neurogenic) reserve. The first objective of this study was to compare learning-related neuronal activity in adult-born versus developmentally born hippocampal neurons in juvenile male rats that had engaged in extensive running activity during early development or reared in a standard laboratory environment. The second objective was to investigate the long-term effect of exercise in rats on learning and memory of a contextual fear (CF) response later in adulthood. These aims address the important question as to whether exercise in early life is sufficient to build a reserve that protects against the process of cognitive aging. The results reveal a long-term effect of early running on adult-born dentate granule neurons and a special role for adult-born neurons in contextual memory, in a manner that is consistent with the neurogenic reserve hypothesis.

  2. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  3. Statistical learning of serial visual transitions by neurons in monkey inferotemporal cortex.

    Science.gov (United States)

    Meyer, Travis; Ramachandran, Suchitra; Olson, Carl R

    2014-07-09

    If monkeys repeatedly, over the course of weeks, view displays in which two images appear in fixed sequence, then neurons of inferotemporal cortex (ITC) come to exhibit prediction suppression. The response to the trailing image is weaker if it follows the leading image with which it was paired during training than if it follows some other leading image. Prediction suppression is a plausible neural mechanism for statistical learning of visual transitions such as has been demonstrated in behavioral studies of human infants and adults. However, in the human studies, subjects are exposed to continuous sequences in which the same image can be both predicted and predicting and statistical dependency can exist between nonadjacent items. The aim of the present study was to investigate whether prediction suppression in ITC develops under such circumstances. To resolve this issue, we exposed monkeys repeatedly to triplets of images presented in fixed order. The results indicate that prediction suppression can be induced by training not only with pairs of images but also with longer sequences.

  4. Top-down inputs enhance orientation selectivity in neurons of the primary visual cortex during perceptual learning.

    Directory of Open Access Journals (Sweden)

    Samat Moldakarimov

    2014-08-01

    Full Text Available Perceptual learning has been used to probe the mechanisms of cortical plasticity in the adult brain. Feedback projections are ubiquitous in the cortex, but little is known about their role in cortical plasticity. Here we explore the hypothesis that learning visual orientation discrimination involves learning-dependent plasticity of top-down feedback inputs from higher cortical areas, serving a different function from plasticity due to changes in recurrent connections within a cortical area. In a Hodgkin-Huxley-based spiking neural network model of visual cortex, we show that modulation of feedback inputs to V1 from higher cortical areas results in shunting inhibition in V1 neurons, which changes the response properties of V1 neurons. The orientation selectivity of V1 neurons is enhanced without changing orientation preference, preserving the topographic organization of V1. These results provide new insights into the mechanisms of plasticity in the adult brain, reconciling apparently inconsistent experiments and providing a new hypothesis for a functional role of the feedback connections.

  5. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    Science.gov (United States)

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a

  6. Both increases in immature dentate neuron number and decreases of immobility time in the forced swim test occurred in parallel after environmental enrichment of mice.

    Science.gov (United States)

    Llorens-Martín, M V; Rueda, N; Martínez-Cué, C; Torres-Alemán, I; Flórez, J; Trejo, J L

    2007-07-13

    A direct relation between the rate of adult hippocampal neurogenesis in mice and the immobility time in a forced swim test after living in an enriched environment has been suggested previously. In the present work, young adult mice living in an enriched environment for 2 months developed considerably more immature differentiating neurons (doublecortin-positive, DCX(+)) than control, non-enriched animals. Furthermore, we found that the more DCX(+) cells they possessed, the lower the immobility time they scored in the forced swim test. This DCX(+) subpopulation is composed of mostly differentiating dentate neurons independently of the birthdates of every individual cell. However, variations found in this subpopulation were not the result of a general effect on the survival of any newborn neuron in the granule cell layer, as 5-bromo-2-deoxyuridine (BrdU)-labeled cells born during a narrow time window included in the longer lifetime period of DCX(+) cells, were not significantly modified after enrichment. In contrast, the survival of the mature population of neurons in the granule cell layer of the dentate gyrus in enriched animals increased, although this did not influence their performance in the Porsolt test, nor did it influence the dentate gyrus volume or granule neuronal nuclei size. These results indicate that the population of immature, differentiating neurons in the adult hippocampus is one factor directly related to the protective effect of an enriched environment against a highly stressful event.

  7. Learning-Induced Gene Expression in the Hippocampus Reveals a Role of Neuron -Astrocyte Metabolic Coupling in Long Term Memory.

    Directory of Open Access Journals (Sweden)

    Monika Tadi

    Full Text Available We examined the expression of genes related to brain energy metabolism and particularly those encoding glia (astrocyte)-specific functions in the dorsal hippocampus subsequent to learning. Context-dependent avoidance behavior was tested in mice using the step-through Inhibitory Avoidance (IA) paradigm. Animals were sacrificed 3, 9, 24, or 72 hours after training or 3 hours after retention testing. The quantitative determination of mRNA levels revealed learning-induced changes in the expression of genes thought to be involved in astrocyte-neuron metabolic coupling in a time dependent manner. Twenty four hours following IA training, an enhanced gene expression was seen, particularly for genes encoding monocarboxylate transporters 1 and 4 (MCT1, MCT4), alpha2 subunit of the Na/K-ATPase and glucose transporter type 1. To assess the functional role for one of these genes in learning, we studied MCT1 deficient mice and found that they exhibit impaired memory in the inhibitory avoidance task. Together, these observations indicate that neuron-glia metabolic coupling undergoes metabolic adaptations following learning as indicated by the change in expression of key metabolic genes.

  8. Learning-Induced Gene Expression in the Hippocampus Reveals a Role of Neuron -Astrocyte Metabolic Coupling in Long Term Memory

    KAUST Repository

    Tadi, Monika

    2015-10-29

    We examined the expression of genes related to brain energy metabolism and particularly those encoding glia (astrocyte)-specific functions in the dorsal hippocampus subsequent to learning. Context-dependent avoidance behavior was tested in mice using the step-through Inhibitory Avoidance (IA) paradigm. Animals were sacrificed 3, 9, 24, or 72 hours after training or 3 hours after retention testing. The quantitative determination of mRNA levels revealed learning-induced changes in the expression of genes thought to be involved in astrocyte-neuron metabolic coupling in a time dependent manner. Twenty four hours following IA training, an enhanced gene expression was seen, particularly for genes encoding monocarboxylate transporters 1 and 4 (MCT1, MCT4), alpha2 subunit of the Na/K-ATPase and glucose transporter type 1. To assess the functional role for one of these genes in learning, we studied MCT1 deficient mice and found that they exhibit impaired memory in the inhibitory avoidance task. Together, these observations indicate that neuron-glia metabolic coupling undergoes metabolic adaptations following learning as indicated by the change in expression of key metabolic genes.

  9. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
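
    The tables described above rest on the parallel-resistance formula 1/R_total = 1/R1 + 1/R2. A short script (the resistor range of 1-60 ohms is an arbitrary choice for illustration) can enumerate pairs whose combined resistance is a whole number, using exact rational arithmetic so no entry is lost to rounding:

```python
from fractions import Fraction

def parallel(*rs):
    """Total resistance of resistors in parallel: 1/R_total = sum of 1/R_i."""
    return 1 / sum(Fraction(1, r) for r in rs)

# Pairs up to 60 ohms whose parallel combination is a whole number of ohms,
# e.g. 3 ohms in parallel with 6 ohms gives exactly 2 ohms.
whole = [(a, b, parallel(a, b)) for a in range(1, 61) for b in range(a, 61)
         if parallel(a, b).denominator == 1]
print(whole[:5])
```

    Using Fraction rather than floating point mirrors how such classroom tables are built: only combinations that are exactly whole numbers qualify.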

  10. Deletion of glycine transporter 1 (GlyT1) in forebrain neurons facilitates reversal learning: enhanced cognitive adaptability?

    Science.gov (United States)

    Singer, Philipp; Boison, Detlev; Möhler, Hanns; Feldon, Joram; Yee, Benjamin K

    2009-10-01

    Local availability of glycine near N-methyl-D-aspartate receptors (NMDARs) is partly regulated by neuronal glycine transporter 1 (GlyT1), which can therefore modulate NMDAR function because binding to the glycine site of the NMDAR is necessary for channel activation. Disrupting GlyT1 in forebrain neurons has been shown to enhance Pavlovian conditioning and object recognition memory. Here, the authors report that the same genetic manipulation facilitated reversal learning in the water maze test of reference memory, but did not lead to any clear improvement in a working memory version of the water maze test. Facilitation in a nonspatial discrimination reversal task conducted on a T maze was also observed, supporting the conclusion that forebrain neuronal GlyT1 may modulate the flexibility in (new) learning and relevant mnemonic functions. One possibility is that these phenotypes may reflect reduced susceptibility to certain forms of proactive interference. This may be relevant for the suggested clinical application of GlyT1 inhibitors in the treatment of cognitive deficits, including schizophrenia, which is characterized by cognitive inflexibility in addition to the positive symptoms of the disease.

  11. Differential contributions of microglial and neuronal IKKβ to synaptic plasticity and associative learning in alert behaving mice.

    Science.gov (United States)

    Kyrargyri, Vasiliki; Vega-Flores, Germán; Gruart, Agnès; Delgado-García, José M; Probert, Lesley

    2015-04-01

    Microglia are CNS resident immune cells and a rich source of neuroactive mediators, but their contribution to physiological brain processes such as synaptic plasticity, learning, and memory is not fully understood. In this study, we used mice with partial depletion of IκB kinase β, the main activating kinase in the inducible NF-κB pathway, selectively in myeloid lineage cells (mIKKβKO) or excitatory neurons (nIKKβKO) to measure synaptic strength at hippocampal Schaffer collaterals during long-term potentiation (LTP) and instrumental conditioning in alert behaving individuals. Resting microglial cells in mIKKβKO mice showed less Iba1-immunoreactivity, and brain IL-1β mRNA levels were selectively reduced compared with controls. Measurement of field excitatory postsynaptic potentials (fEPSPs) evoked by stimulation of the CA3-CA1 synapse in mIKKβKO mice showed higher facilitation in response to paired pulses and enhanced LTP following high frequency stimulation. In contrast, nIKKβKO mice showed normal basic synaptic transmission and LTP induction but impairments in late LTP. To understand the consequences of such impairments in synaptic plasticity for learning and memory, we measured CA1 fEPSPs in behaving mice during instrumental conditioning. IKKβ was not necessary in either microglia or neurons for mice to learn lever-pressing (appetitive behavior) to obtain food (consummatory behavior) but was required in both for modification of their hippocampus-dependent appetitive, not consummatory behavior. Our results show that microglia, through IKKβ and therefore NF-κB activity, regulate hippocampal synaptic plasticity and that both microglia and neurons, through IKKβ, are necessary for animals to modify hippocampus-driven behavior during associative learning. © 2014 Wiley Periodicals, Inc.

  12. Effect of Cistanche Desertica Polysaccharides on Learning and Memory Functions and Ultrastructure of Cerebral Neurons in Experimental Aging Mice

    Institute of Scientific and Technical Information of China (English)

    孙云; 邓杨梅; 王德俊; 沈春锋; 刘晓梅; 张洪泉

    2001-01-01

    Objective: To observe the effects of Cistanche desertica polysaccharides (CDP) on the learning and memory functions and cerebral ultrastructure in experimental aging mice. Methods: CDP was administered intragastrically at 50 or 100 mg/kg per day for 64 successive days to experimental aging model mice induced by D-galactose; the learning and memory functions of the mice were then estimated by the step-down test and Y-maze test, organelles of brain tissue and cerebral ultrastructure were observed by transmission electron microscopy, and physical strength was determined by a swimming test. Results: CDP obviously enhanced the learning and memory functions (P<0.01), prolonged the swimming time (P<0.05), decreased the amount of lipofuscin and slowed the degeneration of mitochondria in neurons (P<0.05), and improved the degeneration of cerebral ultrastructure in aging mice. Conclusion: CDP could improve the impaired physiological function and alleviate cerebral morphological change in experimental aging mice.

  13. Parallel reinforcement learning with eligibility traces

    Institute of Scientific and Technical Information of China (English)

    杨旭东; 刘全; 李瑾

    2012-01-01

    Reinforcement learning is an important machine learning method. However, slow convergence has been one of its main challenges. To improve the efficiency of existing reinforcement learning algorithms, a parallel reinforcement learning algorithm framework with eligibility traces is proposed. To take advantage of the inherent parallelism found in reinforcement learning algorithms with eligibility traces, multiple computing nodes are used together to take charge of the value function table and the eligibility trace table. Some optimizations of the algorithm framework are given. The experimental results show that the proposed method has certain advantages compared to two other existing parallel reinforcement learning methods.
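
    The serial core that such a framework parallelizes, a value-function table updated through an eligibility-trace table, can be sketched as tabular TD(lambda) on a small random walk. The environment and constants here are a standard textbook testbed chosen for illustration; the paper's partitioning of the two tables across computing nodes is not shown:

```python
import numpy as np

# Five-state random walk with terminals just beyond each end; reward 1
# only for exiting on the right. True state values are (i+1)/6.
n_states, alpha, gamma, lam = 5, 0.05, 1.0, 0.8
V = np.zeros(n_states)            # value function table
rng = np.random.default_rng(3)

for _ in range(2000):
    e = np.zeros(n_states)        # eligibility trace table, reset each episode
    s = n_states // 2             # start in the middle of the walk
    while True:
        s2 = s + (1 if rng.random() < 0.5 else -1)
        terminal = s2 in (-1, n_states)
        r = 1.0 if s2 == n_states else 0.0
        delta = r + (0.0 if terminal else gamma * V[s2]) - V[s]
        e[s] += 1.0               # accumulating trace for the visited state
        V += alpha * delta * e    # every eligible state shares the TD error
        e *= gamma * lam          # traces decay at each step
        if terminal:
            break
        s = s2

print(np.round(V, 2))             # approaches [0.17, 0.33, 0.5, 0.67, 0.83]
```

    Because one TD error updates every eligible entry of V at once, the value-table and trace-table updates are natural units of work to split across nodes, which is the parallelism the proposed framework exploits.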

  14. Rivastigmine lowers Aβ and increases sAPPα levels, which parallel elevated synaptic markers and metabolic activity in degenerating primary rat neurons.

    Directory of Open Access Journals (Sweden)

    Jason A Bailey

    Overproduction of amyloid-β (Aβ) protein in the brain has been hypothesized as the primary toxic insult that, via numerous mechanisms, produces cognitive deficits in Alzheimer's disease (AD). Cholinesterase inhibition is a primary strategy for treatment of AD, and specific compounds of this class have previously been demonstrated to influence Aβ precursor protein (APP) processing and Aβ production. However, little information is available on the effects of rivastigmine, a dual acetylcholinesterase and butyrylcholinesterase inhibitor, on APP processing. As this drug is currently used to treat AD, characterization of its various activities is important to optimize its clinical utility. We have previously shown that rivastigmine can preserve or enhance neuronal and synaptic terminal markers in degenerating primary embryonic cerebrocortical cultures. Given previous reports on the effects of APP and Aβ on synapses, regulation of APP processing represents a plausible mechanism for the synaptic effects of rivastigmine. To test this hypothesis, we treated degenerating primary cultures with rivastigmine and measured secreted APP (sAPP) and Aβ. Rivastigmine treatment increased metabolic activity in these cultured cells and elevated APP secretion. Analysis of the two major forms of APP secreted by these cultures, attributed to neurons or glia based on molecular weight, showed that rivastigmine treatment significantly increased neuronal relative to glial secreted APP. Furthermore, rivastigmine treatment increased α-secretase-cleaved sAPPα and decreased Aβ secretion, suggesting a therapeutic mechanism wherein rivastigmine alters the relative activities of the secretase pathways. Assessment of sAPP levels in rodent CSF following once-daily rivastigmine administration for 21 days confirmed that the elevated APP levels observed in cell culture translated in vivo. Taken together, rivastigmine treatment enhances neuronal sAPP and shifts APP processing toward the

  15. A bi-hemispheric neuronal network model of the cerebellum with spontaneous climbing fiber firing produces asymmetrical motor learning during robot control

    Science.gov (United States)

    Pinzon-Morales, Ruben-Dario; Hirata, Yutaka

    2014-01-01

    To acquire and maintain precise movement controls over a lifespan, changes in the physical and physiological characteristics of muscles must be compensated for adaptively. The cerebellum plays a crucial role in such adaptation. Changes in muscle characteristics are not always symmetrical. For example, it is unlikely that muscles that bend and straighten a joint will change to the same degree. Thus, different (i.e., asymmetrical) adaptation is required for bending and straightening motions. To date, little is known about the role of the cerebellum in asymmetrical adaptation. Here, we investigate the cerebellar mechanisms required for asymmetrical adaptation using a bi-hemispheric cerebellar neuronal network model (biCNN). The bi-hemispheric structure is inspired by the observation that lesioning one hemisphere reduces motor performance asymmetrically. The biCNN model was constructed to run in real-time and used to control an unstable two-wheeled balancing robot. The load of the robot and its environment were modified to create asymmetrical perturbations. Plasticity at parallel fiber-Purkinje cell synapses in the biCNN model was driven by error signal in the climbing fiber (cf) input. This cf input was configured to increase and decrease its firing rate from its spontaneous firing rate (approximately 1 Hz) with sensory errors in the preferred and non-preferred direction of each hemisphere, as demonstrated in the monkey cerebellum. Our results showed that asymmetrical conditions were successfully handled by the biCNN model, in contrast to a single hemisphere model or a classical non-adaptive proportional and derivative controller. Further, the spontaneous activity of the cf, while relatively small, was critical for balancing the contribution of each cerebellar hemisphere to the overall motor command sent to the robot. Eliminating the spontaneous activity compromised the asymmetrical learning capabilities of the biCNN model. Thus, we conclude that a bi
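The cf-driven plasticity described above — depression (LTD) at active parallel fiber-Purkinje cell synapses when climbing-fiber firing rises above its ~1 Hz spontaneous rate, and potentiation (LTP) when it falls below — can be sketched as a signed weight update around the spontaneous rate. This is a hedged illustration of the principle, not the biCNN implementation; the function name, learning rate, and vector sizes are assumptions.

```python
import numpy as np

CF_SPONT = 1.0  # spontaneous climbing-fiber rate, ~1 Hz as in the abstract
LR = 0.05       # illustrative learning rate

def update_pf_pc(weights, pf_activity, cf_rate):
    """LTD when cf_rate exceeds the spontaneous rate, LTP when below it,
    applied only at currently active parallel-fiber synapses."""
    return weights - LR * (cf_rate - CF_SPONT) * pf_activity

w = np.ones(4)
pf = np.array([1.0, 0.5, 0.0, 1.0])       # which parallel fibers are active

w_ltd = update_pf_pc(w, pf, cf_rate=3.0)  # error in the preferred direction
w_ltp = update_pf_pc(w, pf, cf_rate=0.0)  # error in the non-preferred direction

print(w_ltd)  # active synapses depressed below their initial value
print(w_ltp)  # active synapses potentiated above their initial value
```

The key property is that a single climbing-fiber signal can move weights in both directions, which is why suppressing its spontaneous activity (pinning the rate at zero) removes the LTP half of the rule and breaks asymmetrical learning.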

  16. An Energy-Efficient and Scalable Deep Learning/Inference Processor With Tetra-Parallel MIMD Architecture for Big Data Applications.

    Science.gov (United States)

    Park, Seong-Wook; Park, Junyoung; Bong, Kyeongryeol; Shin, Dongjoo; Lee, Jinmook; Choi, Sungpill; Yoo, Hoi-Jun

    2015-12-01

    Deep learning algorithms are widely used for pattern recognition applications such as text recognition, object recognition and action recognition because of their best-in-class recognition accuracy compared to hand-crafted and shallow-learning-based algorithms. However, the long learning time caused by their complex structure has so far limited their use to high-cost servers or many-core GPU platforms. On the other hand, the demand for customized pattern recognition on personal devices will grow as more deep learning applications are developed. This paper presents an SoC implementation that enables deep learning applications to run on low-cost platforms such as mobile or portable devices. Different from conventional works, which have adopted massively parallel architectures, this work adopts a task-flexible architecture and exploits multiple forms of parallelism to cover the complex functions of the convolutional deep belief network, a popular deep learning/inference algorithm. In this paper, we implement the most energy-efficient deep learning and inference processor for wearable systems. The implemented 2.5 mm × 4.0 mm deep learning/inference processor is fabricated using 65 nm 8-metal CMOS technology for a battery-powered platform with real-time deep inference and deep learning operation. It consumes 185 mW average power, and 213.1 mW peak power, at 200 MHz operating frequency and 1.2 V supply voltage. It achieves 411.3 GOPS peak performance and 1.93 TOPS/W energy efficiency, which is 2.07× higher than the state-of-the-art.
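The quoted efficiency figure can be sanity-checked from the other numbers in the abstract, assuming it is computed as peak performance divided by peak power:

```python
# Reported figures from the abstract: 411.3 GOPS peak performance at
# 213.1 mW peak power should reproduce the quoted 1.93 TOPS/W.
peak_gops = 411.3          # giga-operations per second
peak_power_w = 0.2131      # 213.1 mW expressed in watts

tops_per_watt = (peak_gops / 1000.0) / peak_power_w
print(round(tops_per_watt, 2))  # → 1.93
```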

  17. Neuronal Plasticity in the Mammalian Brain: Relevance to Behavioral Learning and Memory

    Science.gov (United States)

    Teyler, Timothy J.; Fountain, Stephen B.

    1987-01-01

    Data suggesting that different brain circuits may underlie different forms of learning and memory are reviewed. Several current theories of learning and memory with respect to hippocampal and other brain circuit involvement are considered. (PCB)

  18. Changes of learning and memory ability associated with neuronal nitric oxide synthase in brain tissues of rats with acute alcoholism

    Institute of Scientific and Technical Information of China (English)

    Shuang Li; Chunyang Xu; Dongliang Li; Xinjuan Li; Linyu Wei; Yuan Cheng

    2006-01-01

    BACKGROUND: Ethanol can influence neural development and the ability of learning and memory, but the mechanism of its neurotoxicity remains unclear. Endogenous nitric oxide (NO), a gaseous messenger, is known to play an important role in the formation of synaptic plasticity, the transfer of neuronal information and neural development, but excessive nitric oxide can result in neurotoxicity. OBJECTIVE: To observe the effects of acute alcoholism on the learning and memory ability and the content of neuronal nitric oxide synthase (nNOS) in brain tissue of rats. DESIGN: A randomized controlled animal experiment. SETTING: Department of Physiology, Xinxiang Medical College. MATERIALS: Eighteen male clean-grade SD rats aged 18-22 weeks were acclimatized for 2 days, and then randomly divided into a control group (n = 8) and an experimental group (n = 10). The nNOS immunohistochemical reagent was provided by Beijing Zhongshan Golden Bridge Biotechnology Co., Ltd. The Y-maze was produced by Suixi Zhenghua Apparatus Plant. METHODS: The experiment was carried out in the laboratory of the Department of Physiology, Xinxiang Medical College, from June to October 2005. ① Rats in the experimental group were intraperitoneally injected with ethanol (2.5 g/kg) dissolved in normal saline (20%). Loss of the righting reflex and ataxia within 5 minutes indicated a successful model. Rats in the control group were given saline of the same volume. ② Examination of learning and memory ability: Y-maze tests of learning and memory were performed 6 hours after model establishment. The rats were put into the Y-maze separately. The test was performed in a quiet and dark room. There was a lamp at the end of each of the three pathways of the Y-maze, and the base of the maze had an electrified grid. All the lamps of the three pathways were turned on for 3 minutes and then turned off. One lamp was then turned on at random, and the other two were delayed automatically. In 5 seconds

  19. "Celebration of the Neurons": The Application of Brain Based Learning in Classroom Environment

    Science.gov (United States)

    Duman, Bilal

    2007-01-01

    The purpose of this study is to investigate approaches and techniques related to how brain-based learning is used in the classroom environment. This general purpose was addressed through the following questions: (1) What is the aim of brain-based learning? (2) What are the general approaches and techniques that brain-based learning uses? and (3) How should be used…

  20. Teaching and learning the Hodgkin-Huxley model based on software developed in NEURON's programming language hoc.

    Science.gov (United States)

    Hernández, Oscar E; Zurek, Eduardo E

    2013-05-15

    We present a software tool called SENB, which allows the geometric and biophysical neuronal properties in a simple computational model of a Hodgkin-Huxley (HH) axon to be changed. The aim of this work is to develop a didactic and easy-to-use computational tool in the NEURON simulation environment, which allows graphical visualization of both the passive and active conduction parameters and the geometric characteristics of a cylindrical axon with HH properties. The SENB software offers several advantages for teaching and learning electrophysiology. First, SENB offers ease and flexibility in determining the number of stimuli. Second, SENB allows immediate and simultaneous visualization, in the same window and time frame, of the evolution of the electrophysiological variables. Third, SENB calculates parameters such as time and space constants, stimuli frequency, cellular area and volume, sodium and potassium equilibrium potentials, and propagation velocity of the action potentials. Furthermore, it allows the user to see all this information immediately in the main window. Finally, with just one click SENB can save an image of the main window as evidence. The SENB software is didactic and versatile, and can be used to improve and facilitate the teaching and learning of the underlying mechanisms in the electrical activity of an axon using the biophysical properties of the squid giant axon.
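For readers without NEURON installed, the Hodgkin-Huxley model that SENB visualizes can be reproduced in a few lines. The sketch below uses the standard squid-axon rate functions and parameter values with forward-Euler integration; it is a didactic approximation of a single membrane patch, not part of the SENB software.

```python
import numpy as np

C_M = 1.0                               # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # maximal conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

def rates(v):
    """Classic Hodgkin-Huxley alpha/beta rate functions (v in mV)."""
    an = 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
    bn = 0.125 * np.exp(-(v + 65) / 80)
    am = 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
    bm = 4.0 * np.exp(-(v + 65) / 18)
    ah = 0.07 * np.exp(-(v + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(v + 35) / 10))
    return an, bn, am, bm, ah, bh

def simulate(t_max=50.0, dt=0.01, i_amp=10.0):
    """Forward-Euler integration; constant current step switched on at t = 5 ms."""
    v, n, m, h = -65.0, 0.317, 0.053, 0.596  # approximate resting-state values
    trace = []
    for step in range(int(t_max / dt)):
        t = step * dt
        an, bn, am, bm, ah, bh = rates(v)
        i_na = G_NA * m**3 * h * (v - E_NA)  # sodium current
        i_k = G_K * n**4 * (v - E_K)         # potassium current
        i_l = G_L * (v - E_L)                # leak current
        i_inj = i_amp if t > 5.0 else 0.0
        v += dt * (i_inj - i_na - i_k - i_l) / C_M
        n += dt * (an * (1 - n) - bn * n)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        trace.append(v)
    return np.array(trace)

v_trace = simulate()
print(v_trace.max() > 0)  # the current step should evoke action potentials
```

Plotting `v_trace` against time reproduces the familiar spike train; varying `i_amp` or the conductances mirrors the parameter explorations SENB offers through its graphical interface.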

  1. Noradrenergic control of gene expression and long-term neuronal adaptation evoked by learned vocalizations in songbirds.

    Directory of Open Access Journals (Sweden)

    Tarciso A F Velho

    Norepinephrine (NE) is thought to play important roles in the consolidation and retrieval of long-term memories, but its role in the processing and memorization of complex acoustic signals used for vocal communication has yet to be determined. We have used a combination of gene expression analysis, electrophysiological recordings and pharmacological manipulations in zebra finches to examine the role of noradrenergic transmission in the brain's response to birdsong, a learned vocal behavior that shares important features with human speech. We show that noradrenergic transmission is required for both the expression of activity-dependent genes and the long-term maintenance of stimulus-specific electrophysiological adaptation that are induced in central auditory neurons by stimulation with birdsong. Specifically, we show that the caudomedial nidopallium (NCM), an area directly involved in the auditory processing and memorization of birdsong, receives strong noradrenergic innervation. Song-responsive neurons in this area express α-adrenergic receptors and are in close proximity to noradrenergic terminals. We further show that local α-adrenergic antagonism interferes with song-induced gene expression, without affecting spontaneous or evoked electrophysiological activity, thus dissociating the molecular and electrophysiological responses to song. Moreover, α-adrenergic antagonism disrupts the maintenance but not the acquisition of the adapted physiological state. We suggest that the noradrenergic system regulates long-term changes in song-responsive neurons by modulating the gene expression response that is associated with the electrophysiological activation triggered by song. We also suggest that this mechanism may be an important contributor to long-term auditory memories of learned vocalizations.

  2. Hyperlipidemic diet causes loss of olfactory sensory neurons, reduces olfactory discrimination, and disrupts odor-reversal learning.

    Science.gov (United States)

    Thiebaud, Nicolas; Johnson, Melissa C; Butler, Jessica L; Bell, Genevieve A; Ferguson, Kassandra L; Fadool, Andrew R; Fadool, James C; Gale, Alana M; Gale, David S; Fadool, Debra A

    2014-05-14

    Currently, 65% of Americans are overweight, which leads to well-documented cardiovascular and cognitive declines. Little, however, is known concerning obesity's impact on sensory systems. Because olfaction is linked with ingestive behavior to guide food choice, its potential dysfunction during obesity could evoke a positive feedback loop to perpetuate poor ingestive behaviors. To determine the effect of chronic energy imbalance and reveal any structural or functional changes associated with obesity, we induced long-term, diet-induced obesity by challenging mice to high-fat diets: (1) in an obesity-prone (C57BL/6J) and obesity-resistant (Kv1.3(-/-)) line of mice, and compared this with (2) late-onset, genetic-induced obesity in MC4R(-/-) mice in which diabetes secondarily precipitates after disruption of the hypothalamic axis. We report marked loss of olfactory sensory neurons and their axonal projections after exposure to a fatty diet, with a concomitant reduction in electro-olfactogram amplitude. Loss of olfactory neurons and associated circuitry is linked to changes in neuronal proliferation and normal apoptotic cycles. Using a computer-controlled, liquid-based olfactometer, mice maintained on fatty diets learn reward-reinforced behaviors more slowly, have deficits in reversal learning demonstrating behavioral inflexibility, and exhibit reduced olfactory discrimination. When obese mice are removed from their high-fat diet to regain normal body weight and fasting glucose, olfactory dysfunctions are retained. We conclude that chronic energy imbalance therefore presents long-lasting structural and functional changes in the operation of the sensory system designed to encode external and internal chemical information and leads to altered olfactory- and reward-driven behaviors.

  3. The Stressed Female Brain: Neuronal activity in the prelimbic but not infralimbic region of the medial prefrontal cortex suppresses learning after acute stress

    Directory of Open Access Journals (Sweden)

    Lisa Y. Maeng

    2013-12-01

    Women are nearly twice as likely as men to suffer from anxiety and post-traumatic stress disorder (PTSD), indicating that many females are especially vulnerable to stressful life experience. A profound sex difference in the response to stress is also observed in laboratory animals. Acute exposure to an uncontrollable stressful event disrupts associative learning during classical eyeblink conditioning in female rats but enhances this same type of learning process in males. These sex differences in response to stress are dependent on neuronal activity in similar but also different brain regions. Neuronal activity in the basolateral nucleus of the amygdala (BLA) is necessary in both males and females. However, neuronal activity in the medial prefrontal cortex (mPFC) during the stressor is necessary to modify learning in females but not in males. The mPFC is often divided into its prelimbic (PL) and infralimbic (IL) subregions, which differ both in structure and function. Through its connections to the BLA, we hypothesized that neuronal activity within the PL, but not IL, during the stressor is necessary to suppress learning in females. To test this hypothesis, either the PL or IL of adult female rats was bilaterally inactivated with the GABAA agonist muscimol during acute inescapable swim stress. 24 h later, all subjects were trained with classical eyeblink conditioning. Though stressed, females without neuronal activity in the PL learned well. In contrast, females with IL inactivation during the stressor did not learn well, behaving similarly to stressed vehicle-treated females. These data suggest that exposure to a stressful event critically engages the PL, but not IL, to disrupt associative learning in females. Together with previous studies, these data indicate that the PL communicates with the BLA to suppress learning after a stressful experience in females. This circuit may be similarly engaged in women who become cognitively impaired after stressful

  4. The stressed female brain: neuronal activity in the prelimbic but not infralimbic region of the medial prefrontal cortex suppresses learning after acute stress.

    Science.gov (United States)

    Maeng, Lisa Y; Shors, Tracey J

    2013-01-01

    Women are nearly twice as likely as men to suffer from anxiety and post-traumatic stress disorder (PTSD), indicating that many females are especially vulnerable to stressful life experience. A profound sex difference in the response to stress is also observed in laboratory animals. Acute exposure to an uncontrollable stressful event disrupts associative learning during classical eyeblink conditioning in female rats but enhances this same type of learning process in males. These sex differences in response to stress are dependent on neuronal activity in similar but also different brain regions. Neuronal activity in the basolateral nucleus of the amygdala (BLA) is necessary in both males and females. However, neuronal activity in the medial prefrontal cortex (mPFC) during the stressor is necessary to modify learning in females but not in males. The mPFC is often divided into its prelimbic (PL) and infralimbic (IL) subregions, which differ both in structure and function. Through its connections to the BLA, we hypothesized that neuronal activity within the PL, but not IL, during the stressor is necessary to suppress learning in females. To test this hypothesis, either the PL or IL of adult female rats was bilaterally inactivated with GABAA agonist muscimol during acute inescapable swim stress. About 24 h later, all subjects were trained with classical eyeblink conditioning. Though stressed, females without neuronal activity in the PL learned well. In contrast, females with IL inactivation during the stressor did not learn well, behaving similarly to stressed vehicle-treated females. These data suggest that exposure to a stressful event critically engages the PL, but not IL, to disrupt associative learning in females. Together with previous studies, these data indicate that the PL communicates with the BLA to suppress learning after a stressful experience in females. This circuit may be similarly engaged in women who become cognitively impaired after stressful life

  5. Roles of octopaminergic and dopaminergic neurons in mediating reward and punishment signals in insect visual learning.

    Science.gov (United States)

    Unoki, Sae; Matsumoto, Yukihisa; Mizunami, Makoto

    2006-10-01

    Insects, like vertebrates, have considerable ability to associate visual, olfactory or other sensory signals with reward or punishment. Previous studies in crickets, honey bees and fruit-flies have suggested that octopamine (OA, invertebrate counterpart of noradrenaline) and dopamine (DA) mediate various kinds of reward and punishment signals in olfactory learning. However, whether the roles of OA and DA in mediating positive and negative reinforcing signals can be generalized to learning of sensory signals other than odors remained unknown. Here we first established a visual learning paradigm in which to associate a visual pattern with water reward or saline punishment for crickets and found that memory after aversive conditioning decayed much faster than that after appetitive conditioning. Then, we pharmacologically studied the roles of OA and DA in appetitive and aversive forms of visual learning. Crickets injected with epinastine or mianserin, OA receptor antagonists, into the hemolymph exhibited a complete impairment of appetitive learning to associate a visual pattern with water reward, but aversive learning with saline punishment was unaffected. By contrast, fluphenazine, chlorpromazine or spiperone, DA receptor antagonists, completely impaired aversive learning without affecting appetitive learning. The results demonstrate that OA and DA participate in reward and punishment conditioning in visual learning. This finding, together with results of previous studies on the roles of OA and DA in olfactory learning, suggests ubiquitous roles of the octopaminergic reward system and dopaminergic punishment system in insect learning.

  6. Age-dependent loss of cholinergic neurons in learning and memory-related brain regions and impaired learning in SAMP8 mice with trigeminal nerve damage

    Institute of Scientific and Technical Information of China (English)

    Yifan He; Jihong Zhu; Fang Huang; Liu Qin; Wenguo Fan; Hongwen He

    2014-01-01

    The tooth belongs to the trigeminal sensory pathway. Dental damage has been associated with impairments in the central nervous system that may be mediated by injury to the trigeminal nerve. In the present study, we investigated the effects of damage to the inferior alveolar nerve, an important peripheral nerve in the trigeminal sensory pathway, on learning and memory behaviors and structural changes in related brain regions, in a mouse model of Alzheimer's disease. Inferior alveolar nerve transection or sham surgery was performed in middle-aged (4-month-old) or elderly (7-month-old) senescence-accelerated mouse prone 8 (SAMP8) mice. When the middle-aged mice reached 8 months (middle-aged group 1) or 11 months (middle-aged group 2), and the elderly group reached 11 months, step-down passive avoidance and Y-maze tests of learning and memory were performed, and the cholinergic system was examined in the hippocampus (Nissl staining and acetylcholinesterase histochemistry) and basal forebrain (choline acetyltransferase immunohistochemistry). In the elderly group, animals that underwent nerve transection had fewer pyramidal neurons in the hippocampal CA1 and CA3 regions, fewer cholinergic fibers in the CA1 and dentate gyrus, and fewer cholinergic neurons in the medial septal nucleus and vertical limb of the diagonal band, compared with sham-operated animals, as well as showing impairments in learning and memory. Conversely, no significant differences in histology or behavior were observed between middle-aged group 1 or group 2 transected mice and age-matched sham-operated mice. The present findings suggest that trigeminal nerve damage in old age, but not middle age, can induce degeneration of the septal-hippocampal cholinergic system and loss of hippocampal pyramidal neurons, and ultimately impair learning ability. Our results highlight the importance of active treatment of trigeminal nerve damage in elderly patients and those with Alzheimer's disease, and

  7. Learning-related neuronal activation in the zebra finch song system nucleus HVC in response to the bird's own song.

    Directory of Open Access Journals (Sweden)

    Johan J Bolhuis

    Like many other songbird species, male zebra finches learn their song from a tutor early in life. Song learning in birds has strong parallels with speech acquisition in human infants at both the behavioral and neural levels. Forebrain nuclei in the 'song system' are important for the sensorimotor acquisition and production of song, while caudomedial pallial brain regions outside the song system are thought to contain the neural substrate of tutor song memory. Here, we exposed three groups of adult zebra finch males to either tutor song, to their own song, or to novel conspecific song. Expression of the immediate early gene protein product Zenk was measured in the song system nuclei HVC, the robust nucleus of the arcopallium (RA) and Area X. There were no significant differences in overall Zenk expression between the three groups. However, Zenk expression in the HVC was significantly positively correlated with the strength of song learning only in the group that was exposed to the bird's own song, not in the other two groups. These results suggest that the song system nucleus HVC may contain a neural representation of a memory of the bird's own song. Such a representation may be formed during juvenile song learning and guide the bird's vocal output.

  8. Neural correlates of olfactory learning paradigms in an identified neuron in the honeybee brain.

    Science.gov (United States)

    Mauelshagen, J

    1993-02-01

    1. Sensitization and classical odor conditioning of the proboscis extension reflex were functionally analyzed by repeated intracellular recordings from a single identified neuron (the PE1-neuron) in the central bee brain. This neuron belongs to the class of "extrinsic cells" arising from the pedunculus of the mushroom bodies and has extensive arborizations in the median and lateral protocerebrum. The recordings were performed on isolated bee heads. 2. Two different series of physiological experiments were carried out using a temporal succession of stimuli similar to that of previous behavioral experiments. In the first series, one group of animals was used for a single conditioning trial [conditioned stimulus (CS), carnation; unconditioned stimulus (US), sucrose solution to the antennae and proboscis], a second group was used for sensitization (sensitizing stimulus, sucrose solution to the antennae and/or proboscis), and the third group served as control (no sucrose stimulation). In the second series, a differential conditioning paradigm (paired odor CS+, carnation; unpaired odor CS-, orange blossom) was applied to test the associative nature of the conditioning effect. 3. The PE1-neuron showed a characteristic burstlike odor response before the training procedures. The treatments resulted in different spike-frequency modulations of this response, which were specific for the nonassociative and associative stimulus paradigms applied. During differential conditioning, there are dynamic up and down modulations of spike frequencies and of the DC potentials underlying the responses to the CS+. Overall, only transient changes in the minute range were observed. 4. The results of the sensitization procedures suggest two qualitatively different US pathways. The comparison between sensitization and one-trial conditioning shows differential effects of nonassociative and associative stimulus paradigms on the response behavior of the PE1-neuron. The results of the differential

  9. The GABAergic Anterior Paired Lateral Neurons Facilitate Olfactory Reversal Learning in "Drosophila"

    Science.gov (United States)

    Wu, Yanying; Ren, Qingzhong; Li, Hao; Guo, Aike

    2012-01-01

    Reversal learning has been widely used to probe the implementation of cognitive flexibility in the brain. Previous studies in monkeys identified an essential role of the orbitofrontal cortex (OFC) in reversal learning. However, the underlying circuits and molecular mechanisms are poorly understood. Here, we use the T-maze to investigate the neural…

  10. Hebbian learning and predictive mirror neurons for actions, sensations and emotions

    NARCIS (Netherlands)

    Keysers, C.; Gazzola, Valeria

    2014-01-01

    Spike-timing-dependent plasticity is considered the neurophysiological basis of Hebbian learning and has been shown to be sensitive to both contingency and contiguity between pre- and postsynaptic activity. Here, we will examine how applying this Hebbian learning rule to a system of interconnected n
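The pair-based spike-timing-dependent plasticity rule the abstract builds on can be sketched as follows: a synapse potentiates when the presynaptic spike precedes the postsynaptic one (contiguity in the causal order) and depresses for the reverse order. The exponential window shape is the standard formulation; the amplitudes and time constant here are illustrative assumptions, not values from the paper.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012  # LTP/LTD amplitudes (illustrative)
TAU = 20.0                     # time constant of the STDP window, ms

def stdp_dw(dt_ms):
    """Weight change for a post-minus-pre spike-time difference dt_ms."""
    if dt_ms > 0:   # pre before post -> potentiation (LTP)
        return A_PLUS * np.exp(-dt_ms / TAU)
    else:           # post before pre -> depression (LTD)
        return -A_MINUS * np.exp(dt_ms / TAU)

print(stdp_dw(10.0) > 0)   # causal pairing strengthens the synapse
print(stdp_dw(-10.0) < 0)  # anti-causal pairing weakens it
```

The exponential decay with |dt| captures the contiguity sensitivity the abstract mentions: pairings far apart in time barely move the weight.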

  11. From Neurons to Brainpower: Cognitive Neuroscience and Brain-Based Learning

    Science.gov (United States)

    Phillips, Janet M.

    2005-01-01

    We have learned more about the brain in the past five years than the previous 100. Neuroimaging, lesion studies, and animal studies have revealed the intricate inner workings of the brain and learning. Synaptogenesis, pruning, sensitive periods, and plasticity have all become accepted concepts of cognitive neuroscience that are now being applied…

  12. Hebbian learning and predictive mirror neurons for actions, sensations and emotions

    NARCIS (Netherlands)

    Keysers, Christian; Gazzola, Valeria

    2014-01-01

    Spike-timing-dependent plasticity is considered the neurophysiological basis of Hebbian learning and has been shown to be sensitive to both contingency and contiguity between pre- and postsynaptic activity. Here, we will examine how applying this Hebbian learning rule to a system of interconnected

  13. The GABAergic Anterior Paired Lateral Neurons Facilitate Olfactory Reversal Learning in "Drosophila"

    Science.gov (United States)

    Wu, Yanying; Ren, Qingzhong; Li, Hao; Guo, Aike

    2012-01-01

    Reversal learning has been widely used to probe the implementation of cognitive flexibility in the brain. Previous studies in monkeys identified an essential role of the orbitofrontal cortex (OFC) in reversal learning. However, the underlying circuits and molecular mechanisms are poorly understood. Here, we use the T-maze to investigate the neural…

  14. A bi-hemispheric neuronal network model of the cerebellum with spontaneous climbing fiber firing produces asymmetrical motor learning during robot control

    Directory of Open Access Journals (Sweden)

    Ruben Dario Pinzon Morales

    2014-11-01

    Full Text Available To acquire and maintain precise movement controls over a lifespan, changes in the physical and physiological characteristics of muscles must be compensated for adaptively. The cerebellum plays a crucial role in such adaptation. Changes in muscle characteristics are not always symmetrical. For example, it is unlikely that muscles that bend and straighten a joint will change to the same degree. Thus, different (i.e., asymmetrical) adaptation is required for bending and straightening motions. To date, little is known about the role of the cerebellum in asymmetrical adaptation. Here, we investigate the cerebellar mechanisms required for asymmetrical adaptation using a bi-hemispheric cerebellar neuronal network model (biCNN). The bi-hemispheric structure is inspired by the observation that lesioning one hemisphere reduces motor performance asymmetrically. The biCNN model was constructed to run in real-time and used to control an unstable two-wheeled balancing robot. The load of the robot and its environment were modified to create asymmetrical perturbations. Plasticity at parallel fiber-Purkinje cell synapses in the biCNN model was driven by an error signal in the climbing fiber (cf) input. This cf input was configured to increase and decrease its firing rate from its spontaneous firing rate (approximately 1 Hz) with sensory errors in the preferred and non-preferred direction of each hemisphere, as demonstrated in the monkey cerebellum. Our results showed that asymmetrical conditions were successfully handled by the biCNN model, in contrast to a single-hemisphere model or a classical non-adaptive proportional-derivative controller. Further, the spontaneous activity of the cf, while relatively small, was critical for balancing the contribution of each cerebellar hemisphere to the overall motor command sent to the robot. Eliminating the spontaneous activity compromised the asymmetrical learning capabilities of the biCNN model. Thus, we conclude that a bi-hemispheric structure with spontaneous climbing fiber activity is essential for asymmetrical cerebellar adaptation.
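The climbing-fiber rule described above can be sketched in a few lines: the cf rate deviates from its spontaneous rate (about 1 Hz) with signed sensory error, and each hemisphere depresses or potentiates its parallel fiber synapses accordingly. All names and constants below are illustrative simplifications, not the biCNN model's actual equations.

```python
# Sketch of cf-gated plasticity at parallel fiber-Purkinje cell synapses.
SPONTANEOUS_CF_RATE = 1.0  # Hz

def cf_rate(error, preferred_sign):
    """Climbing fiber rate rises above the spontaneous rate for errors in
    the hemisphere's preferred direction and falls below it otherwise."""
    return max(0.0, SPONTANEOUS_CF_RATE + preferred_sign * error)

def update_weights(weights, pf_activity, error, preferred_sign, lr=0.05):
    """LTD when cf fires above its spontaneous rate, LTP when below."""
    rate = cf_rate(error, preferred_sign)
    delta = -lr * (rate - SPONTANEOUS_CF_RATE)
    return [max(0.0, w + delta * pf) for w, pf in zip(weights, pf_activity)]

# Two hemispheres with opposite preferred error directions share the load:
left = [0.5, 0.5]
right = [0.5, 0.5]
pf = [1.0, 0.2]
left = update_weights(left, pf, error=0.4, preferred_sign=+1)   # depressed
right = update_weights(right, pf, error=0.4, preferred_sign=-1)  # potentiated
```

Because the two hemispheres see the same error with opposite signs, one depresses while the other potentiates, which is how asymmetrical perturbations can be absorbed.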

  15. Aged neuronal nitric oxide knockout mice show preserved olfactory learning in both social recognition and odor-conditioning tasks

    Directory of Open Access Journals (Sweden)

    Bronwen M James

    2015-03-01

    Full Text Available There is evidence for both neurotoxic and neuroprotective roles of nitric oxide (NO) in the brain, and changes in the expression of the neuronal isoform of nitric oxide synthase (nNOS) gene occur during aging. The current studies have investigated potential support for either a neurotoxic or neuroprotective role of NO derived from nNOS in the context of aging by comparing olfactory learning and locomotor function in young versus old nNOS knockout (nNOS-/-) and wildtype control mice. Tasks involving social recognition and olfactory conditioning paradigms showed that old nNOS-/- animals had improved retention of learning compared to similarly aged wildtype controls. Young nNOS-/- animals showed superior reversal learning to wildtypes in a conditioned learning task, although their performance weakened with age. Interestingly, whereas young nNOS-/- animals were impaired in long-term memory for social odors compared to wildtype controls, in old animals this pattern was reversed, possibly indicating that beneficial compensatory changes influencing olfactory memory occur during aging in nNOS-/- animals. Such compensatory changes may have involved increased NO from other NOS isoforms, since the memory deficit in young nNOS-/- animals could be rescued by the NO donor molsidomine. Both nNOS-/- and wildtype animals showed an age-associated decline in locomotor activity, although young nNOS-/- animals were significantly more active than wildtypes, possibly due to an increased interest in novelty. Overall, our findings suggest that lack of NO release via nNOS may protect animals to some extent against age-associated cognitive decline in memory tasks typically involving olfactory and hippocampal regions, but not against declines in reversal learning or locomotor activity.

  16. Learning intrinsic excitability in medium spiny neurons [v2; ref status: indexed, http://f1000r.es/30b]

    Directory of Open Access Journals (Sweden)

    Gabriele Scheler

    2014-02-01

    Full Text Available We present an unsupervised, local, activation-dependent learning rule for intrinsic plasticity (IP) which affects the composition of ion channel conductances for single neurons in a use-dependent way. We use a single-compartment conductance-based model for medium spiny striatal neurons in order to show the effects of parameterization of individual ion channels on the neuronal membrane potential-current relationship (activation function). We show that parameter changes within the physiological ranges are sufficient to create an ensemble of neurons with significantly different activation functions. We emphasize that the effects of intrinsic neuronal modulation on spiking behavior require a distributed mode of synaptic input and can be eliminated by strongly correlated input. We show how modulation and adaptivity in ion channel conductances can be utilized to store patterns without an additional contribution by synaptic plasticity (SP). The adaptation of the spike response may result in either "positive" or "negative" pattern learning. However, read-out of stored information depends on a distributed pattern of synaptic activity to let intrinsic modulation determine the spike response. We briefly discuss the implications of this conditional memory on learning and addiction.
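The core of an activation-dependent IP rule can be illustrated with a minimal sketch. The published model tunes individual ion channel conductances; here a single excitability parameter stands in for their net effect on the activation function, clamped to a stand-in "physiological range". The sigmoid, the target rate, and all constants are assumptions for illustration, not the paper's parameterization.

```python
import math

def activation(current, gain, threshold):
    """Sigmoidal stand-in for the membrane potential-current relationship."""
    return 1.0 / (1.0 + math.exp(-gain * (current - threshold)))

def ip_step(threshold, inputs, gain=1.0, target=0.3, lr=0.5):
    """One intrinsic-plasticity update: shift the activation function so
    the mean response tracks a set point, within a bounded range."""
    mean = sum(activation(i, gain, threshold) for i in inputs) / len(inputs)
    new = threshold + lr * (mean - target)   # fires too much -> less excitable
    return min(max(new, -2.0), 2.0)          # clamp to a physiological range

inputs = [0.2, 0.5, 0.8]                     # recurring synaptic drive
th = 0.0
for _ in range(300):
    th = ip_step(th, inputs)

mean_rate = sum(activation(i, 1.0, th) for i in inputs) / len(inputs)
```

With a different history of inputs, the same rule settles at a different threshold, which is one way use-dependent tuning can produce an ensemble of neurons with distinct activation functions.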

  17. In vivo Ca2+ imaging of mushroom body neurons during olfactory learning in the honey bee.

    Science.gov (United States)

    Haehnel, Melanie; Froese, Anja; Menzel, Randolf

    2009-08-18

    The in vivo and semi-in vivo preparation for calcium imaging was developed in our lab by Joerges, Küttner and Galizia more than ten years ago to measure odor-evoked activity in the antennal lobe. Since then, it has been continuously refined and applied to different neuropils in the bee brain. Here, we describe the preparation currently used in the lab to measure activity in mushroom body neurons using a dextran-coupled calcium-sensitive dye (Fura-2). We retrogradely stain mushroom body neurons by injecting dye into their axons or soma region. We focus on reducing the invasiveness of the preparation, so that it is still possible to train the bee using PER conditioning. We are able to monitor and quantify the behavioral response by recording electromyograms from the muscle that controls the PER (M17). After the physiological experiment, the imaged structures are investigated in greater detail using confocal scanning microscopy to address the identity of the neurons.

  18. Environmental enrichment protects spatial learning and hippocampal neurons from the long-lasting effects of protein malnutrition early in life.

    Science.gov (United States)

    Soares, Roberto O; Horiquini-Barbosa, Everton; Almeida, Sebastião S; Lachat, João-José

    2017-09-29

    As early protein malnutrition has a critically long-lasting impact on the hippocampal formation and its role in learning and memory, and environmental enrichment has demonstrated great success in ameliorating functional deficits, here we ask whether exposure to an enriched environment could be employed to prevent spatial memory impairment and neuroanatomical changes in the hippocampus of adult rats maintained on a protein deficient diet during brain development (P0-P35). To elucidate the protective effects of environmental enrichment, we used the Morris water task and neuroanatomical analysis to determine whether changes in spatial memory and number and size of CA1 neurons differed significantly among groups. Protein malnutrition and environmental enrichment during brain development had significant effects on the spatial memory and hippocampal anatomy of adult rats. Malnourished but non-enriched rats (MN) required more time to find the hidden platform than well-nourished but non-enriched rats (WN). Malnourished but enriched rats (ME) performed better than the MN and similarly to the WN rats. There was no difference between well-nourished but non-enriched and enriched rats (WE). Anatomically, fewer CA1 neurons were found in the hippocampus of MN rats than in those of WN rats. However, it was also observed that ME and WN rats retained a similar number of neurons. These results suggest that environmental enrichment during brain development alters cognitive task performance and hippocampal neuroanatomy in a manner that is neuroprotective against malnutrition-induced brain injury. These results could have significant implications for malnourished infants expected to be at risk of disturbed brain development. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  20. Algorithms and parallel computing

    CERN Document Server

    Gebali, Fayez

    2011-01-01

    There is a software gap between the hardware potential and the performance that can be attained using today's software parallel program development tools. The tools need manual intervention by the programmer to parallelize the code. Programming a parallel computer requires closely studying the target algorithm or application, more so than in the traditional sequential programming we have all learned. The programmer must be aware of the communication and data dependencies of the algorithm or application. This book provides the techniques to explore the possible ways to

  1. Attenuated Response to Methamphetamine Sensitization and Deficits in Motor Learning and Memory after Selective Deletion of [beta]-Catenin in Dopamine Neurons

    Science.gov (United States)

    Diaz-Ruiz, Oscar; Zhang, YaJun; Shan, Lufei; Malik, Nasir; Hoffman, Alexander F.; Ladenheim, Bruce; Cadet, Jean Lud; Lupica, Carl R.; Tagliaferro, Adriana; Brusco, Alicia; Backman, Cristina M.

    2012-01-01

    In the present study, we analyzed mice with a targeted deletion of [beta]-catenin in DA neurons (DA-[beta]cat KO mice) to address the functional significance of this molecule in the shaping of synaptic responses associated with motor learning and following exposure to drugs of abuse. Relative to controls, DA-[beta]cat KO mice showed significant…

  3. Neuron class-specific requirements for Fragile X Mental Retardation Protein in critical period development of calcium signaling in learning and memory circuitry.

    Science.gov (United States)

    Doll, Caleb A; Broadie, Kendal

    2016-05-01

    Neural circuit optimization occurs through sensory activity-dependent mechanisms that refine synaptic connectivity and information processing during early-use developmental critical periods. Fragile X Mental Retardation Protein (FMRP), the gene product lost in Fragile X syndrome (FXS), acts as an activity sensor during critical period development, both as an RNA-binding translation regulator and channel-binding excitability regulator. Here, we employ a Drosophila FXS disease model to assay calcium signaling dynamics with a targeted transgenic GCaMP reporter during critical period development of the mushroom body (MB) learning/memory circuit. We find FMRP regulates depolarization-induced calcium signaling in a neuron-specific manner within this circuit, suppressing activity-dependent calcium transients in excitatory cholinergic MB input projection neurons and enhancing calcium signals in inhibitory GABAergic MB output neurons. Both changes are restricted to the developmental critical period and rectified at maturity. Importantly, conditional genetic (dfmr1) rescue of null mutants during the critical period corrects calcium signaling defects in both neuron classes, indicating a temporally restricted FMRP requirement. Likewise, conditional dfmr1 knockdown (RNAi) during the critical period replicates constitutive null mutant defects in both neuron classes, confirming cell-autonomous requirements for FMRP in developmental regulation of calcium signaling dynamics. Optogenetic stimulation during the critical period enhances depolarization-induced calcium signaling in both neuron classes, but this developmental change is eliminated in dfmr1 null mutants, indicating that the activity-dependent regulation requires FMRP. These results show that FMRP shapes neuron class-specific calcium signaling in excitatory vs. inhibitory neurons in developing learning/memory circuitry, and that FMRP mediates activity-dependent regulation of calcium signaling specifically during the early-use critical period.

  4. The processes of neuronal recycling under the bias of implicit learning: literacy methods in focus

    OpenAIRE

    Guaresi, Ronei

    2011-01-01

    Based on advances in neuroscience and the literature resulting from those advances, this text reflects on the acquisition of writing, specifically on phonetic and global literacy methods, from the standpoint of implicit and explicit learning in the acquisition of human language, which is essentially complex and arbitrary. It does so by recovering the notion of connectionist learning and the distinction between implicit and explicit learning, a theoretical recovery that arose from discoveries...

  5. Parallelization of learning problems by artificial neural networks. Application in external radiotherapy; Parallelisation de problemes d'apprentissage par des reseaux neuronaux artificiels. Application en radiotherapie externe

    Energy Technology Data Exchange (ETDEWEB)

    Sauget, M

    2007-12-15

    This research is about the application of neural networks in the external radiotherapy domain. The goal is to elaborate a new evaluating system for the radiation dose distributions in heterogeneous environments. The final objective of this work is to build a complete tool kit to evaluate the optimal treatment planning. My first research point is the conception of an incremental learning algorithm. The interest of my work is to combine different optimizations specialized in function interpolation and to propose a new algorithm allowing the neural network architecture to change during the learning phase. This algorithm minimises the final size of the neural network while keeping a good accuracy. The second part of my research is to parallelize the previous incremental learning algorithm. The goal of that work is to increase the speed of the learning step as well as the size of the learned dataset needed in a clinical case. For that, our incremental learning algorithm presents an original data decomposition with overlapping, together with a fault tolerance mechanism. My last research point is a fast and accurate algorithm computing the radiation dose deposit in any heterogeneous environment. At the present time, the existing solutions are not optimal. The fast solutions are not accurate and do not give an optimal treatment planning. On the other hand, the accurate solutions are far too slow to be used in a clinical context. Our algorithm answers this problem by bringing both rapidity and accuracy. The concept is to use an adequately trained neural network together with a mechanism taking into account environment changes. The advantage of this algorithm is to avoid the use of a complex physical code while keeping a good accuracy and reasonable computation times. (author)
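The "data decomposition with overlapping" mentioned above can be sketched simply: the training set is split into contiguous chunks for the parallel workers, with neighboring chunks sharing a margin of samples so that each local approximation stays consistent at the chunk boundaries. Chunk sizes and the overlap width below are illustrative, not the thesis's actual parameters.

```python
# Sketch of an overlapping data decomposition for parallel learning.
def decompose(samples, n_workers, overlap):
    """Split `samples` into n_workers contiguous chunks that extend by
    `overlap` items into each neighbor, so boundary regions are seen by
    two workers."""
    chunk = len(samples) // n_workers
    parts = []
    for w in range(n_workers):
        lo = max(0, w * chunk - overlap)
        hi = min(len(samples), (w + 1) * chunk + overlap)
        parts.append(samples[lo:hi])
    return parts

data = list(range(100))
parts = decompose(data, n_workers=4, overlap=5)
```

Every sample is covered by at least one worker, and adjacent workers share samples, which is also what makes a simple fault-tolerance scheme possible: a lost chunk's margins are already held by its neighbors.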

  6. Neurogenesis and the Spacing Effect: Learning over Time Enhances Memory and the Survival of New Neurons

    Science.gov (United States)

    Sisti, Helene M.; Glass, Arnold L.; Shors, Tracey J.

    2007-01-01

    Information that is spaced over time is better remembered than the same amount of information massed together. This phenomenon, known as the spacing effect, was explored with respect to its effect on learning and neurogenesis in the adult dentate gyrus of the hippocampal formation. Because the cells are generated over time and because learning…

  7. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition

    Science.gov (United States)

    Bill, Johannes; Buesing, Lars; Habenschuss, Stefan; Nessler, Bernhard; Maass, Wolfgang; Legenstein, Robert

    2015-01-01

    During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input. PMID:26284370
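The interplay of Hebbian synaptic and homeostatic intrinsic plasticity described above can be caricatured in a tiny winner-take-all circuit: lateral inhibition selects one winner, the winner's synapses move toward the current input, and a homeostatic bias term nudges every neuron's win rate toward uniform. The rules and constants below are simplified illustrations, not the update equations derived in the paper.

```python
# Toy WTA circuit with Hebbian and homeostatic plasticity.
import random

random.seed(0)
N, D = 2, 4                                   # neurons, input dimensions
w = [[random.random() for _ in range(D)] for _ in range(N)]
bias = [0.0] * N
patterns = [[1, 1, 0, 0], [0, 0, 1, 1]]       # two salient input features

for step in range(2000):
    x = patterns[step % 2]
    scores = [bias[k] + sum(w[k][j] * x[j] for j in range(D))
              for k in range(N)]
    win = scores.index(max(scores))           # lateral inhibition: one winner
    for j in range(D):                        # Hebbian: pull winner toward x
        w[win][j] += 0.05 * (x[j] - w[win][j])
    for k in range(N):                        # homeostatic excitability rule
        bias[k] -= 0.01 * ((1.0 if k == win else 0.0) - 1.0 / N)

# After training, each neuron has specialized on one input feature.
```

The homeostatic term prevents one neuron from capturing every input, so the two neurons end up as local experts for the two patterns, a miniature version of the distributed assemblies the paper describes.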

  8. A mouse model of visual perceptual learning reveals alterations in neuronal coding and dendritic spine density in the visual cortex

    Directory of Open Access Journals (Sweden)

    Yan eWang

    2016-03-01

    Full Text Available Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training near the individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicated functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.

  10. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition.

    Directory of Open Access Journals (Sweden)

    Johannes Bill

    Full Text Available During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.

  11. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  12. Unsupervised learning toward brain imaging data analysis: cigarette craving and resistance related neuronal activations from functional magnetic resonance imaging data analysis

    Science.gov (United States)

    Kim, Dong-Youl; Lee, Jong-Hwan

    2014-05-01

    Data-driven unsupervised learning methods such as independent component analysis (ICA) have been gainfully applied to blood-oxygenation-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) data, compared to a model-based general linear model (GLM). This is due to the ability of such unsupervised learning methods to extract meaningful neuronal activity from the BOLD signal, which is a mixture of confounding non-neuronal artifacts, such as head motions and physiological artifacts, as well as neuronal signals. In this study, we support this claim by identifying neuronal underpinnings of cigarette craving and cigarette resistance. The fMRI data were acquired from heavy cigarette smokers (n = 14) while they alternately watched images with and without cigarette smoking. During acquisition of two fMRI runs, they were asked to crave when they watched cigarette smoking images or to resist the urge to smoke. Data-driven approaches of the group independent component analysis (GICA) method based on temporal concatenation (TC-GICA) and its extension with iterative dual-regression (TC-GICA-iDR) were applied to the data. From the results, cigarette craving and cigarette resistance related neuronal activations were identified in the visual area and superior frontal areas, respectively, with a greater statistical significance from the TC-GICA-iDR method than the TC-GICA method. On the other hand, the neuronal activity levels in many of these regions were not statistically different between cigarette craving and cigarette resistance with the GLM method, due to potentially aberrant BOLD signals.
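The model-based GLM route contrasted above reduces, per voxel, to a simple regression: the voxel's BOLD time course is fitted against a task regressor, and the estimated beta measures task-related activation. The design vector and voxel time course below are synthetic toy data, not fMRI.

```python
# Minimal per-voxel GLM fit: ordinary least squares of signal on regressor.
task = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]            # block-design regressor
voxel = [0.1, -0.1, 1.2, 0.9, 0.0, 0.1, 1.1, 1.0, -0.2, 0.1]

def glm_beta(y, x):
    """OLS slope of y on x (with an implicit intercept)."""
    n = len(y)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

beta = glm_beta(voxel, task)   # positive beta: voxel tracks the task blocks
```

An ICA-style analysis instead estimates the regressors themselves from the data, which is what lets it separate neuronal components from motion and physiological confounds that a fixed design matrix cannot model.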

  13. Parallel biocomputing

    Directory of Open Access Journals (Sweden)

    Witte John S

    2011-03-01

    Full Text Available Abstract Background With the advent of high throughput genomics and high-resolution imaging techniques, there is a growing necessity in biology and medicine for parallel computing, and with the low cost of computing, it is now cost-effective for even small labs or individuals to build their own personal computation cluster. Methods Here we briefly describe how to use commodity hardware to build a low-cost, high-performance compute cluster, and provide an in-depth example and sample code for parallel execution of R jobs using MOSIX, a mature extension of the Linux kernel for parallel computing. A similar process can be used with other cluster platform software. Results As a statistical genetics example, we use our cluster to run a simulated eQTL experiment. Because eQTL is computationally intensive, and is conceptually easy to parallelize, like many statistics/genetics applications, parallel execution with MOSIX gives a linear speedup in analysis time with little additional effort. Conclusions We have used MOSIX to run a wide variety of software programs in parallel with good results. The limitations and benefits of using MOSIX are discussed and compared to other platforms.
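The eQTL simulation above is a classic embarrassingly parallel workload: independent replicates are farmed out and collected with a simple map. The sketch below uses a Python thread pool in place of MOSIX-distributed R jobs; `simulate_eqtl` is a hypothetical stand-in for one simulated association test, not code from the paper.

```python
# Parallel map over independent simulation replicates.
from concurrent.futures import ThreadPoolExecutor
import random

def simulate_eqtl(seed):
    """One toy permutation replicate: max 'association score' over 100
    markers (a placeholder for a real eQTL scan)."""
    rng = random.Random(seed)
    return max(rng.random() for _ in range(100))

with ThreadPoolExecutor(max_workers=4) as pool:
    null_stats = list(pool.map(simulate_eqtl, range(200)))

# Empirical 95th-percentile threshold from the null distribution.
threshold = sorted(null_stats)[int(0.95 * len(null_stats))]
```

Because replicates share no state, the speedup is limited only by the number of workers, which is the same property that gave the MOSIX cluster its near-linear speedup.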

  14. All-memristive neuromorphic computing with level-tuned neurons.

    Science.gov (United States)

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-02

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from the vast amounts of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture along with the homogeneous neuro-synaptic dynamics implemented with nanoscale phase-change memristors represent a significant step towards the development of ultrahigh-density neuromorphic co-processors.
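The correlation-detection task mentioned above has a simple software analogue: under an STDP-like rule, synapses from input streams that tend to fire just before the output spike are potentiated, so weights from correlated streams grow while uncorrelated ones decay. The threshold unit, rates, and constants below are illustrative, not the memristive hardware's dynamics.

```python
# STDP-style detection of a correlated group among parallel input streams.
import random

random.seed(1)
n_inputs = 10
w = [0.5] * n_inputs
correlated = set(range(5))          # streams 0-4 share a common event source

for t in range(3000):
    ref = random.random() < 0.2     # shared event drives the correlated group
    spikes = [(i in correlated and ref) or random.random() < 0.04
              for i in range(n_inputs)]
    drive = sum(wi for wi, s in zip(w, spikes) if s)
    if drive > 1.2:                 # postsynaptic (output) spike
        for i in range(n_inputs):
            if spikes[i]:           # pre just before post: potentiate
                w[i] = min(1.0, w[i] + 0.02)
            else:                   # no recent pre spike: depress
                w[i] = max(0.0, w[i] - 0.008)

corr_mean = sum(w[i] for i in correlated) / 5
uncorr_mean = sum(w[i] for i in range(5, 10)) / 5
```

After training, the weight vector itself reports which streams were temporally correlated, with no labels or supervision involved.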

  16. BDNF and Schizophrenia: from Neurodevelopment to Neuronal Plasticity, Learning and Memory.

    Directory of Open Access Journals (Sweden)

    Rodrigo eNieto

    2013-06-01

    Full Text Available Brain-Derived Neurotrophic Factor (BDNF) is a neurotrophin that has been related not only to neurodevelopment and neuroprotection, but also to synapse regulation, learning and memory. Research focused on the neurobiology of schizophrenia has emphasized the relevance of neurodevelopmental and neurotoxicity-related elements in the pathogenesis of this disease. Research focused on the clinical features of schizophrenia in the past decades has emphasized the relevance of the cognitive deficits of this illness, considered a core manifestation and an important predictor of functional outcome. Variations in neurotrophins such as BDNF may have a role as part of the molecular mechanisms underlying these processes, from the neurodevelopmental alterations to the molecular mechanisms of cognitive dysfunction in patients with schizophrenia.

  17. A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity.

    Science.gov (United States)

    Indiveri, Giacomo; Chicca, Elisabetta; Douglas, Rodney

    2006-01-01

    We present a mixed-mode analog/digital VLSI device comprising an array of leaky integrate-and-fire (I&F) neurons, adaptive synapses with spike-timing dependent plasticity, and an asynchronous event-based communication infrastructure that allows the user to (re)configure networks of spiking neurons with arbitrary topologies. The asynchronous communication protocol used by the silicon neurons to transmit spikes (events) off-chip and by the silicon synapses to receive spikes from the outside is based on the "address-event representation" (AER). We describe the analog circuits designed to implement the silicon neurons and synapses and present experimental data showing the neurons' response properties and the synapses' characteristics in response to AER input spike trains. Our results indicate that these circuits can be used in massively parallel VLSI networks of I&F neurons to simulate real-time complex spike-based learning algorithms.
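
    A minimal software model of the leaky I&F neuron at the heart of such arrays, assuming a discrete-time leak and instantaneous synaptic kicks (the chip itself implements these dynamics in analog circuits, so parameter names and values here are purely illustrative):

```python
# Discrete-time leaky integrate-and-fire neuron: the membrane potential V
# decays toward rest, jumps on each input spike, and the neuron emits an
# output spike (and resets) on reaching threshold.
def lif(spike_times, T=100, tau=10.0, w=0.6, v_th=1.0, v_reset=0.0):
    v, out = v_reset, []
    inputs = set(spike_times)
    for t in range(T):
        v += -v / tau              # leak toward the resting potential
        if t in inputs:
            v += w                 # synaptic kick
        if v >= v_th:
            out.append(t)          # fire...
            v = v_reset            # ...and reset
    return out

# A fast input train integrates past threshold; a sparse one leaks away first.
fast = lif(range(0, 100, 2))
slow = lif(range(0, 100, 25))
print(len(fast), len(slow))
```

    The same leak/integrate/reset cycle is what the analog silicon neuron performs continuously in time.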

  18. Multi-task Coalition Parallel Formation Strategy Based on Reinforcement Learning%基于强化学习的多任务联盟并行形成策略

    Institute of Scientific and Technical Information of China (English)

    蒋建国; 苏兆品; 齐美彬; 张国富

    2008-01-01

    Agent coalition is an important manner of agent coordination and cooperation. By forming a coalition, agents can enhance their ability to solve problems and obtain more utility. In this paper, a novel multi-task coalition parallel formation strategy is presented, and it is proved theoretically that the process of multi-task coalition formation is a Markov decision process. Moreover, reinforcement learning is used to obtain the agents' behavior strategy, and the process of multi-task coalition parallel formation is described. In multi-task-oriented domains, the strategy can effectively form multi-task coalitions in parallel.
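
    The claim that coalition choices can be learned as actions of a Markov decision process can be illustrated with a deliberately tiny example: one agent repeatedly chooses which of two tasks to join and learns the values with a one-step Q-learning update. This toy (a single-state decision process with invented rewards) is not the authors' algorithm, only the underlying idea.

```python
import random
random.seed(1)

# Toy decision process: the agent repeatedly picks one of two tasks to join.
# Joining task 1 yields more utility on average; Q-learning discovers this.
REWARD = {0: 1.0, 1: 2.0}           # illustrative expected coalition utilities
q = [0.0, 0.0]
ALPHA, EPS = 0.1, 0.1               # learning rate and exploration rate

for episode in range(2000):
    # epsilon-greedy action selection
    a = random.randrange(2) if random.random() < EPS else max((0, 1), key=q.__getitem__)
    r = REWARD[a] + random.gauss(0, 0.1)   # noisy utility of the chosen coalition
    q[a] += ALPHA * (r - q[a])             # one-step Q-learning update

print(max((0, 1), key=q.__getitem__))      # task with the higher learned value
```

    In the paper's setting the state and action spaces are far richer (agents, tasks, partial coalitions), but the update rule driving the behavior strategy is of this form.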

  19. PARALLEL STABILIZATION

    Institute of Scientific and Technical Information of China (English)

    J.L.LIONS

    1999-01-01

    A new algorithm for the stabilization of (possibly turbulent, chaotic) distributed systems, governed by linear or nonlinear systems of equations, is presented. The SPA (Stabilization Parallel Algorithm) is based on a systematic parallel decomposition of the problem (related to arbitrarily overlapping decomposition of domains) and on a penalty argument. SPA is presented here for the case of linear parabolic equations with distributed or boundary control. It extends to practically all linear and nonlinear evolution equations, as will be presented in several other publications.

  20. Orthogonal topography in the parallel input architecture of songbird HVC.

    Science.gov (United States)

    Elliott, Kevin C; Wu, Wei; Bertram, Richard; Hyson, Richard L; Johnson, Frank

    2017-06-15

    Neural activity within the cortical premotor nucleus HVC (acronym is name) encodes the learned songs of adult male zebra finches (Taeniopygia guttata). HVC activity is driven and/or modulated by a group of five afferent nuclei (the Medial Magnocellular nucleus of the Anterior Nidopallium, MMAN; Nucleus Interface, NIf; nucleus Avalanche, Av; the Robust nucleus of the Arcopallium, RA; the Uvaeform nucleus, Uva). While earlier evidence suggested that HVC receives a uniformly distributed and nontopographic pattern of afferent input, recent evidence suggests this view is incorrect (Basista et al., ). Here, we used a double-labeling strategy (varying both the distance between and the axial orientation of dual tracer injections into HVC) to reveal a massively parallel and in some cases topographic pattern of afferent input. Afferent neurons target only one rostral or caudal location within medial or lateral HVC, and each HVC location receives convergent input from each afferent nucleus in parallel. Quantifying the distributions of single-labeled cells revealed an orthogonal topography in the organization of afferent input from MMAN and NIf, two cortical nuclei necessary for song learning. MMAN input is organized across the lateral-medial axis whereas NIf input is organized across the rostral-caudal axis. To the extent that HVC activity is influenced by afferent input during the learning, perception, or production of song, functional models of HVC activity may need revision to account for the parallel input architecture of HVC, along with the orthogonal input topography of MMAN and NIf. © 2017 Wiley Periodicals, Inc.

  1. Postnatal Loss of P/Q-type Channels Confined to Rhombic Lip Derived Neurons Alters Synaptic Transmission at the Parallel Fiber to Purkinje Cell Synapse and Replicates Genomic Cacna1a Mutation Phenotype of Ataxia and Seizures in Mice

    Science.gov (United States)

    Maejima, Takashi; Wollenweber, Patric; Teusner, Lena U. C.; Noebels, Jeffrey L.; Herlitze, Stefan; Mark, Melanie D.

    2013-01-01

    Ataxia, episodic dyskinesia and thalamocortical seizures are associated with an inherited loss of P/Q-type voltage-gated Ca2+ channel function. P/Q-type channels are widely expressed throughout the neuraxis, obscuring identification of the critical networks underlying these complex neurological disorders. We recently showed that the conditional postnatal loss of P/Q-type channels in cerebellar Purkinje cells (PCs) in mice (purky) leads to these aberrant phenotypes, suggesting that intrinsic alteration in PC output is a sufficient pathogenic factor for disease initiation. The question arises whether P/Q-type channel deletion confined to a single upstream cerebellar synapse might induce the pathophysiological abnormality of genomically inherited P/Q-type channel disorders. PCs integrate two excitatory inputs, climbing fibers from inferior olive and parallel fibers (PFs) from granule cells (GCs) that receive mossy fiber (MF) input derived from precerebellar nuclei. In this paper, we introduce a new mouse model with a selective knock-out of P/Q-type channels in rhombic lip derived neurons including PF- and MF-pathways (quirky). We found that in quirky mice, PF-PC synaptic transmission is reduced during low-frequency stimulation. Using focal light stimulation of GCs that express optogenetic light-sensitive channels, channelrhodopsin-2, we found that modulation of PC firing via GC input is reduced in quirky mice. Phenotypic analysis revealed that quirky mice display ataxia, dyskinesia and absence epilepsy. These results suggest that developmental alteration of patterned input confined to only one of the main afferent cerebellar excitatory synaptic pathways has a significant role in generating the neurological phenotype associated with the global genomic loss of P/Q-type channel function. PMID:23516282

  2. Parallel Worlds

    DEFF Research Database (Denmark)

    Steno, Anne Mia

    2013-01-01

    as a symbol of something else, for instance as a way of handling uncertainty in difficult times, magical practice should also be seen as an emic concept. In this context, understanding the existence of two parallel universes, the profane and the magic, is important because the witches’ movements across...

  3. Ectopic neurons in the hippocampus may be a cause of learning disability after prenatal exposure to X-rays in rats.

    Science.gov (United States)

    Takai, Nobuhiko; Sun, Xue-Zhi; Ando, Koichi; Mishima, Kenichi; Takahashi, Sentaro

    2004-12-01

    The relationship between an impairment of spatial navigation and the incidence of ectopic neurons in the dorsal hippocampus was investigated in adult rats that were prenatally exposed to X-ray irradiation. Adult rats which had received 1.5 Gy X-rays at embryonic day 15 (E15) showed significant learning disability in the water-maze task. According to the mean value of the swimming time, we categorized the irradiated adult rats into three groups: slightly damaged, mildly damaged and severely damaged. No significant difference in brain weight was found between the three groups. Ectopic neurons at abnormal locations were prominently observed in the dorsal hippocampus of the severely damaged group, which showed a remarkable learning disturbance, while no ectopia was observed in the hippocampus of the slightly damaged group. This suggests that the cognitive dysfunction induced by prenatal exposure to X-ray irradiation may be, at least in part, attributable to ectopic neurons of the hippocampus.

  4. Glucocorticoids increase impairments in learning and memory due to elevated amyloid precursor protein expression and neuronal apoptosis in 12-month old mice.

    Science.gov (United States)

    Li, Wei-Zu; Li, Wei-Ping; Yao, Yu-You; Zhang, Wen; Yin, Yan-Yan; Wu, Guo-Cui; Gong, Hui-Ling

    2010-02-25

    Alzheimer's disease is a chronic neurodegenerative disorder marked by a progressive loss of memory and cognitive function. Stress-level glucocorticoids are correlated with dementia progression in patients with Alzheimer's disease. In this study, twelve-month-old male mice were chronically treated for 21 days with stress-level dexamethasone (5 mg/kg). We investigated the pathological consequences of dexamethasone administration on learning and memory impairments, amyloid precursor protein processing and neuronal cell apoptosis in these mice. Our results indicate that dexamethasone can induce learning and memory impairments and neuronal cell apoptosis, and that mRNA levels of amyloid precursor protein, beta-secretase and caspase-3 are selectively increased after dexamethasone administration. Immunohistochemistry demonstrated that amyloid precursor protein, caspase-3 and cytochrome c are significantly increased in the cortex and the CA1 and CA3 regions of the hippocampus of 12-month-old male mice. Furthermore, dexamethasone treatment induced neuronal apoptosis in the cortex and hippocampus and increased the activity of caspase-9 and caspase-3. These findings suggest that the high levels of glucocorticoids found in Alzheimer's disease are not merely a consequence of the disease process but rather play a central role in the development and progression of the disease. Stress management or pharmacological reduction of glucocorticoids therefore warrants additional consideration in Alzheimer's disease therapies.

  5. Cajal bodies in neurons.

    Science.gov (United States)

    Lafarga, Miguel; Tapia, Olga; Romero, Ana M; Berciano, Maria T

    2016-09-14

    Cajal is commonly regarded as the father of modern neuroscience in recognition of his fundamental work on the structure of the nervous system. But Cajal also made seminal contributions to the knowledge of nuclear structure in the early 1900s, including the discovery of the "accessory body" later renamed "Cajal body" (CB). This important nuclear structure has emerged as a center for the assembly of ribonucleoproteins (RNPs) required for splicing, ribosome biogenesis and telomere maintenance. The modern era of CB research started in the 1990s with the discovery of coilin, now known as a scaffold protein of CBs, and specific probes for small nuclear RNAs (snRNAs). In this review, we summarize what we have learned in recent decades concerning CBs in post-mitotic neurons, thereby ruling out dynamic changes in CB functions during the cell cycle. We show that CBs are particularly prominent in neurons, where they frequently associate with the nucleolus. Neuronal CBs are transcription-dependent nuclear organelles. Indeed, their number dynamically adjusts to support the high neuronal demand for splicing and ribosome biogenesis required for sustaining metabolic and bioelectrical activity. Mature neurons have canonical CBs enriched in coilin, survival motor neuron protein and snRNPs. Disruption and loss of neuronal CBs associate with severe neuronal dysfunctions in several neurological disorders such as motor neuron diseases. In particular, CB depletion in motor neurons seems to reflect a perturbation of transcription and splicing in spinal muscular atrophy, the most common genetic cause of infant mortality.

  6. Teaching and learning the Hodgkin-Huxley model based on software developed in NEURON's programming language hoc

    National Research Council Canada - National Science Library

    Hernández, Oscar E; Zurek, Eduardo E

    2013-01-01

    .... The aim of this work is to develop a didactic and easy-to-use computational tool in the NEURON simulation environment, which allows graphical visualization of both the passive and active conduction...
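
    The tool above is built on NEURON's hoc language, but the Hodgkin-Huxley dynamics it visualizes can also be integrated directly, e.g. with forward Euler in plain Python. The sketch below uses the classic 1952 squid-axon parameters with voltage expressed as displacement from rest; it is an independent illustration, not the software described in the paper.

```python
import math

# Minimal Hodgkin-Huxley membrane patch, forward-Euler integration.
# Units: mV (relative to rest), ms, uA/cm^2, mS/cm^2, uF/cm^2.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 115.0, -12.0, 10.6

# Voltage-dependent gate rate functions (classic HH fits).
def a_n(V): return 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)
def b_n(V): return 0.125 * math.exp(-V / 80)
def a_m(V): return 0.1 * (25 - V) / (math.exp((25 - V) / 10) - 1)
def b_m(V): return 4.0 * math.exp(-V / 18)
def a_h(V): return 0.07 * math.exp(-V / 20)
def b_h(V): return 1.0 / (math.exp((30 - V) / 10) + 1)

def simulate(I=10.0, dt=0.01, T=50.0):
    V = 0.0
    n, m, h = 0.317, 0.053, 0.596          # approximate resting gate values
    trace = []
    for _ in range(int(T / dt)):
        INa = gNa * m**3 * h * (V - ENa)   # sodium current
        IK = gK * n**4 * (V - EK)          # potassium current
        IL = gL * (V - EL)                 # leak current
        V += dt * (I - INa - IK - IL) / C
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        trace.append(V)
    return trace

trace = simulate()
print(max(trace) > 80)   # sustained injected current elicits full action potentials
```

    Plotting `trace` reproduces the repetitive spiking that the NEURON-based tool displays graphically for the active membrane case.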

  7. Effectiveness of telemedicine and distance learning applications for patients with chronic heart failure. A protocol for prospective parallel group non-randomised open label study.

    Science.gov (United States)

    Vanagas, Giedrius; Umbrasiene, Jelena; Slapikas, Rimvydas

    2012-01-01

    Chronic heart failure in the Baltic Sea Region is responsible for more hospitalisations than all forms of cancer combined and is one of the leading causes of hospitalisations in elderly patients. Frequent hospitalisations, along with other direct and indirect costs, place a financial burden on healthcare systems. We aim to test the hypothesis that telemedicine and distance learning applications are superior to the current standard of home care. A prospective parallel group non-randomised open label study in patients with New York Heart Association (NYHA) II-III chronic heart failure will be carried out in six Baltic Sea Region countries. The study is organised into two 6-month follow-up periods. The first 6-month period is based on active implementation of tele-education and/or telemedicine for patients in two groups (active run period) and one standard care group (passive run period). The second 6-month period of observation will be based on the standard care model (passive run period) for all three groups. Our proposed practice change is based on translational research with empirically supported interventions brought to practice and aims to find the home care model best suited to patient needs. This study has been approved by the National Bioethics Committee (2011-03-07; Registration No: BE-2-11). This study has been registered in the Australian New Zealand Clinical Trials Registry (ANZCTR) with registration number ACTRN12611000834954.

  8. 基于机器学习的并行文件系统性能预测%Predicting the Parallel File System Performance via Machine Learning

    Institute of Scientific and Technical Information of China (English)

    赵铁柱; 董守斌; Verdi March; Simon See

    2011-01-01

    Parallel file systems can effectively solve the massive data storage and I/O bottleneck problems of high-performance computing systems. Because the factors that influence system performance are highly complex, effectively evaluating and predicting that performance has become a potential challenge and a research hotspot. Taking the performance evaluation and prediction of parallel file systems as our research goal, and after studying the architecture and performance factors of such file systems, we design a performance prediction model for parallel file systems based on machine learning. We use feature selection algorithms to reduce the number of performance factors to be tested, and mine the particular relationship between system performance and its impact factors to predict the performance of a specific file system. We evaluate and predict the performance of a specific Lustre file system through a large set of experiment cases. The evaluation and experiment results indicate that threads/OST, the number of OSSs (Object Storage Servers), the number of disks, and the organization and type of RAID are the four most important factors for tuning system performance, and that the mean relative error of the predictions can be kept between 25.1% and 32.1%, giving good prediction accuracy.
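
    The pipeline the abstract describes (rank the performance factors, then learn the relationship between factors and performance) can be sketched on synthetic data. The factor names and coefficients below are illustrative stand-ins, not measured Lustre results, and the simple correlation ranking stands in for the paper's feature selection algorithms.

```python
import random
random.seed(2)

# Synthetic I/O benchmark: throughput depends strongly on two factors
# (standing in for threads/OST and number of OSSs) and weakly on the rest.
def sample():
    threads, osss, disks, raid = (random.uniform(0, 1) for _ in range(4))
    perf = 3.0 * threads + 2.0 * osss + 0.1 * disks + random.gauss(0, 0.05)
    return [threads, osss, disks, raid], perf

data = [sample() for _ in range(200)]

def corr(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

y = [p for _, p in data]
scores = [abs(corr([f[i] for f, _ in data], y)) for i in range(4)]
ranked = sorted(range(4), key=lambda i: -scores[i])
print(ranked[:2])   # indices of the two dominant performance factors
```

    A regression model trained on only the top-ranked factors then plays the role of the paper's predictive model; dropping weak factors reduces the number of configurations that must be benchmarked.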

  9. Machine learning on-a-chip: a high-performance low-power reusable neuron architecture for artificial neural networks in ECG classifications.

    Science.gov (United States)

    Sun, Yuwen; Cheng, Allen C

    2012-07-01

    Artificial neural networks (ANNs) are a promising machine learning technique in classifying non-linear electrocardiogram (ECG) signals and recognizing abnormal patterns suggesting risks of cardiovascular diseases (CVDs). In this paper, we propose a new reusable neuron architecture (RNA) enabling a performance-efficient and cost-effective silicon implementation for ANN. The RNA architecture consists of a single layer of physical RNA neurons, each of which is designed to use minimal hardware resources (e.g., a single 2-input multiplier-accumulator is used to compute the dot product of two vectors). By carefully applying the principle of time sharing, RNA can multiplex this single layer of physical neurons to efficiently execute both the feed-forward and back-propagation computations of an ANN while conserving area and reducing the power dissipation of the silicon. A three-layer 51-30-12 ANN is implemented in RNA to perform the ECG classification for CVD detection. This RNA hardware also allows on-chip automatic training update. A quantitative design space exploration in area, power dissipation, and execution speed between RNA and three other implementations representative of different reusable hardware strategies is presented and discussed. Compared with an equivalent software implementation in C executed on an embedded microprocessor, the RNA ASIC achieves three orders of magnitude improvements in both execution speed and energy efficiency.
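
    The time-sharing idea is easy to show in software: one physical multiply-accumulate (MAC) resource is reused serially for every logical neuron of a layer. The real RNA also multiplexes the back-propagation pass and uses fixed-point hardware; only the feed-forward principle is sketched here, with invented toy weights.

```python
# One physical MAC unit, time-shared across all logical neurons of a layer:
# each logical neuron's dot product is computed serially on the same resource.
def mac_layer(x, weights, biases):
    outputs = []
    for w_row, b in zip(weights, biases):   # one logical neuron at a time
        acc = b
        for xi, wi in zip(x, w_row):        # the single 2-input MAC, reused
            acc += xi * wi
        outputs.append(max(0.0, acc))       # simple rectifying activation
    return outputs

x = [1.0, 2.0]
weights = [[0.5, -1.0], [1.0, 1.0]]
biases = [0.0, -1.0]
print(mac_layer(x, weights, biases))        # → [0.0, 2.0]
```

    Serializing the dot products trades execution time for silicon area, which is exactly the design point the RNA exploration quantifies against fully parallel layers.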

  10. Mesmerising mirror neurons.

    Science.gov (United States)

    Heyes, Cecilia

    2010-06-01

    Mirror neurons have been hailed as the key to understanding social cognition. I argue that three currents of thought, relating to evolution, atomism and telepathy, have magnified the perceived importance of mirror neurons. When they are understood to be a product of associative learning, rather than an adaptation for social cognition, mirror neurons are no longer mesmerising, but they continue to raise important questions about both the psychology of science and the neural bases of social cognition. Copyright 2010 Elsevier Inc. All rights reserved.

  11. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  12. Distinct neuronal populations in the basal forebrain encode motivational salience and movement

    Directory of Open Access Journals (Sweden)

    Irene eAvila

    2014-12-01

    Full Text Available The basal forebrain (BF) is one of the largest cortically-projecting neuromodulatory systems in the mammalian brain, and plays a key role in attention, arousal, learning and memory. The cortically projecting BF neurons, comprising mainly magnocellular cholinergic and GABAergic neurons, are widely distributed across several brain regions that spatially overlap with the ventral striatopallidal system at the ventral pallidum (VP). As a first step toward untangling the respective functions of the spatially overlapping BF and VP systems, the goal of this study was to comprehensively characterize the behavioral correlates and physiological properties of heterogeneous neuronal populations in the BF region. We found that, while rats performed a reward-biased simple reaction time task, distinct neuronal populations encode either motivational salience or movement information. The motivational salience of attended stimuli is encoded by phasic bursting activity of a large population of slow-firing neurons that have large, broad, and complex action potential waveforms. In contrast, two other separate groups of neurons encode movement-related information, and respectively increase and decrease firing rates while rats maintained fixation. These two groups of neurons mostly have higher firing rates and small, narrow action potential waveforms. These results support the conclusion that multiple neurophysiologically distinct neuronal populations in the BF region operate independently of each other as parallel functional circuits. These observations also caution against interpreting neuronal activity in this region as a homogeneous population reflecting the function of either BF or VP alone. We suggest that the salience- and movement-related neuronal populations likely correspond to BF corticopetal neurons and VP neurons, respectively.

  13. Chronic restraint stress promotes learning and memory impairment due to enhanced neuronal endoplasmic reticulum stress in the frontal cortex and hippocampus in male mice.

    Science.gov (United States)

    Huang, Rong-Rong; Hu, Wen; Yin, Yan-Yan; Wang, Yu-Chan; Li, Wei-Ping; Li, Wei-Zu

    2015-02-01

    Chronic stress has been implicated in many types of neurodegenerative diseases, such as Alzheimer's disease (AD). In our previous study, we demonstrated that chronic restraint stress (CRS) induced reactive oxygen species (ROS) overproduction and oxidative damage in the frontal cortex and hippocampus in mice. In the present study, we investigated the effects of CRS (over a period of 8 weeks) on learning and memory impairment and endoplasmic reticulum (ER) stress in the frontal cortex and hippocampus in male mice. The Morris water maze was used to investigate the effects of CRS on learning and memory impairment. Immunohistochemistry and immunoblot analysis were also used to determine the expression levels of protein kinase C α (PKCα), 78 kDa glucose-regulated protein (GRP78), C/EBP-homologous protein (CHOP) and mesencephalic astrocyte-derived neurotrophic factor (MANF). The results revealed that CRS significantly accelerated learning and memory impairment, and induced neuronal damage in the frontal cortex and hippocampus CA1 region. Moreover, CRS significantly increased the expression of PKCα, CHOP and MANF, and decreased that of GRP78 in the frontal cortex and hippocampus. Our data suggest that exposure to CRS (for 8 weeks) significantly accelerates learning and memory impairment, and the mechanisms involved may be related to ER stress in the frontal cortex and hippocampus.

  14. 基于人工神经网络的并行强化学习自适应路径规划%Application of Parallel Reinforcement Learning Based on Artificial Neural Network to Adaptive Path Planning

    Institute of Scientific and Technical Information of China (English)

    耿晓龙; 李长江

    2011-01-01

    Reinforcement learning is an important class of learning techniques that learn to perform a task through repeated trial-and-error interactions with a knowledge-poor environment, building a mapping from environmental states to actions. By using the feedback of an artificial neural network to adjust the weights, and combining this with a parallel reinforcement learning algorithm of high learning efficiency, an applicable method of parallel reinforcement learning based on artificial neural networks is proposed. Simulation experiments verify the convergence of the iterative process and the feasibility of the method, which effectively accomplishes path learning.
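
    A minimal tabular stand-in for the path-planning learner: Q-learning on a one-dimensional corridor, with a lookup table in place of the paper's neural-network function approximator and without the parallel variant. The environment and constants are invented for illustration.

```python
import random
random.seed(3)

# Tabular Q-learning on a 1-D corridor: the agent starts at cell 0 and must
# learn, by trial and error, to walk right until it reaches the goal cell.
N, GOAL = 6, 5
q = [[0.0, 0.0] for _ in range(N)]          # actions: 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

for _ in range(500):                        # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action; ties break toward "right"
        a = random.randrange(2) if random.random() < EPS else int(q[s][1] >= q[s][0])
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

policy = [int(q[s][1] >= q[s][0]) for s in range(N - 1)]
print(policy)   # learned greedy policy: move right in every cell
```

    In the paper, the Q-table is replaced by an artificial neural network whose weights are adjusted from the same temporal-difference feedback, and learning is parallelized across learners.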

  15. Spiking neurons in a hierarchical self-organizing map model can learn to develop spatial and temporal properties of entorhinal grid cells and hippocampal place cells.

    Directory of Open Access Journals (Sweden)

    Praveen K Pilly

    Full Text Available Medial entorhinal grid cells and hippocampal place cells provide neural correlates of spatial representation in the brain. A place cell typically fires whenever an animal is present in one or more spatial regions, or places, of an environment. A grid cell typically fires in multiple spatial regions that form a regular hexagonal grid structure extending throughout the environment. Different grid and place cells prefer spatially offset regions, with their firing fields increasing in size along the dorsoventral axes of the medial entorhinal cortex and hippocampus. The spacing between neighboring fields for a grid cell also increases along the dorsoventral axis. This article presents a neural model whose spiking neurons operate in a hierarchy of self-organizing maps, each obeying the same laws. This spiking GridPlaceMap model simulates how grid cells and place cells may develop. It responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with one or more firing fields that match neurophysiological data about these cells and their development in juvenile rats. The place cells represent much larger spaces than the grid cells, which enable them to support navigational behaviors. Both self-organizing maps amplify and learn to categorize the most frequent and energetic co-occurrences of their inputs. The current results build upon a previous rate-based model of grid and place cell learning, and thus illustrate a general method for converting rate-based adaptive neural models, without the loss of any of their analog properties, into models whose cells obey spiking dynamics. New properties of the spiking GridPlaceMap model include the appearance of theta band modulation. The spiking model also opens a path for implementation in brain-emulating nanochips comprised of networks of noisy spiking neurons with multiple-level adaptive weights for controlling autonomous
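
    The self-organizing-map mechanism the model builds on can be shown in rate-based miniature: units compete for each input, and the winner and its map neighbors move toward it, so nearby units come to prefer nearby inputs. This sketch (with invented sizes and learning rates) is a generic SOM, not the spiking GridPlaceMap itself.

```python
import random
random.seed(4)

# Minimal 1-D self-organizing map over 2-D "position" inputs.
UNITS = 10
w = [[random.random(), random.random()] for _ in range(UNITS)]

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

for step in range(4000):
    x = [random.random(), random.random()]            # random position sample
    win = min(range(UNITS), key=lambda i: dist2(w[i], x))
    for i in range(UNITS):                            # winner + neighbors move
        h = 1.0 if i == win else (0.5 if abs(i - win) == 1 else 0.0)
        lr = 0.1 * h
        w[i][0] += lr * (x[0] - w[i][0])
        w[i][1] += lr * (x[1] - w[i][1])

# Units adjacent on the map should lie closer together in input space
# than unit pairs chosen without regard to map position.
near = sum(dist2(w[i], w[i + 1]) for i in range(UNITS - 1)) / (UNITS - 1)
allpair = sum(dist2(w[i], w[j]) for i in range(UNITS)
              for j in range(i + 1, UNITS)) / (UNITS * (UNITS - 1) / 2)
print(near < allpair)
```

    The GridPlaceMap replaces these rate-based units with spiking neurons obeying the same competitive laws at both the grid-cell and place-cell levels of the hierarchy.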

  16. BAF53b, a Neuron-Specific Nucleosome Remodeling Factor, Is Induced after Learning and Facilitates Long-Term Memory Consolidation.

    Science.gov (United States)

    Yoo, Miran; Choi, Kwang-Yeon; Kim, Jieun; Kim, Mujun; Shim, Jaehoon; Choi, Jun-Hyeok; Cho, Hye-Yeon; Oh, Jung-Pyo; Kim, Hyung-Su; Kaang, Bong-Kiun; Han, Jin-Hee

    2017-03-29

    Although epigenetic mechanisms of gene expression regulation have recently been implicated in memory consolidation and persistence, the role of nucleosome-remodeling is largely unexplored. Recent studies show that the functional loss of BAF53b, a postmitotic neuron-specific subunit of the BAF nucleosome-remodeling complex, results in the deficit of consolidation of hippocampus-dependent memory and cocaine-associated memory in the rodent brain. However, it is unclear whether BAF53b expression is regulated during memory formation and how BAF53b regulates fear memory in the amygdala, a key brain site for fear memory encoding and storage. To address these questions, we used viral vector approaches to either decrease or increase BAF53b function specifically in the lateral amygdala of adult mice in an auditory fear conditioning paradigm. Knockdown of Baf53b before training disrupted long-term memory formation with no effect on short-term memory, basal synaptic transmission, and spine structures. We observed in our qPCR analysis that BAF53b was induced in the lateral amygdala neurons at the late consolidation phase after fear conditioning. Moreover, transient BAF53b overexpression led to persistently enhanced memory formation, which was accompanied by an increase in thin-type spine density. Together, our results provide evidence that BAF53b is induced after learning, and show that such an increase of BAF53b level facilitates memory consolidation, likely by regulating learning-related spine structural plasticity.SIGNIFICANCE STATEMENT Recent works in the rodent brain begin to link a nucleosome remodeling-dependent epigenetic mechanism to memory consolidation. Here we show that BAF53b, an epigenetic factor involved in nucleosome remodeling, is induced in the lateral amygdala neurons at the late phase of consolidation after fear conditioning. Using specific gene knockdown or overexpression approaches, we identify the critical role of BAF53b in the lateral amygdala neurons for memory

  17. Neuronal boost to evolutionary dynamics.

    Science.gov (United States)

    de Vladar, Harold P; Szathmáry, Eörs

    2015-12-06

    Standard evolutionary dynamics is limited by the constraints of the genetic system. A central message of evolutionary neurodynamics is that evolutionary dynamics in the brain can happen in a neuronal niche in real time, despite the fact that neurons do not reproduce. We show that Hebbian learning and structural synaptic plasticity broaden the capacity for informational replication and guided variability provided a neuronally plausible mechanism of replication is in place. The synergy between learning and selection is more efficient than the equivalent search by mutation selection. We also consider asymmetric landscapes and show that the learning weights become correlated with the fitness gradient. That is, the neuronal complexes learn the local properties of the fitness landscape, resulting in the generation of variability directed towards the direction of fitness increase, as if mutations in a genetic pool were drawn such that they would increase reproductive success. Evolution might thus be more efficient within evolved brains than among organisms out in the wild.
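
    The core argument, that selection over variation biased by a Hebbian-style memory of successful directions can outperform blind mutation-selection, can be caricatured in a few lines. Everything below is an illustrative toy with invented constants, not the paper's neuronal replicator model.

```python
import random
random.seed(5)

# Toy contrast between blind mutation-selection and selection with a
# Hebbian-style memory of directions that previously increased fitness
# ("guided variability").
def fitness(x):
    return -sum(xi * xi for xi in x)            # single peak at the origin

def evolve(guided, steps=200):
    x, memory = [3.0, 3.0], [0.0, 0.0]
    for _ in range(steps):
        noise = [random.gauss(0, 0.05) for _ in x]
        step = [m + n for m, n in zip(memory, noise)] if guided else noise
        cand = [xi + si for xi, si in zip(x, step)]
        if fitness(cand) > fitness(x):          # selection keeps improvements
            x = cand
            if guided:                          # reinforce the successful direction
                memory = [0.5 * m + 0.5 * s for m, s in zip(memory, step)]
    return fitness(x)

start = fitness([3.0, 3.0])
g, b = evolve(True), evolve(False)
print(g > start, b > start)   # both improve; the guided search is typically faster
```

    The learned `memory` vector plays the role of the weights that, in the paper's analysis, become correlated with the fitness gradient and so direct variability toward fitness increase.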

  18. Neuronal boost to evolutionary dynamics

    Science.gov (United States)

    de Vladar, Harold P.; Szathmáry, Eörs

    2015-01-01

    Standard evolutionary dynamics is limited by the constraints of the genetic system. A central message of evolutionary neurodynamics is that evolutionary dynamics in the brain can happen in a neuronal niche in real time, despite the fact that neurons do not reproduce. We show that Hebbian learning and structural synaptic plasticity broaden the capacity for informational replication and guided variability provided a neuronally plausible mechanism of replication is in place. The synergy between learning and selection is more efficient than the equivalent search by mutation selection. We also consider asymmetric landscapes and show that the learning weights become correlated with the fitness gradient. That is, the neuronal complexes learn the local properties of the fitness landscape, resulting in the generation of variability directed towards the direction of fitness increase, as if mutations in a genetic pool were drawn such that they would increase reproductive success. Evolution might thus be more efficient within evolved brains than among organisms out in the wild. PMID:26640653

  19. Study on Parallel Computing

    Institute of Scientific and Technical Information of China (English)

    Guo-Liang Chen; Guang-Zhong Sun; Yun-Quan Zhang; Ze-Yao Mo

    2006-01-01

    In this paper, we present a general survey of parallel computing. The main contents include the parallel computer system, which is the hardware platform of parallel computing; the parallel algorithm, which is its theoretical base; and parallel programming, which is its software support. After that, we also introduce some parallel applications and enabling technologies. We argue that parallel computing research should form an integrated "architecture - algorithm - programming - application" methodology; only in this way can parallel computing research develop continuously and become more realistic.

  20. Prospective Coding by Spiking Neurons.

    Directory of Open Access Journals (Sweden)

    Johanni Brea

    2016-06-01

    Full Text Available Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions on a single neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron's firing rate. The plasticity rule is a form of spike-timing-dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the timescale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time-varying input, learning to predict the next stimulus in a delayed paired-associate task, and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ).
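
    The connection to TD(λ) can be made concrete. Below is a minimal sketch of tabular TD(λ) with accumulating eligibility traces on a hypothetical three-state chain; all names and parameter values are illustrative and not taken from the paper.

```python
# Minimal tabular TD(lambda) with accumulating eligibility traces.
# Illustrative toy task: a chain s0 -> s1 -> s2 (terminal),
# with reward 1 delivered on the transition into s2.

def td_lambda(episodes=100, alpha=0.1, gamma=0.9, lam=0.8):
    V = [0.0, 0.0, 0.0]          # state-value estimates; s2 is terminal
    for _ in range(episodes):
        e = [0.0, 0.0, 0.0]      # eligibility traces, reset each episode
        for s, s_next, r in [(0, 1, 0.0), (1, 2, 1.0)]:
            v_next = 0.0 if s_next == 2 else V[s_next]
            delta = r + gamma * v_next - V[s]   # TD error
            e[s] += 1.0                          # accumulating trace
            for i in range(3):                   # broadcast error to traced states
                V[i] += alpha * delta * e[i]
                e[i] *= gamma * lam              # trace decay
    return V
```

    The eligibility trace lets the error at the rewarded step reach back to earlier states, playing a role analogous to the seconds-wide association window described in the abstract; the estimates approach V ≈ [0.9, 1.0, 0.0] in the limit.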

  1. Dopaminergic neurons write and update memories with cell-type-specific rules.

    Science.gov (United States)

    Aso, Yoshinori; Rubin, Gerald M

    2016-07-21

    Associative learning is thought to involve parallel and distributed mechanisms of memory formation and storage. In Drosophila, the mushroom body (MB) is the major site of associative odor memory formation. Previously we described the anatomy of the adult MB and defined 20 types of dopaminergic neurons (DANs) that each innervate distinct MB compartments (Aso et al., 2014a, 2014b). Here we compare the properties of memories formed by optogenetic activation of individual DAN cell types. We found extensive differences in training requirements for memory formation, decay dynamics, storage capacity and flexibility to learn new associations. Even a single DAN cell type can either write or reduce an aversive memory, or write an appetitive memory, depending on when it is activated relative to odor delivery. Our results show that different learning rules are executed in seemingly parallel memory systems, providing multiple distinct circuit-based strategies to predict future events from past experiences.

  2. A Calcium- and Diacylglycerol-Stimulated Protein Kinase C (PKC), Caenorhabditis elegans PKC-2, Links Thermal Signals to Learned Behavior by Acting in Sensory Neurons and Intestinal Cells.

    Science.gov (United States)

    Land, Marianne; Rubin, Charles S

    2017-10-01

    Ca(2+)- and diacylglycerol (DAG)-activated protein kinase C (cPKC) promotes learning and behavioral plasticity. However, knowledge of in vivo regulation and exact functions of cPKCs that affect behavior is limited. We show that PKC-2, a Caenorhabditis elegans cPKC, is essential for a complex behavior, thermotaxis. C. elegans memorizes a nutrient-associated cultivation temperature (Tc) and migrates along the Tc within a 17 to 25°C gradient. pkc-2 gene disruption abrogated thermotaxis; a PKC-2 transgene, driven by endogenous pkc-2 promoters, restored thermotaxis behavior in pkc-2(-/-) animals. Cell-specific manipulation of PKC-2 activity revealed that thermotaxis is controlled by cooperative PKC-2-mediated signaling in both AFD sensory neurons and intestinal cells. Cold-directed migration (cryophilic drive) precedes Tc tracking during thermotaxis. Analysis of temperature-directed behaviors elicited by persistent PKC-2 activation or inhibition in AFD (or intestine) disclosed that PKC-2 regulates initiation and duration of cryophilic drive. In AFD neurons, PKC-2 is a Ca(2+) sensor and signal amplifier that operates downstream from cyclic GMP-gated cation channels and distal guanylate cyclases. UNC-18, which regulates neurotransmitter and neuropeptide release from synaptic vesicles, is a critical PKC-2 effector in AFD. UNC-18 variants, created by mutating Ser(311) or Ser(322), disrupt thermotaxis and suppress PKC-2-dependent cryophilic migration. Copyright © 2017 American Society for Microbiology.

  3. Automated identification of neurons and their locations

    CERN Document Server

    Inglis, Andrew; Roe, Dan L; Stanley, H E; Rosene, Douglas L; Urbanc, Brigita

    2007-01-01

    Individual locations of many neuronal cell bodies (>10^4) are needed to enable statistically significant measurements of spatial organization within the brain such as nearest-neighbor and microcolumnarity measurements. In this paper, we introduce an Automated Neuron Recognition Algorithm (ANRA) which obtains the (x,y) location of individual neurons within digitized images of Nissl-stained, 30 micron thick, frozen sections of the cerebral cortex of the Rhesus monkey. Identification of neurons within such Nissl-stained sections is inherently difficult due to the variability in neuron staining, the overlap of neurons, the presence of partial or damaged neurons at tissue surfaces, and the presence of non-neuron objects, such as glial cells, blood vessels, and random artifacts. To overcome these challenges and identify neurons, ANRA applies a combination of image segmentation and machine learning. The steps involve active contour segmentation to find outlines of potential neuron cell bodies followed by artificial ...

  4. Two Pairs of Mushroom Body Efferent Neurons Are Required for Appetitive Long-Term Memory Retrieval in Drosophila

    Directory of Open Access Journals (Sweden)

    Pierre-Yves Plaçais

    2013-11-01

    Full Text Available One of the challenges facing memory research is to combine network- and cellular-level descriptions of memory encoding. In this context, Drosophila offers the opportunity to decipher, down to single-cell resolution, memory-relevant circuits in connection with the mushroom bodies (MBs, prominent structures for olfactory learning and memory. Although the MB-afferent circuits involved in appetitive learning were recently described, the circuits underlying appetitive memory retrieval remain unknown. We identified two pairs of cholinergic neurons efferent from the MB α vertical lobes, named MB-V3, that are necessary for the retrieval of appetitive long-term memory (LTM. Furthermore, LTM retrieval was correlated to an enhanced response to the rewarded odor in these neurons. Strikingly, though, silencing the MB-V3 neurons did not affect short-term memory (STM retrieval. This finding supports a scheme of parallel appetitive STM and LTM processing.

  5. STDP in recurrent neuronal networks

    Directory of Open Access Journals (Sweden)

    Matthieu Gilson

    2010-09-01

    Full Text Available Recent results about spike-timing-dependent plasticity (STDP in recurrently connected neurons are reviewed, with a focus on the relationship between the weight dynamics and the emergence of network structure. In particular, the evolution of synaptic weights in the two cases of incoming connections for a single neuron and recurrent connections are compared and contrasted. A theoretical framework is used that is based upon Poisson neurons with a temporally inhomogeneous firing rate and the asymptotic distribution of weights generated by the learning dynamics. Different network configurations examined in recent studies are discussed and an overview of the current understanding of STDP in recurrently connected neuronal networks is presented.
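
    For readers unfamiliar with the pair-based rule reviewed here, a minimal sketch of the standard exponential STDP window follows; the amplitudes and time constant are illustrative placeholders, not values from the reviewed studies.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt = t_post - t_pre (milliseconds). Pre-before-post (dt > 0)
    potentiates; post-before-pre (dt < 0) depresses; both effects
    decay exponentially with |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)     # LTP branch
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)    # LTD branch
    return 0.0
```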

  6. Phenotypic checkpoints regulate neuronal development.

    Science.gov (United States)

    Ben-Ari, Yehezkel; Spitzer, Nicholas C

    2010-11-01

    Nervous system development proceeds by sequential gene expression mediated by cascades of transcription factors in parallel with sequences of patterned network activity driven by receptors and ion channels. These sequences are cell type- and developmental stage-dependent and modulated by paracrine actions of substances released by neurons and glia. How and to what extent these sequences interact to enable neuronal network development is not understood. Recent evidence demonstrates that CNS development requires intermediate stages of differentiation providing functional feedback that influences gene expression. We suggest that embryonic neuronal functions constitute a series of phenotypic checkpoint signatures; neurons failing to express these functions are delayed or developmentally arrested. Such checkpoints are likely to be a general feature of neuronal development and constitute presymptomatic signatures of neurological disorders when they go awry.

  7. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  8. The island model for parallel implementation of evolutionary algorithm of Population-Based Incremental Learning (PBIL) optimization; Modelo de ilhas para a implementacao paralela do algoritmo evolucionario de otimizacao PBIL

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alan M.M. de; Schirru, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear. E-mail: alan@lmp.ufrj.br; schirru@lmp.ufrj.br

    2000-07-01

    Genetic algorithms are biologically motivated adaptive systems which have been used, with good results, for function optimization. The purpose of this work is to introduce a new parallelization method to be applied to the Population-Based Incremental Learning (PBIL) algorithm. PBIL combines standard genetic algorithm mechanisms with simple competitive learning and has been successfully used in combinatorial optimization problems. The development of this algorithm is aimed at its application to the reload optimization of PWR nuclear reactors. Tests have been performed with combinatorial optimization problems similar to the reload problem. Results are compared to those of the serial PBIL, showing the new method's superiority and its viability as a tool for solving the nuclear core reload problem. (author)
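
    As background, a serial PBIL iteration on the OneMax toy problem can be sketched as follows; the island model of the paper would run several such loops in parallel and periodically exchange probability-vector information between islands. All parameter values below are illustrative.

```python
import random

def pbil_onemax(n_bits=20, pop=30, gens=60, lr=0.1, seed=1):
    """Serial PBIL maximizing the number of 1-bits (OneMax).
    An island-model parallelization would run several of these
    loops and periodically mix their probability vectors."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # probability vector
    best = None
    for _ in range(gens):
        population = [[1 if rng.random() < pi else 0 for pi in p]
                      for _ in range(pop)]
        population.sort(key=sum, reverse=True)
        elite = population[0]               # best individual this generation
        if best is None or sum(elite) > sum(best):
            best = elite
        # shift the probability vector toward the elite's bits
        p = [pi * (1 - lr) + bi * lr for pi, bi in zip(p, elite)]
    return p, best
```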

  9. The role of mirror neurons in language acquisition and evolution.

    Science.gov (United States)

    Behme, Christina

    2014-04-01

    I argue that Cook et al.'s attack on the genetic hypothesis of mirror neurons misses its target because the authors overlook the point that genetics may specify how neurons learn, not what they learn. Paying more attention to recent work linking mirror neurons to language acquisition and evolution would strengthen Cook et al.'s arguments against a rigid genetic hypothesis.

  10. Hebbian learning in a model with dynamic rate-coded neurons: an alternative to the generative model approach for learning receptive fields from natural scenes.

    Science.gov (United States)

    Hamker, Fred H; Wiltschut, Jan

    2007-09-01

    Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. We here explore an alternative model of feedback. It is derived from studies of attention and is thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar to those found in V1. Due to presynaptic inhibition, the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
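
    The Hebbian component can be illustrated in isolation. The sketch below uses Oja's normalized Hebbian rule on two-dimensional inputs, so the weight vector converges to the dominant input correlation; it is a generic illustration, not the paper's full dynamic model with presynaptic inhibition and feedback gain.

```python
import random

def oja_learn(steps=5000, eta=0.01, seed=0):
    """Hebbian learning with Oja's normalization on 2-D inputs drawn
    mostly along the direction (1, 1): the weight vector aligns with
    the dominant correlation while its norm stays bounded near 1."""
    rng = random.Random(seed)
    w = [0.5, 0.5]
    for _ in range(steps):
        s = rng.gauss(0.0, 1.0)             # shared signal along (1, 1)
        x = [s + 0.1 * rng.gauss(0, 1), s + 0.1 * rng.gauss(0, 1)]
        y = w[0] * x[0] + w[1] * x[1]       # linear unit output
        # Oja's rule: Hebbian term minus a decay that bounds |w|
        w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w
```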

  11. Chronic ethanol exposure combined with high fat diet up-regulates P2X7 receptors that parallels neuroinflammation and neuronal loss in C57BL/6J mice.

    Science.gov (United States)

    Asatryan, Liana; Khoja, Sheraz; Rodgers, Kathleen E; Alkana, Ronald L; Tsukamoto, Hidekazu; Davies, Daryl L

    2015-08-15

    The present investigation tested the role of ATP-activated P2X7 receptors (P2X7Rs) in alcohol-induced brain damage using a model that combines intragastric (iG) ethanol feeding and high fat diet in C57BL/6J mice (Hybrid). The Hybrid paradigm caused increased levels of pro-inflammatory markers, changes in microglia and astrocytes, reduced levels of neuronal marker NeuN and increased P2X7R expression in ethanol-sensitive brain regions. Observed changes in P2X7R and NeuN expression were more pronounced in Hybrid paradigm with inclusion of additional weekly binges. In addition, high fat diet during Hybrid exposure aggravated the increase in P2X7R expression and activation of glial cells.

  12. Spiking neuron network Helmholtz machine.

    Science.gov (United States)

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, a complete description of how one of those algorithms (or a novel algorithm) could be implemented in the brain is still lacking. There have been many proposed solutions that address how neurons can perform optimal inference, but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz machine. The Helmholtz machine is amenable to neural implementation because the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.

  13. Crystallization and preliminary X-ray diffraction analysis of calexcitin from Loligo pealei: a neuronal protein implicated in learning and memory

    Energy Technology Data Exchange (ETDEWEB)

    Beaven, G. D. E.; Erskine, P. T.; Wright, J. N.; Mohammed, F.; Gill, R.; Wood, S. P. [School of Biological Sciences, University of Southampton, Bassett Crescent East, Southampton SO16 7PX (United Kingdom); Vernon, J.; Giese, K. P. [Wolfson Institute for Biomedical Research, University College London, Cruciform Building, Gower Street, London WC1E 6BT (United Kingdom); Cooper, J. B., E-mail: j.b.cooper@soton.ac.uk [School of Biological Sciences, University of Southampton, Bassett Crescent East, Southampton SO16 7PX (United Kingdom)

    2005-10-01

    Recombinant squid calexcitin has been crystallized using the hanging-drop vapour-diffusion technique in the orthorhombic space group P2₁2₁2₁. The neuronal protein calexcitin from the long-finned squid Loligo pealei has been expressed in Escherichia coli and purified to homogeneity. Calexcitin is a 22 kDa calcium-binding protein that becomes up-regulated in invertebrates following Pavlovian conditioning and is likely to be involved in signal transduction events associated with learning and memory. Recombinant squid calexcitin has been crystallized using the hanging-drop vapour-diffusion technique in the orthorhombic space group P2₁2₁2₁. The unit-cell parameters of a = 46.6, b = 69.2, c = 134.8 Å suggest that the crystals contain two monomers per asymmetric unit and have a solvent content of 49%. This crystal form diffracts X-rays to at least 1.8 Å resolution and yields data of high quality using synchrotron radiation.

  14. Habituation: a non-associative learning rule design for spiking neurons and an autonomous mobile robots implementation.

    Science.gov (United States)

    Cyr, André; Boukadoum, Mounir

    2013-03-01

    This paper presents a novel bio-inspired habituation function for robots under control by an artificial spiking neural network. This non-associative learning rule is modelled at the synaptic level and validated through robotic behaviours in reaction to different stimuli patterns in a dynamical virtual 3D world. Habituation is minimally represented to show an attenuated response after exposure to and perception of persistent external stimuli. Based on current neurosciences research, the originality of this rule includes modulated response to variable frequencies of the captured stimuli. Filtering out repetitive data from the natural habituation mechanism has been demonstrated to be a key factor in the attention phenomenon, and inserting such a rule operating at multiple temporal dimensions of stimuli increases a robot's adaptive behaviours by ignoring broader contextual irrelevant information.
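
    A minimal habituation mechanism of the kind described, response attenuation under repeated stimulation with spontaneous recovery during quiet periods, can be sketched as follows; the class name and constants are illustrative, not taken from the robot implementation.

```python
class HabituatingSynapse:
    """Minimal habituation sketch: the synaptic response attenuates
    with each repeated stimulus and recovers during quiet periods."""

    def __init__(self, decay=0.7, recovery=0.05):
        self.strength = 1.0
        self.decay = decay          # fraction of strength retained per stimulus
        self.recovery = recovery    # recovery per quiet time step

    def stimulate(self):
        response = self.strength
        self.strength *= self.decay     # habituate: weaker next response
        return response

    def rest(self, steps=1):
        for _ in range(steps):          # spontaneous recovery toward 1.0
            self.strength = min(1.0, self.strength + self.recovery)
```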

  15. Computing Parallelism in Discourse

    CERN Document Server

    Gardent, C; Gardent, Claire; Kohlhase, Michael

    1997-01-01

    Although much has been said about parallelism in discourse, a formal, computational theory of parallelism structure is still outstanding. In this paper, we present a theory which, given two parallel utterances, predicts which are the parallel elements. The theory consists of a sorted, higher-order abductive calculus, and we show that it reconciles the insights of discourse theories of parallelism with those of Higher-Order Unification approaches to discourse semantics, thereby providing a natural framework in which to capture the effect of parallelism on discourse semantics.

  16. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms, such as those for linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
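
    A classic algorithm in this family is odd-even transposition sort for the linear-array model. The sequential sketch below performs the same compare-exchange rounds; within each round every comparison touches disjoint pairs, so on a linear array of n processors a round runs in a single parallel step.

```python
def odd_even_transposition_sort(a):
    """Odd-even transposition sort: n rounds of compare-exchange,
    alternating between even-indexed and odd-indexed pairs."""
    a = list(a)
    n = len(a)
    for rnd in range(n):
        start = rnd % 2                 # alternate even/odd phases
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:         # compare-exchange on disjoint pairs
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```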

  17. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  18. Mirror neurons

    National Research Council Canada - National Science Library

    Rubia Vila, Francisco José

    2011-01-01

    Mirror neurons were recently discovered in frontal brain areas of the monkey. They are activated when the animal makes a specific movement, but also when the animal observes the same movement in another animal...

  19. Performance of a Single Quantum Neuron

    Institute of Scientific and Technical Information of China (English)

    LI Fei; ZHAO Shengmei; ZHENG Baoyu

    2005-01-01

    Quantum neural networks (QNNs) are a promising area in the field of quantum computing and quantum information processing. A novel model for a quantum neuron is described, a quantum learning algorithm is proposed, and its convergence property is investigated. It is shown that the quantum neuron (QN) has the same convergence property as a conventional neuron (CN) but can be trained faster. The computational power of the quantum neuron is also explored. Numerical and graphical results show that this single quantum neuron can implement the Walsh-Hadamard transformation, perform the XOR function, which is unrealizable with a single classical neuron, and can eliminate the necessity of building a network of neurons to obtain nonlinear mapping.
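
    The Walsh-Hadamard transformation mentioned above can be computed classically with a butterfly recursion; the sketch below implements the unnormalized fast transform as a classical illustration of the operation, not the quantum neuron model itself.

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform (unnormalized).
    Input length must be a power of two."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly step
        h *= 2
    return a
```

    For example, fwht([1, 0, 1, 0]) returns [2, 2, 0, 0].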

  20. Parallel processing ITS

    Energy Technology Data Exchange (ETDEWEB)

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

    This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.
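
    The master/slave structure can be sketched with a work queue. The toy below farms out Monte Carlo batches to workers, each with its own seeded random stream (one simple answer to the parallel random-number-generation issue the report mentions), and reduces the partial tallies; it is an illustrative sketch, not the ITS code.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def worker(task):
    """Slave: run one batch of Monte Carlo samples with its own
    independent, seeded random stream; return the partial tally."""
    seed, n = task
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def master(n_tasks=8, n_per_task=10_000):
    """Master: farm out batches, then combine partial tallies
    into an estimate of pi."""
    tasks = [(seed, n_per_task) for seed in range(n_tasks)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        total_hits = sum(pool.map(worker, tasks))
    return 4.0 * total_hits / (n_tasks * n_per_task)
```

    Because each task carries its own seed, the combined result is deterministic regardless of how the scheduler interleaves the workers.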

  1. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  2. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  3. Developing Parallel Programs

    Directory of Open Access Journals (Sweden)

    Ranjan Sen

    2012-09-01

    Full Text Available Parallel programming is an extension of sequential programming; today, it is becoming the mainstream paradigm in day-to-day information processing. Its aim is to build the fastest programs on parallel computers. The methodologies for developing a parallel program can be put into integrated frameworks. Development focuses on algorithm, languages, and how the program is deployed on the parallel computer.

  4. Influence of Sports Anxiety on the Learning of Parallel Bars

    Institute of Scientific and Technical Information of China (English)

    陈治强

    2015-01-01

    Students' anxiety levels were surveyed with a sports anxiety questionnaire and compared with their scores on parallel bar skills, in order to explore the influence of sports anxiety on the learning of parallel bar skills. The results showed significant differences in cognitive anxiety, somatic anxiety and state self-confidence among students at different skill-score levels: students with better skill scores had relatively lower cognitive and somatic anxiety and a higher level of state self-confidence.

  5. Massively parallel evolutionary computation on GPGPUs

    CERN Document Server

    Tsutsui, Shigeyoshi

    2013-01-01

    Evolutionary algorithms (EAs) are metaheuristics that learn from natural collective behavior and are applied to solve optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources. Therefore, there have been many attempts to develop platforms for running parallel EAs using multicore machines, massively parallel cluster machines, or grid computing environments. Recent advances in general-purpose computing on graphics processing units (GPGPU) have opened u

  6. A Scalable Parallel Reinforcement Learning Method Based on Intelligent Scheduling

    Institute of Scientific and Technical Information of China (English)

    刘全; 傅启明; 杨旭东; 荆玲; 李瑾; 李娇

    2013-01-01

    Aiming at the "curse of dimensionality" problem of reinforcement learning in large or continuous state spaces, a scalable reinforcement learning method, IS-SRL, is proposed on the basis of a divide-and-conquer strategy, and its convergence is proved. In this method, a learning problem with a large or continuous state space is divided into smaller subproblems so that each subproblem can be learned independently in memory. After a cycle of learning, the next subproblem is swapped in to continue the learning process. The subproblems exchange information during the swap so that the learning process eventually converges to the optimum. The order in which subproblems are executed significantly affects the efficiency of learning. Therefore, we propose an efficient scheduling algorithm which exploits the distribution of value-function backups in reinforcement learning and weights the priorities of multiple scheduling strategies. This scheduling algorithm ensures that computation is focused on regions of the problem space which are expected to be maximally productive. To further expedite the learning process, a parallel scheduling architecture that can flexibly allocate learning tasks between learning agents is proposed. A new method, IS-SPRL, is obtained by blending the proposed architecture into IS-SRL. The experimental results show that learning based on this scheduling architecture has faster convergence speed and good scalability.
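
    The idea of focusing computation on maximally productive regions is closely related to prioritized sweeping in reinforcement learning. The sketch below orders value backups on a deterministic chain by Bellman error; it is a generic illustration of that scheduling principle, not the IS-SRL/IS-SPRL algorithm itself.

```python
import heapq

def prioritized_sweeping(n=10, gamma=0.9, theta=1e-6):
    """Prioritized value updates on a deterministic chain:
    state i steps to i+1 with reward 0; the last state yields
    reward 1. States are processed in order of Bellman error, so
    computation concentrates where updates are most productive."""
    V = [0.0] * n

    def backup(s):
        if s == n - 1:
            return 1.0                      # terminal reward
        return gamma * V[s + 1]

    # seed the queue with every state's initial Bellman error
    pq = [(-abs(backup(s) - V[s]), s) for s in range(n)]
    heapq.heapify(pq)
    while pq:
        neg_err, s = heapq.heappop(pq)
        if -neg_err < theta:                # stale or negligible error
            continue
        V[s] = backup(s)
        if s > 0:                           # predecessor's error changed
            err = abs(backup(s - 1) - V[s - 1])
            if err >= theta:
                heapq.heappush(pq, (-err, s - 1))
    return V
```

    On this chain the reward information propagates backwards one state per backup, and each state is updated exactly once: V[i] converges to gamma**(n-1-i).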

  7. The Function of the Glutamate-Nitric Oxide-cGMP Pathway in Brain in Vivo and Learning Ability Decrease in Parallel in Mature Compared with Young Rats

    Science.gov (United States)

    Piedrafita, Blanca; Cauli, Omar; Montoliu, Carmina; Felipo, Vicente

    2007-01-01

    Aging is associated with cognitive impairment, but the underlying mechanisms remain unclear. We have recently reported that the ability of rats to learn a Y-maze conditional discrimination task depends on the function of the glutamate-nitric oxide-cGMP pathway in brain. The aims of the present work were to assess whether the ability of rats to…

  8. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism. The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of N insertions, deletions, membership queries, and range queries...

  9. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
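
    The force-decomposition strategy can be illustrated with a toy pair potential: the full pair list is partitioned among workers, each computes a partial force array, and the master reduces by summation, reproducing the serial result. The 1/r force law and all names below are illustrative, not a real molecular dynamics potential.

```python
from itertools import combinations

def pair_forces(positions, pairs):
    """Compute partial forces for a subset of atom pairs
    (1-D toy potential: repulsive force proportional to 1/r)."""
    f = [0.0] * len(positions)
    for i, j in pairs:
        r = positions[j] - positions[i]
        force = 1.0 / r                  # toy pairwise magnitude
        f[i] -= force                    # Newton's third law: equal
        f[j] += force                    # and opposite contributions
    return f

def force_decomposition(positions, n_workers=3):
    """Split the full pair list among workers; each computes a
    partial force array, then the master reduces by summation."""
    all_pairs = list(combinations(range(len(positions)), 2))
    chunks = [all_pairs[k::n_workers] for k in range(n_workers)]
    partials = [pair_forces(positions, c) for c in chunks]  # parallelizable
    return [sum(col) for col in zip(*partials)]
```

    The per-chunk computations are independent, so in a real code each chunk would go to its own process or MPI rank; only the final elementwise sum needs communication.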

  10. Invariants for Parallel Mapping

    Institute of Scientific and Technical Information of China (English)

    YIN Yajun; WU Jiye; FAN Qinshan; HUANG Kezhi

    2009-01-01

    This paper analyzes the geometric quantities that remain unchanged during parallel mapping (i.e., mapping from a reference curved surface to a parallel surface with identical normal direction). The second gradient operator, the second class of integral theorems, the Gauss-curvature-based integral theorems, and the core property of parallel mapping are used to derive a series of parallel mapping invariants or geometrically conserved quantities. These include not only local mapping invariants but also global mapping invariants found to exist both in a curved surface and along curves on the curved surface. The parallel mapping invariants are used to identify important transformations between the reference surface and parallel surfaces. These mapping invariants and transformations have potential applications in geometry, physics, biomechanics, and mechanics in which various dynamic processes occur along or between parallel surfaces.

  11. Improvement of learning and memory abilities and motor function in rats with cerebral infarction by intracerebral transplantation of neuron-like cells derived from bone marrow stromal cells

    Institute of Scientific and Technical Information of China (English)

    Ying Wang; Yubin Deng; Ye Wang; Yan Li; Zhenzhen Hu

    2006-01-01

    BACKGROUND: Transplantation of fetal cell suspensions or blocks of fetal tissue can ameliorate nerve function after injury or disease in the central nervous system, and it has been used to treat neurodegenerative disorders caused by Parkinson disease. OBJECTIVE: To observe the effects of transplanting neuron-like cells derived from bone marrow stromal cells (rMSCs) into the brain on restoring muscle strength, balance, and learning and memory in rat models of cerebral infarction. DESIGN: A randomized controlled experiment. SETTING: Department of Pathophysiology, Zhongshan Medical College of Sun Yat-sen University. MATERIALS: Twenty-four male SD rats (3-4 weeks of age, weighing 200-220 g) were used in this study (certification number: 2001A027). METHODS: The experiments were carried out in Zhongshan Medical College of Sun Yat-sen University between December 2003 and December 2004. ① The twenty-four male SD rats were randomized into three groups with 8 rats in each: experimental group, control group and sham-operated group. Rats in the experimental group and control group were made into models of middle cerebral artery occlusion (MCAO). After being cultured in vitro, purified, digested and identified, the Fischer344 rMSCs were induced to differentiate by tanshinone IIA and locally injected into the striate cortex (area 18) of rats in the experimental group; the rats in the control group were injected with the same volume of L-DMEM basic culture medium (without serum) into the corresponding brain area. In the sham-operated group, only the muscle and vessels of the neck were separated. ② At 2 and 8 weeks after transplantation, the rats were given the screen test, prehensile-traction test, balance beam test and Morris water-maze test. ③ The survival and distribution of the induced cells in the corresponding brain area were observed in all groups with Nissl staining (toluidine blue) and hematoxylin and eosin (HE) staining. MAIN OUTCOME

  12. A parallel cholinergic brainstem pathway for enhancing locomotor drive

    Science.gov (United States)

    Smetana, Roy; Juvin, Laurent; Dubuc, Réjean; Alford, Simon

    2010-01-01

    The brainstem locomotor system is believed to be organized serially from the mesencephalic locomotor region (MLR) to reticulospinal neurons, which in turn, project to locomotor neurons in the spinal cord. In contrast, we now identify in lampreys, brainstem muscarinoceptive neurons receiving parallel inputs from the MLR and projecting back to reticulospinal cells to amplify and extend durations of locomotor output. These cells respond to muscarine with extended periods of excitation, receive direct muscarinic excitation from the MLR, and project glutamatergic excitation to reticulospinal neurons. Targeted block of muscarine receptors over these neurons profoundly reduces MLR-induced excitation of reticulospinal neurons and markedly slows MLR-evoked locomotion. Their presence forces us to rethink the organization of supraspinal locomotor control, to include a sustained feedforward loop that boosts locomotor output. PMID:20473293

  13. A machine learning methodology for the selection and classification of spontaneous spinal cord dorsum potentials allows disclosure of structured (non-random) changes in neuronal connectivity induced by nociceptive stimulation

    Directory of Open Access Journals (Sweden)

    Mario eMartin

    2015-08-01

    Full Text Available Fractal analysis of spontaneous cord dorsum potentials (CDPs) generated in the lumbosacral spinal segments has revealed that these potentials are generated by ongoing structured (non-random) neuronal activity. Studies aimed at disclosing the changes produced by nociceptive stimulation on the functional organization of the neuronal networks generating these potentials used predetermined templates to select specific classes of spontaneous CDPs. Since this procedure was time consuming and required continuous supervision, it was limited to the analysis of two types of CDPs (negative CDPs and negative-positive CDPs), thus excluding potentials that may reflect activation of other neuronal networks of presumed functional relevance. We now present a novel procedure based on machine learning that allows the efficient and unbiased selection of a variety of spontaneous CDPs with different shapes and amplitudes. The reliability and performance of the method are evaluated by analyzing the effects on the probabilities of generation of different types of spontaneous CDPs induced by the intradermic injection of small amounts of capsaicin in the anesthetized cat. The results obtained with the selection method presently described allowed detection of spontaneous CDPs with specific shapes and amplitudes that are assumed to represent the activation of functionally coupled sets of dorsal horn neurones that acquire different, structured configurations in response to nociceptive stimuli.
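
    The abstract does not specify the authors' actual classifier; as a loose illustration of unsupervised selection of waveform classes, here is a Python sketch that clusters synthetic stand-ins for negative and negative-positive CDPs with a tiny k-means. The waveform shapes, noise levels, and seeding are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)

# Synthetic stand-ins for two CDP classes (shapes are hypothetical):
# purely negative deflections vs. negative-then-positive deflections.
neg = [-np.exp(-((t - 0.3) / 0.1) ** 2) + 0.05 * rng.standard_normal(50)
       for _ in range(20)]
negpos = [-np.exp(-((t - 0.3) / 0.1) ** 2) + np.exp(-((t - 0.6) / 0.1) ** 2)
          + 0.05 * rng.standard_normal(50) for _ in range(20)]
X = np.array(neg + negpos)

def kmeans(X, k=2, iters=25):
    """Minimal k-means on raw waveforms.
    Deterministic seeding (first and last sample) keeps the demo stable;
    this hard-codes k=2 centers."""
    centers = X[[0, len(X) - 1]].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(X)
# The two synthetic classes should fall into two different clusters.
assert len(set(labels[:20])) == 1 and len(set(labels[20:])) == 1
assert labels[0] != labels[20]
```

A real pipeline would extract amplitude/shape features and validate against expert labels; the point here is only that shape classes can be separated without predetermined templates.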

  14. Computational connectionism within neurons: A model of cytoskeletal automata subserving neural networks

    Science.gov (United States)

    Rasmussen, Steen; Karampurwala, Hasnain; Vaidyanath, Rajesh; Jensen, Klaus S.; Hameroff, Stuart

    1990-06-01

    “Neural network” models of brain function assume neurons and their synaptic connections to be the fundamental units of information processing, somewhat like switches within computers. However, neurons and synapses are extremely complex and resemble entire computers rather than switches. The interiors of the neurons (and other eucaryotic cells) are now known to contain highly ordered parallel networks of filamentous protein polymers collectively termed the cytoskeleton. Originally assumed to provide merely structural “bone-like” support, cytoskeletal structures such as microtubules are now recognized to organize cell interiors dynamically. The cytoskeleton is the internal communication network for the eucaryotic cell, both by means of simple transport and by means of coordinating extremely complicated events like cell division, growth and differentiation. The cytoskeleton may therefore be viewed as the cell's “nervous system”. Consequently the neuronal cytoskeleton may be involved in molecular level information processing which subserves higher, collective neuronal functions ultimately relating to cognition. Numerous models of information processing within the cytoskeleton (in particular, microtubules) have been proposed. We have utilized cellular automata as a means to model and demonstrate the potential for information processing in cytoskeletal microtubules. In this paper, we extend previous work and simulate associative learning in a cytoskeletal network as well as assembly and disassembly of microtubules. We also discuss possible relevance and implications of cytoskeletal information processing to cognition.
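
    The specific automaton rules used by the authors are not given in the abstract; as a generic illustration of lattice information processing of the kind described, here is a minimal elementary cellular automaton in Python. The mapping of tubulin-dimer states to rule-110 cells is purely illustrative.

```python
import numpy as np

def step(state, rule=110):
    """One synchronous update of an elementary cellular automaton with
    periodic boundaries; a toy stand-in for tubulin-dimer state updates."""
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    idx = 4 * left + 2 * state + right            # 3-cell neighborhood as a 3-bit number
    table = np.array([(rule >> i) & 1 for i in range(8)])
    return table[idx]

state = np.zeros(31, dtype=int)
state[15] = 1                                      # single active dimer
history = [state]
for _ in range(10):
    history.append(step(history[-1]))
# Rule 110 from a single 1 activates the cell and its left neighbor first.
assert history[1].sum() == 2
```

The interest for the cytoskeletal models lies in how such local update rules can propagate and combine patterns along the lattice, which is the substrate the paper proposes for sub-neuronal computation.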

  15. [Mirror neurons].

    Science.gov (United States)

    Rubia Vila, Francisco José

    2011-01-01

    Mirror neurons were recently discovered in frontal brain areas of the monkey. They are activated when the animal makes a specific movement, but also when the animal observes the same movement in another animal. Some of them also respond to the emotional expression of other animals of the same species. These mirror neurons have also been found in humans. They respond to or "reflect" actions of other individuals in the brain and are thought to represent the basis for imitation and empathy and hence the neurobiological substrate for "theory of mind", the potential origin of language and the so-called moral instinct.

  16. Learning-related brain hemispheric dominance in sleeping songbirds.

    Science.gov (United States)

    Moorman, Sanne; Gobes, Sharon M H; van de Kamp, Ferdinand C; Zandbergen, Matthijs A; Bolhuis, Johan J

    2015-03-12

    There are striking behavioural and neural parallels between the acquisition of speech in humans and song learning in songbirds. In humans, language-related brain activation is mostly lateralised to the left hemisphere. During language acquisition in humans, brain hemispheric lateralisation develops as language proficiency increases. Sleep is important for the formation of long-term memory, in humans as well as in other animals, including songbirds. Here, we measured neuronal activation (as the expression pattern of the immediate early gene ZENK) during sleep in juvenile zebra finch males that were still learning their songs from a tutor. We found that during sleep, there was learning-dependent lateralisation of spontaneous neuronal activation in the caudomedial nidopallium (NCM), a secondary auditory brain region that is involved in tutor song memory, while there was right hemisphere dominance of neuronal activation in HVC (used as a proper name), a premotor nucleus that is involved in song production and sensorimotor learning. Specifically, in the NCM, birds that imitated their tutors well were left dominant, while poor imitators were right dominant, similar to language-proficiency related lateralisation in humans. Given the avian-human parallels, lateralised neural activation during sleep may also be important for speech and language acquisition in human infants.

  17. Adaptive Neurons For Artificial Neural Networks

    Science.gov (United States)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.
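
    The brief does not reproduce the mathematical formalism itself; the following Python sketch only illustrates the general idea of a neuron whose temperature is varied alongside weight updates during supervised learning. The network, annealing schedule, and task (an AND gate) are invented for the example.

```python
import numpy as np

def sigmoid(x, T):
    """Logistic activation with a neuron temperature T: high T flattens the
    transfer function, low T sharpens it (a simple annealing assumption)."""
    return 1.0 / (1.0 + np.exp(-x / T))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 0.0, 0.0, 1.0])                # AND gate targets
w, b = rng.standard_normal(2) * 0.1, 0.0

T, lr = 2.0, 0.5
for epoch in range(1000):
    out = sigmoid(X @ w + b, T)
    err = out - y
    grad = err * out * (1 - out) / T              # d(sigmoid(net/T))/d(net)
    w -= lr * X.T @ grad                          # gradient-descent weight update
    b -= lr * grad.sum()
    T = max(0.2, T * 0.99)                        # anneal the neuron temperature

pred = (sigmoid(X @ w + b, T) > 0.5).astype(int)
assert (pred == y).all()
```

Starting hot and cooling lets early updates see a smooth error surface while late updates sharpen the decision, which is the intuition behind varying neuron temperature during the learning phase.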

  18. Parallelization in Modern C++

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI both target lower-level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher-level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts, however, have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuations...

  19. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  20. Parallel digital forensics infrastructure.

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  1. Introduction to Parallel Computing

    Science.gov (United States)

    1992-05-01

    (Table residue: supported languages include C, Ada, C++, data-parallel FORTRAN, and FORTRAN-90 (late 1992); the topology is a 2D mesh of node boards, each board with one application processor; a development-tools column follows.) As parallel machines become the wave of the present, tools are increasingly needed to assist programmers in creating parallel tasks and coordinating their activities. Linda was designed to be such a tool. Linda was designed with three important goals in mind: to be portable, efficient, and easy to use

  2. Parallel Wolff Cluster Algorithms

    Science.gov (United States)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
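
    The parallel MIMD implementations described in the abstract are not shown there; as a reference point, here is the serial single-cluster update that any parallel version has to decompose, sketched in Python for a toy 2D Ising model. The lattice size, coupling, and seeds are invented for the example.

```python
import math
import random
import numpy as np

def wolff_step(spins, beta, rng):
    """Grow one Wolff cluster from a random seed site and flip it
    (2D Ising model, periodic boundaries). Returns the cluster size."""
    L = spins.shape[0]
    p_add = 1.0 - math.exp(-2.0 * beta)           # bond-activation probability
    seed = (rng.randrange(L), rng.randrange(L))
    s0 = spins[seed]
    cluster = {seed}
    frontier = [seed]
    while frontier:
        i, j = frontier.pop()
        for ni, nj in (((i + 1) % L, j), ((i - 1) % L, j),
                       (i, (j + 1) % L), (i, (j - 1) % L)):
            if ((ni, nj) not in cluster and spins[ni, nj] == s0
                    and rng.random() < p_add):
                cluster.add((ni, nj))
                frontier.append((ni, nj))
    for i, j in cluster:                           # flip the whole cluster at once
        spins[i, j] = -s0
    return len(cluster)

rng = random.Random(42)
L, beta = 8, 0.6                                   # beta above critical (~0.4407)
spins = np.ones((L, L), dtype=int)
sizes = [wolff_step(spins, beta, rng) for _ in range(50)]
```

The irregular, data-dependent growth of `cluster` is exactly what makes this loop hard to distribute across processors, which is the problem the paper's two parallel implementations address.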

  3. Practical Parallel Rendering

    CERN Document Server

    Chalmers, Alan

    2002-01-01

    Meeting the growing demands for speed and quality in rendering computer graphics images requires new techniques. Practical parallel rendering provides one of the most practical solutions. This book addresses the basic issues of rendering within a parallel or distributed computing environment, and considers the strengths and weaknesses of multiprocessor machines and networked render farms for graphics rendering. Case studies of working applications demonstrate, in detail, practical ways of dealing with complex issues involved in parallel processing.

  4. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
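
    As a concrete instance of one of the named patterns, here is a prefix scan by recursive doubling in Python (a Hillis-Steele-style formulation). The serial `while` loop stands in for the log-depth parallel sweeps; each sweep's vectorized addition is the part that would run concurrently.

```python
import numpy as np

def scan_doubling(a):
    """Inclusive prefix sum by recursive doubling: ceil(log2(n)) sweeps,
    each a data-parallel shifted addition (the 'prefix scan' pattern)."""
    a = a.copy()
    shift = 1
    while shift < len(a):
        a[shift:] = a[shift:] + a[:-shift]   # one parallel sweep
        shift *= 2
    return a

x = np.arange(1, 9)
assert (scan_doubling(x) == np.cumsum(x)).all()
```

A reduction is the same idea collapsed to a single value; a scan keeps every intermediate partial result, which is why it appears in so many of the scenarios the presentation lists.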

  5. Approach of generating parallel programs from parallelized algorithm design strategies

    Institute of Scientific and Technical Information of China (English)

    WAN Jian-yi; LI Xiao-ying

    2008-01-01

    Today, parallel programming is dominated by message passing libraries, such as the message passing interface (MPI). This article intends to simplify parallel programming by generating parallel programs from parallelized algorithm design strategies. It uses skeletons to abstract parallelized algorithm design strategies, as well as parallel architectures. Starting from a problem specification, an abstract parallel programming language+ (Apla+) program is generated from parallelized algorithm design strategies and problem-specific function definitions. By combining with parallel architectures, the implicit parallelism inside the parallelized algorithm design strategies is exploited. With implementation and transformation, a C++ with parallel virtual machine (CPPVM) parallel program is finally generated. The parallelized branch and bound (B&B) and parallelized divide and conquer (D&C) algorithm design strategies are studied in this article as examples, and a case study illustrates the approach.
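
    Apla+ itself is not shown in the abstract; the following Python sketch only illustrates the skeleton idea behind it: a generic divide-and-conquer strategy parameterized by problem-specific functions. The function names and the mergesort instantiation are invented for the example.

```python
import heapq

def dac(problem, is_trivial, solve, divide, combine):
    """Generic divide-and-conquer skeleton: the strategy is fixed once;
    only the four problem-specific functions vary per application."""
    if is_trivial(problem):
        return solve(problem)
    return combine([dac(p, is_trivial, solve, divide, combine)
                    for p in divide(problem)])

# Instantiate the skeleton as mergesort by supplying the four plug-ins.
merged = dac([5, 2, 9, 1, 6, 3],
             is_trivial=lambda xs: len(xs) <= 1,
             solve=lambda xs: xs,
             divide=lambda xs: [xs[:len(xs) // 2], xs[len(xs) // 2:]],
             combine=lambda parts: list(heapq.merge(*parts)))
assert merged == [1, 2, 3, 5, 6, 9]
```

In a skeleton-based generator, the recursive calls produced by `divide` are the natural units to map onto separate processors, since they are independent by construction.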

  6. Patterns For Parallel Programming

    CERN Document Server

    Mattson, Timothy G; Massingill, Berna L

    2005-01-01

    From grids and clusters to next-generation game consoles, parallel computing is going mainstream. Innovations such as Hyper-Threading Technology, HyperTransport Technology, and multicore microprocessors from IBM, Intel, and Sun are accelerating the movement's growth. Only one thing is missing: programmers with the skills to meet the soaring demand for parallel software.

  7. Parallel scheduling algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
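
    The paper's shared-memory parallel algorithms are not given in the abstract; for reference, here is the classical serial algorithm for one of the listed problems, minimizing the number of tardy jobs on one machine (the Moore-Hodgson rule), sketched in Python with an invented instance.

```python
import heapq

def moore_hodgson(jobs):
    """Moore-Hodgson: sequence (processing_time, due_date) jobs on one machine
    to minimize the number of tardy jobs. Returns the size of the on-time set."""
    jobs = sorted(jobs, key=lambda j: j[1])   # earliest due date first
    heap, t = [], 0                            # max-heap via negated times
    for p, d in jobs:
        t += p
        heapq.heappush(heap, -p)
        if t > d:                              # deadline missed: drop the
            t += heapq.heappop(heap)           # longest job so far (adds -p_max)
    return len(heap)

# Invented instance of (processing_time, due_date) pairs.
jobs = [(2, 3), (3, 6), (2, 7), (4, 8), (3, 9)]
assert moore_hodgson(jobs) == 3
```

The sort and the repeated max-extraction are the two steps a shared-memory parallelization would target, e.g., with a parallel sort and concurrent priority-queue operations.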

  8. Neuron Learning to Network Organization.

    Science.gov (United States)

    1983-12-20

    ... bodies, and of the conduction of electric currents.* (Maxwell, 1856) Modesty perhaps, but not entirely unwarranted: for in spite of his enormous talents ... visual cortex, from R. Cajal, Histologie du Système Nerveux ... mostly hard-wired and perform a great variety of control functions took hundreds of millions of ... initiating a flow of ions that alter the dendrite potential. The dendrite potentials propagate to the cell body where they are integrated.

  9. Motor Neurons

    DEFF Research Database (Denmark)

    Hounsgaard, Jorn

    2017-01-01

    Motor neurons translate synaptic input from widely distributed premotor networks into patterns of action potentials that orchestrate motor unit force and motor behavior. Intercalated between the CNS and muscles, motor neurons add to and adjust the final motor command. The identity and functional properties of this facility in the path from synaptic sites to the motor axon are reviewed with emphasis on voltage-sensitive ion channels and regulatory metabotropic transmitter pathways. The catalog of the intrinsic response properties, their underlying mechanisms, and regulation obtained from motoneurons in in vitro preparations is far from complete. Nevertheless, a foundation has been provided for pursuing functional significance of intrinsic response properties in motoneurons in vivo during motor behavior at levels from molecules to systems.

  10. The origin and function of mirror neurons: the missing link.

    Science.gov (United States)

    Lingnau, Angelika; Caramazza, Alfonso

    2014-04-01

    We argue, by analogy to the neural organization of the object recognition system, that demonstration of modulation of mirror neurons by associative learning does not imply absence of genetic adaptation. Innate connectivity defines the types of processes mirror neurons can participate in while allowing for extensive local plasticity. However, the proper function of these neurons remains to be worked out.

  11. Local and commissural IC neurons make axosomatic inputs on large GABAergic tectothalamic neurons.

    Science.gov (United States)

    Ito, Tetsufumi; Oliver, Douglas L

    2014-10-15

    Large GABAergic (LG) neurons are a distinct type of neuron in the inferior colliculus (IC) identified by their dense vesicular glutamate transporter 2 (VGLUT2)-containing axosomatic synaptic terminals. Yet the sources of these terminals are unknown. Since IC glutamatergic neurons express VGLUT2, and IC neurons are known to have local collaterals, we tested the hypothesis that these excitatory, glutamatergic axosomatic inputs on LG neurons come from local axonal collaterals and commissural IC neurons. We injected a recombinant viral tracer into the IC which enabled Golgi-like green fluorescent protein (GFP) labeling in both dendrites and axons. In all cases, we found terminals positive for both GFP and VGLUT2 (GFP+/VGLUT2+) that made axosomatic contacts on LG neurons. One to six axosomatic contacts were made on a single LG cell body by a single axonal branch. The GFP-labeled neurons giving rise to the VGLUT2+ terminals on LG neurons were close by. The density of GFP+/VGLUT2+ terminals on the LG neurons was related to the number of nearby GFP-labeled cells. On the contralateral side, a smaller number of LG neurons received axosomatic contacts from GFP+/VGLUT2+ terminals. In cases with a single GFP-labeled glutamatergic neuron, the labeled axonal plexus was flat, oriented in parallel to the fibrodendritic laminae, and contacted 9-30 LG cell bodies within the plexus. Our data demonstrated that within the IC microcircuitry there is a convergence of inputs from local IC excitatory neurons on LG cell bodies. This suggests that LG neurons are heavily influenced by the activity of the nearby laminar glutamatergic neurons in the IC.

  12. [Neurons that encode sound direction].

    Science.gov (United States)

    Peña, J L

    In the auditory system, the inner ear breaks down complex signals into their spectral components, and encodes the amplitude and phase of each. In order to infer sound direction in space, a computation on each frequency component of the sound must be performed. Space-specific neurons in the owl's inferior colliculus respond only to sounds coming from a particular direction and represent the results of this computation. The interaural time difference (ITD) and interaural level difference (ILD) define the auditory space for the owl and are processed in separate neural pathways. The parallel pathways that process these cues merge in the external nucleus of the inferior colliculus, where the space-specific neurons are selective to combinations of ITD and ILD. How do inputs from the two sources interact to produce combination selectivity to ITD-ILD pairs? A multiplication of postsynaptic potentials tuned to ITD and ILD can account for the subthreshold responses of these neurons to ITD-ILD pairs. Examples of multiplication by neurons or neural circuits are scarce, but many computational models assume the existence of this basic operation. The owl's auditory system uses such an operation to create a two-dimensional map of auditory space. The map of space in the owl's auditory system shows important similarities with representations of space in the cerebral cortex and other sensory systems. In encoding space or other stimulus features, individual neurons appear to possess analogous functional properties related to the synthesis of high-order receptive fields.
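
    The multiplicative combination described above can be sketched numerically. The Gaussian tuning-curve shapes, preferred values, grids, and units below are invented for illustration, not taken from the owl data.

```python
import numpy as np

def gaussian_tuning(x, pref, width):
    """Hypothetical bell-shaped tuning curve around a preferred value."""
    return np.exp(-0.5 * ((x - pref) / width) ** 2)

# Hypothetical space-specific neuron: subthreshold response modeled as the
# product of an ITD-tuned and an ILD-tuned postsynaptic potential.
itd = np.linspace(-200, 200, 81)        # interaural time difference grid (us)
ild = np.linspace(-20, 20, 41)          # interaural level difference grid (dB)
r_itd = gaussian_tuning(itd, 50.0, 40.0)
r_ild = gaussian_tuning(ild, -5.0, 6.0)
response = np.outer(r_itd, r_ild)       # multiplicative combination

# The 2-D response peaks only at the preferred ITD-ILD pair.
i, j = np.unravel_index(response.argmax(), response.shape)
assert abs(itd[i] - 50.0) < 5 and abs(ild[j] + 5.0) < 1
```

A sum of the two inputs would produce ridges along each cue axis; the product yields a single localized peak, which is what combination selectivity to ITD-ILD pairs requires.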

  13. Stochastic phase-change neurons

    Science.gov (United States)

    Tuma, Tomas; Pantazi, Angeliki; Le Gallo, Manuel; Sebastian, Abu; Eleftheriou, Evangelos

    2016-08-01

    Artificial neuromorphic systems based on populations of spiking neurons are an indispensable tool in understanding the human brain and in constructing neuromimetic computational systems. To reach areal and power efficiencies comparable to those seen in biological systems, electroionics-based and phase-change-based memristive devices have been explored as nanoscale counterparts of synapses. However, progress on scalable realizations of neurons has so far been limited. Here, we show that chalcogenide-based phase-change materials can be used to create an artificial neuron in which the membrane potential is represented by the phase configuration of the nanoscale phase-change device. By exploiting the physics of reversible amorphous-to-crystal phase transitions, we show that the temporal integration of postsynaptic potentials can be achieved on a nanosecond timescale. Moreover, we show that this is inherently stochastic because of the melt-quench-induced reconfiguration of the atomic structure occurring when the neuron is reset. We demonstrate the use of these phase-change neurons, and their populations, in the detection of temporal correlations in parallel data streams and in sub-Nyquist representation of high-bandwidth signals.
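
    As a loose caricature of the described behavior rather than the device physics, here is a Python sketch of an integrate-and-fire neuron whose post-spike reset level is stochastic, echoing the melt-quench reconfiguration of the phase-change cell. All parameters and names are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def stochastic_lif(inputs, threshold=1.0, leak=0.02, reset_jitter=0.1):
    """Toy stochastic integrate-and-fire neuron: the membrane state integrates
    inputs (cf. progressive crystallization of the cell), and the post-spike
    reset is noisy (cf. melt-quench reconfiguration). Returns a 0/1 spike train."""
    v, spikes = 0.0, []
    for x in inputs:
        v = (1.0 - leak) * v + x            # leaky integration of the input
        if v >= threshold:
            spikes.append(1)
            v = abs(rng.normal(0.0, reset_jitter))   # stochastic reset level
        else:
            spikes.append(0)
    return spikes

spikes = stochastic_lif(rng.uniform(0.0, 0.3, size=500))
assert 0 < sum(spikes) < 500               # fires intermittently, not every step
```

Populations of such neurons fire at slightly different times for identical input, and it is this spread that the paper exploits for detecting temporal correlations and for sub-Nyquist signal representation.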

  14. Energy and information in Hodgkin-Huxley neurons

    CERN Document Server

    Moujahid, A; Torrealdea, F J

    2015-01-01

    The generation of spikes by neurons is energetically a costly process, and the evaluation of the metabolic energy required to maintain the signalling activity of neurons is a challenge of practical interest. Neuron models are frequently used to represent the dynamics of real neurons but hardly ever to evaluate the electrochemical energy required to maintain that dynamics. This paper discusses the interpretation of a Hodgkin-Huxley circuit as an energy model for real biological neurons and uses it to evaluate the consumption of metabolic energy in the transmission of information between neurons coupled by electrical synapses, i.e. gap junctions. We show that for a single postsynaptic neuron maximum energy efficiency, measured in bits of mutual information per ATP molecule consumed, requires maximum energy consumption. On the contrary, for groups of parallel postsynaptic neurons we determine values of the synaptic conductance at which the energy efficiency of the transmission presents clear maxima at relatively ver...

  15. Molecular mechanism of parallel fiber-Purkinje cell synapse formation.

    Science.gov (United States)

    Mishina, Masayoshi; Uemura, Takeshi; Yasumura, Misato; Yoshida, Tomoyuki

    2012-01-01

    The cerebellum receives two excitatory afferents, the climbing fiber (CF) and the mossy fiber-parallel fiber (PF) pathway, both converging onto Purkinje cells (PCs) that are the sole neurons sending outputs from the cerebellar cortex. Glutamate receptor δ2 (GluRδ2) is expressed selectively in cerebellar PCs and localized exclusively at the PF-PC synapses. We found that a significant number of PC spines lack synaptic contacts with PF terminals and some of the residual PF-PC synapses show mismatching between pre- and postsynaptic specializations in conventional and conditional GluRδ2 knockout mice. Studies with mutant mice revealed that in addition to PF-PC synapse formation, GluRδ2 is essential for synaptic plasticity, motor learning, and the restriction of CF territory. GluRδ2 regulates synapse formation through the amino-terminal domain, while the control of synaptic plasticity, motor learning, and CF territory is mediated through the carboxyl-terminal domain. Thus, GluRδ2 is the molecule that bridges synapse formation and motor learning. We found that the trans-synaptic interaction of postsynaptic GluRδ2 and presynaptic neurexins (NRXNs) through cerebellin 1 (Cbln1) mediates PF-PC synapse formation. The synaptogenic triad is composed of one molecule of tetrameric GluRδ2, two molecules of hexameric Cbln1 and four molecules of monomeric NRXN. Thus, GluRδ2 triggers synapse formation by clustering four NRXNs. These findings provide a molecular insight into the mechanism of synapse formation in the brain.

  16. Molecular mechanism of parallel fiber-Purkinje cell synapse formation

    Directory of Open Access Journals (Sweden)

    Masayoshi eMishina

    2012-11-01

    Full Text Available The cerebellum receives two excitatory afferents, the climbing fiber (CF) and the mossy fiber-parallel fiber (PF) pathway, both converging onto Purkinje cells (PCs) that are the sole neurons sending outputs from the cerebellar cortex. Glutamate receptor δ2 (GluRδ2) is expressed selectively in cerebellar PCs and localized exclusively at the PF-PC synapses. We found that a significant number of PC spines lack synaptic contacts with PF terminals and some of the residual PF-PC synapses show mismatching between pre- and postsynaptic specializations in conventional and conditional GluRδ2 knockout mice. Studies with mutant mice revealed that in addition to PF-PC synapse formation, GluRδ2 is essential for synaptic plasticity, motor learning and the restriction of CF territory. GluRδ2 regulates synapse formation through the amino-terminal domain, while the control of synaptic plasticity, motor learning and CF territory is mediated through the carboxyl-terminal domain. Thus, GluRδ2 is the molecule that bridges synapse formation and motor learning. We found that the trans-synaptic interaction of postsynaptic GluRδ2 and presynaptic neurexins (NRXNs) through Cbln1 mediates PF-PC synapse formation. The synaptogenic triad is composed of one molecule of tetrameric GluRδ2, two molecules of hexameric Cbln1 and four molecules of monomeric NRXN. Thus, GluRδ2 triggers synapse formation by clustering four NRXNs. These findings provide a molecular insight into the mechanism of synapse formation in the brain.

  17. Scalable Parallel Implementation of GEANT4 Using Commodity Hardware and Task Oriented Parallel C

    CERN Document Server

    Cooperman, Gene; Anchordoqui, Luis; Grinberg, Victor; McCauley, Thomas; Reucroft, Stephen; Swain, John

    2000-01-01

    We describe a scalable parallelization of Geant4 using commodity hardware in a collaborative effort between the College of Computer Science and the Department of Physics at Northeastern University. The system consists of a Beowulf cluster of 32 Pentium II processors with 128 MBytes of memory each, connected via ATM and fast Ethernet. The bulk of the parallelization is done using TOP-C (Task Oriented Parallel C), software widely used in the computational algebra community. TOP-C provides a flexible and powerful framework for parallel algorithm development, is easy to learn, and is available at no cost. Its task oriented nature allows one to parallelize legacy code while hiding the details of interprocess communications. Applications include fast interactive simulation of computationally intensive processes such as electromagnetic showers. General results motivate wider applications of TOP-C to other simulation problems as well as to pattern recognition in high energy physics.

  18. A machine learning methodology for the selection and classification of spontaneous spinal cord dorsum potentials allows disclosure of structured (non-random) changes in neuronal connectivity induced by nociceptive stimulation.

    Science.gov (United States)

    Martin, Mario; Contreras-Hernández, Enrique; Béjar, Javier; Esposito, Gennaro; Chávez, Diógenes; Glusman, Silvio; Cortés, Ulises; Rudomin, Pablo

    2015-01-01

    Previous studies aiming to disclose the functional organization of the neuronal networks involved in the generation of the spontaneous cord dorsum potentials (CDPs) generated in the lumbosacral spinal segments used predetermined templates to select specific classes of spontaneous CDPs. Since this procedure was time consuming and required continuous supervision, it was limited to the analysis of two specific types of CDPs (negative CDPs and negative-positive CDPs), thus excluding potentials that may reflect the activation of other neuronal networks of presumed functional relevance. We now present a novel procedure based on machine learning that allows the efficient and unbiased selection of a variety of spontaneous CDPs with different shapes and amplitudes. The reliability and performance of the present method are evaluated by analyzing the effects on the probabilities of generation of different classes of spontaneous CDPs induced by the intradermal injection of small amounts of capsaicin in the anesthetized cat, a procedure known to induce a state of central sensitization leading to allodynia and hyperalgesia. The selection method described here allowed detection of spontaneous CDPs with specific shapes and amplitudes that are assumed to represent the activation of functionally coupled sets of dorsal horn neurones that acquire different, structured configurations in response to nociceptive stimuli. These changes are considered as responses tending to adjust the transmission of sensory information to specific functional requirements as part of homeostatic adjustments.
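
    One way to make template-free selection concrete is to cluster simple shape features of candidate potentials. The sketch below is an illustrative stand-in, not the authors' pipeline: the two features (signed peak, net area), the choice of k, and the plain k-means loop are all assumptions.

```python
import random

def features(waveform):
    """Two crude shape features: signed peak amplitude and net area,
    enough to separate purely negative from negative-positive shapes."""
    peak = max(waveform, key=abs)
    area = sum(waveform)
    return (peak, area)

def kmeans(points, k=2, iters=25, seed=0):
    """Plain Lloyd's k-means on feature tuples; no templates required."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        new_centers = []
        for i, g in enumerate(groups):
            if g:                       # recenter on the group mean
                new_centers.append(tuple(sum(v) / len(g) for v in zip(*g)))
            else:                       # keep an empty group's old center
                new_centers.append(centers[i])
        centers = new_centers
    return centers, groups
```

    Run on a mixed pool of waveforms, the clusters recover the negative and negative-positive families without any predetermined template.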

  20. Mirror neurons: from origin to function.

    Science.gov (United States)

    Cook, Richard; Bird, Geoffrey; Catmur, Caroline; Press, Clare; Heyes, Cecilia

    2014-04-01

    This article argues that mirror neurons originate in sensorimotor associative learning and therefore a new approach is needed to investigate their functions. Mirror neurons were discovered about 20 years ago in the monkey brain, and there is now evidence that they are also present in the human brain. The intriguing feature of many mirror neurons is that they fire not only when the animal is performing an action, such as grasping an object using a power grip, but also when the animal passively observes a similar action performed by another agent. It is widely believed that mirror neurons are a genetic adaptation for action understanding; that they were designed by evolution to fulfill a specific socio-cognitive function. In contrast, we argue that mirror neurons are forged by domain-general processes of associative learning in the course of individual development, and, although they may have psychological functions, they do not necessarily have a specific evolutionary purpose or adaptive function. The evidence supporting this view shows that (1) mirror neurons do not consistently encode action "goals"; (2) the contingency- and context-sensitive nature of associative learning explains the full range of mirror neuron properties; (3) human infants receive enough sensorimotor experience to support associative learning of mirror neurons ("wealth of the stimulus"); and (4) mirror neurons can be changed in radical ways by sensorimotor training. The associative account implies that reliable information about the function of mirror neurons can be obtained only by research based on developmental history, system-level theory, and careful experimentation.

  1. Parallel Web Mining System Based on Cloud Platform

    Institute of Scientific and Technical Information of China (English)

    Shengmei Luo; Qing He; Lixia Liu; Xiang Ao; Ning Li; Fuzhen Zhuang

    2012-01-01

    Traditional machine-learning algorithms are struggling to handle the exceedingly large amount of data being generated by the internet. In real-world applications, there is an urgent need for machine-learning algorithms able to handle large-scale, high-dimensional text data. Cloud computing involves the delivery of computing and storage as a service to a heterogeneous community of recipients. Recently, it has aroused much interest in industry and academia. Most previous work on cloud platforms has focused only on parallel algorithms for structured data. In this paper, we focus on the parallel implementation of web-mining algorithms and develop a parallel web-mining system that includes parallel web crawler; parallel text extract, transform and load (ETL) and modeling; and parallel text mining and application subsystems. The complete system enables various real-world web-mining applications for mass data.

  2. Parallel Programming Paradigms

    Science.gov (United States)

    1987-07-01

    Parallel Programming Paradigms. Philip Arne Nelson, Department of Computer Science. Supported in part by Grant 8416878 and by Office of Naval Research Contracts No. N00014-86-K-0264 and No. N00014-85-K-0328.

  3. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  4. Java Parallel Implementations of Kohonen Self-Organizing Feature Maps

    Institute of Scientific and Technical Information of China (English)

    YANG Shang-ming; HU Jie

    2004-01-01

    The Kohonen self-organizing map (SOM) is an important tool for finding a mapping from a high-dimensional space to a low-dimensional space. The time a SOM requires increases with the number of neurons, so a parallel implementation of the algorithm can make it faster. This paper investigates the most recent parallel algorithms for SOMs. Using Java network programming utilities, improved parallel and distributed systems are set up to simulate these algorithms. From the simulations, we conclude that those algorithms form good feature maps.
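
    A minimal serial SOM in plain Python shows the computation that the parallel versions distribute across nodes; the map size, decay schedules and 1-D neighborhood here are illustrative choices, not those of the paper.

```python
import random

def train_som(data, n_neurons=10, epochs=50, lr0=0.5, radius0=3.0, seed=1):
    """Train a 1-D self-organizing map on a list of equal-length vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_neurons)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                   # decaying learning rate
        radius = max(radius0 * (1 - t / epochs), 0.5) # shrinking neighborhood
        for x in data:
            # best-matching unit = neuron whose weight vector is closest to x
            bmu = min(range(n_neurons),
                      key=lambda i: sum((w - v) ** 2
                                        for w, v in zip(weights[i], x)))
            for i in range(n_neurons):
                d = abs(i - bmu)                      # 1-D map distance
                if d <= radius:
                    h = lr * (1 - d / (radius + 1))   # neighborhood strength
                    weights[i] = [w + h * (v - w)
                                  for w, v in zip(weights[i], x)]
    return weights
```

    After training on two well-separated clusters, opposite regions of the map specialize to the two clusters, which is the "good feature map" property the simulations check.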

  5. A New English–Arabic Parallel Text Corpus for Lexicographic ...

    African Journals Online (AJOL)

    no parallel corpora for the English–Arabic language pair have yet been developed. This is ... learning by discovery, enhancement of vocabulary, preparation of ... employs relevancy ranking and ignores common Arabic errors, e.g. hamza, ...

  6. Heterogeneous Parallel Computing

    OpenAIRE

    2013-01-01

    With processor core counts doubling every 18-24 months and penetrating all markets from high-end servers in supercomputers to desktops and laptops down to even mobile phones, we sit at the dawn of a world of ubiquitous parallelism, one where extracting performance via parallelism is paramount. That is, the "free lunch" to better performance, where programmers could rely on substantial increases in single-threaded performance to improve software, is over. The burden falls on developers to expl...

  7. Parallel Software Model Checking

    Science.gov (United States)

    2015-01-08

    Parallel Software Model Checking. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213, January 2015. Team members: Sagar Chaki, Arie Gurfinkel.

  8. Continuous parallel coordinates.

    Science.gov (United States)

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data.
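
    The point-line duality the density model builds on can be written down directly for two dimensions. The sketch below assumes unit axis spacing (an illustrative layout choice): a data point (a, b) becomes the segment joining value a on the first axis to value b on the second, and every sample of a scatterplot line y = m·x + t yields a segment through one common dual point.

```python
def to_segment(a, b):
    """Data point (a, b) -> segment endpoints on the axes at x=0 and x=1."""
    return (0.0, a), (1.0, b)

def dual_point(m, t):
    """Dual of the scatterplot line y = m*x + t (defined for m != 1)."""
    return (1.0 / (1.0 - m), t / (1.0 - m))

def segment_at(seg, x):
    """Height of a segment (extended to a full line) at horizontal position x."""
    (x0, y0), (x1, y1) = seg
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

    Because all segments of points on one line cross at the dual point, linear features in the scatterplot show up as high-density points in continuous parallel coordinates.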

  9. Dennexin peptides modeled after the homophilic binding sites of the neural cell adhesion molecule (NCAM) promote neuronal survival, modify cell adhesion and impair spatial learning

    DEFF Research Database (Denmark)

    Køhler, Lene B; Christensen, Claus; Rossetti, Clara

    2010-01-01

    Neural cell adhesion molecule (NCAM)-mediated cell adhesion results in activation of intracellular signaling cascades that lead to cellular responses such as neurite outgrowth, neuronal survival, and modulation of synaptic activity associated with cognitive processes. The crystal structure...... between Ig1 and Ig3 and between Ig2 and Ig2, respectively, observed in the crystal structure. Although the two dennexin peptides differed in amino acid sequence, they both modulated cell adhesion, reflected by inhibition of NCAM-mediated neurite outgrowth. Both dennexins also promoted neuronal survival...

  10. Bidirectional synaptic plasticity in intercalated amygdala neurons and the extinction of conditioned fear responses.

    Science.gov (United States)

    Royer, S; Paré, D

    2002-01-01

    Classical fear conditioning is believed to result from potentiation of conditioned synaptic inputs in the basolateral amygdala. That is, the conditioned stimulus would excite more neurons in the central nucleus and, via their projections to the brainstem and hypothalamus, evoke fear responses. However, much data suggests that extinction of fear responses does not depend on the reversal of these changes but on a parallel NMDA-dependent learning that competes with the first one. Because they control impulse traffic from the basolateral amygdala to the central nucleus, GABAergic neurons of the intercalated cell masses are ideally located to implement this second learning. Consistent with this hypothesis, the present study shows that low- and high-frequency stimulation of basolateral afferents respectively induce long-term depression (LTD) and potentiation (LTP) of responses in intercalated cells. Moreover, induction of LTP and LTD is prevented by application of an NMDA antagonist. To determine how these activity-dependent changes are expressed, we tested whether LTD and LTP induction are associated with modifications in paired-pulse facilitation, an index of transmitter release probability. Only LTP induction was associated with a change in paired-pulse facilitation. Depotentiation of previously potentiated synapses did not reverse the modification in paired-pulse facilitation, suggesting that LTP is associated with presynaptic alterations, but that LTD and depotentiation depend on postsynaptic changes. Taken together, our results suggest that basolateral synapses onto intercalated neurons can express NMDA-dependent LTP and LTD, consistent with the possibility that intercalated neurons are a critical locus of plasticity for the extinction of conditioned fear responses. Ultimately, these plastic events may prevent conditioned amygdala responses from exciting neurons of the central nucleus, and thus from evoking conditioned fear responses.

  11. Parallel finite-difference time-domain method

    CERN Document Server

    Yu, Wenhua

    2006-01-01

    The finite-difference time-domain (FDTD) method has revolutionized antenna design and electromagnetics engineering. This book raises the FDTD method to the next level by empowering it with the vast capabilities of parallel computing. It shows engineers how to exploit the natural parallel properties of FDTD to improve the existing FDTD method and to efficiently solve more complex and large problem sets. Professionals learn how to apply open source software to develop parallel software and hardware to run FDTD in parallel for their projects. The book features hands-on examples that illustrate the power of parallel FDTD and presents practical strategies for carrying out parallel FDTD. This detailed resource provides instructions on downloading, installing, and setting up the required open source software on either Windows or Linux systems, and includes a handy tutorial on parallel programming.
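
    The "natural parallel properties" of FDTD come from the locality of its update stencil. A minimal 1-D free-space sketch (unit-free constants and a soft Gaussian source; all values illustrative) shows why the grid can be cut into subdomains that only need to exchange boundary cells each step:

```python
import math

def gaussian_pulse(t, t0=30.0, spread=10.0):
    """Soft source waveform injected at one cell."""
    return math.exp(-((t - t0) / spread) ** 2)

def fdtd_1d(n_cells=200, n_steps=100, source_cell=100, courant=0.5):
    ez = [0.0] * n_cells                      # electric field
    hy = [0.0] * n_cells                      # magnetic field
    for t in range(n_steps):
        for k in range(n_cells - 1):          # H update needs only E neighbors
            hy[k] += courant * (ez[k + 1] - ez[k])
        for k in range(1, n_cells):           # E update needs only H neighbors
            ez[k] += courant * (hy[k] - hy[k - 1])
        ez[source_cell] += gaussian_pulse(t)  # soft source injection
    return ez
```

    In a parallel run each processor owns a slab of cells and, per time step, exchanges just its edge values with its neighbors, a halo-exchange strategy of the kind parallel FDTD builds on.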

  12. Short- and long-term memory in Drosophila require cAMP signaling in distinct neuron types.

    Science.gov (United States)

    Blum, Allison L; Li, Wanhe; Cressy, Mike; Dubnau, Josh

    2009-08-25

    A common feature of memory and its underlying synaptic plasticity is that each can be dissected into short-lived forms involving modification or trafficking of existing proteins and long-term forms that require new gene expression. An underlying assumption of this cellular view of memory consolidation is that these different mechanisms occur within a single neuron. At the neuroanatomical level, however, different temporal stages of memory can engage distinct neural circuits, a notion that has not been conceptually integrated with the cellular view. Here, we investigated this issue in the context of aversive Pavlovian olfactory memory in Drosophila. Previous studies have demonstrated a central role for cAMP signaling in the mushroom body (MB). The Ca(2+)-responsive adenylyl cyclase RUTABAGA is believed to be a coincidence detector in gamma neurons, one of the three principal classes of MB Kenyon cells. We were able to separately restore short-term or long-term memory to a rutabaga mutant with expression of rutabaga in different subsets of MB neurons. Our findings suggest a model in which the learning experience initiates two parallel associations: a short-lived trace in MB gamma neurons, and a long-lived trace in alpha/beta neurons.

  13. OpenCL parallel programming development cookbook

    CERN Document Server

    Tay, Raymond

    2013-01-01

    OpenCL Parallel Programming Development Cookbook will provide a set of advanced recipes that can be utilized to optimize existing code. This book is therefore ideal for experienced developers with a working knowledge of C/C++ and OpenCL.This book is intended for software developers who have often wondered what to do with that newly bought CPU or GPU they bought other than using it for playing computer games; this book is also for developers who have a working knowledge of C/C++ and who want to learn how to write parallel programs in OpenCL so that life isn't too boring.

  14. Deficiency of Cks1 Leads to Learning and Long-Term Memory Defects and p27 Dependent Formation of Neuronal Cofilin Aggregates.

    Science.gov (United States)

    Kukalev, Alexander; Ng, Yiu-Ming; Ju, Limei; Saidi, Amal; Lane, Sophie; Mondragon, Angeles; Dormann, Dirk; Walker, Sophie E; Grey, William; Ho, Philip Wing-Lok; Stephens, David N; Carr, Antony M; Lamsa, Karri; Tse, Eric; Yu, Veronica P C C

    2017-01-01

    In mitotic cells, the cyclin-dependent kinase (CDK) subunit protein CKS1 regulates S phase entry by mediating degradation of the CDK inhibitor p27. Although mature neurons lack mitotic CDKs, we found that CKS1 was actively expressed in post-mitotic neurons of the adult hippocampus. Interestingly, Cks1 knockout (Cks1-/-) mice exhibited poor long-term memory, and diminished maintenance of long-term potentiation in the hippocampal circuits. Furthermore, there was neuronal accumulation of cofilin-actin rods or cofilin aggregates, which are associated with defective dendritic spine maturation and synaptic loss. We further demonstrated that it was the increased p27 level that activated cofilin by suppressing the RhoA kinase-mediated inhibitory phosphorylation of cofilin, resulting in the formation of cofilin aggregates in the Cks1-/- neuronal cells. Consistent with reports that the peptidyl-prolyl-isomerase PIN1 competes with CKS1 for p27 binding, we found that inhibition of PIN1 diminished the formation of cofilin aggregates through decreasing p27 levels, thereby activating RhoA and increasing cofilin phosphorylation. Our results revealed that CKS1 is involved in normal glutamatergic synapse development and dendritic spine maturation in adult hippocampus through modulating p27 stability. © The Author 2016. Published by Oxford University Press.

  15. Where do mirror neurons come from?

    Science.gov (United States)

    Heyes, Cecilia

    2010-03-01

    Debates about the evolution of the 'mirror neuron system' imply that it is an adaptation for action understanding. Alternatively, mirror neurons may be a byproduct of associative learning. Here I argue that the adaptation and associative hypotheses both offer plausible accounts of the origin of mirror neurons, but the associative hypothesis has three advantages. First, it provides a straightforward, testable explanation for the differences between monkeys and humans that have led some researchers to question the existence of a mirror neuron system. Second, it is consistent with emerging evidence that mirror neurons contribute to a range of social cognitive functions, but do not play a dominant, specialised role in action understanding. Finally, the associative hypothesis is supported by recent data showing that, even in adulthood, the mirror neuron system can be transformed by sensorimotor learning. The associative account implies that mirror neurons come from sensorimotor experience, and that much of this experience is obtained through interaction with others. Therefore, if the associative account is correct, the mirror neuron system is a product, as well as a process, of social interaction. (c) 2009 Elsevier Ltd. All rights reserved.

  16. Real-Time Classification of Complex Patterns Using Spike-Based Learning in Neuromorphic VLSI.

    Science.gov (United States)

    Mitra, S; Fusi, S; Indiveri, G

    2009-02-01

    Real-time classification of patterns of spike trains is a difficult computational problem that both natural and artificial networks of spiking neurons are confronted with. The solution to this problem not only could contribute to understanding the fundamental mechanisms of computation used in the biological brain, but could also lead to efficient hardware implementations of a wide range of applications ranging from autonomous sensory-motor systems to brain-machine interfaces. Here we demonstrate real-time classification of complex patterns of mean firing rates, using a VLSI network of spiking neurons and dynamic synapses which implement a robust spike-driven plasticity mechanism. The learning rule implemented is a supervised one: a teacher signal provides the output neuron with an extra input spike-train during training, in parallel to the spike-trains that represent the input pattern. The teacher signal simply indicates if the neuron should respond to the input pattern with a high rate or with a low one. The learning mechanism modifies the synaptic weights only as long as the current generated by all the stimulated plastic synapses does not match the output desired by the teacher, as in the perceptron learning rule. We describe the implementation of this learning mechanism and present experimental data that demonstrate how the VLSI neural network can learn to classify patterns of neural activities, also in the case in which they are highly correlated.
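
    The perceptron-style rule described above can be stated compactly in software. The rate-based sketch below abstracts away the spikes: weights change only while the summed input current disagrees with the teacher's desired high/low output. The threshold, rates and learning rate are illustrative, not the chip's parameters.

```python
import random

def train(patterns, labels, epochs=50, lr=0.1, theta=0.5, seed=0):
    """Perceptron-style supervised rule on mean-rate input vectors."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in patterns[0]]
    for _ in range(epochs):
        for x, target in zip(patterns, labels):
            current = sum(wi * xi for wi, xi in zip(w, x))
            output = 1 if current > theta else 0
            if output != target:              # mismatch with teacher: adapt
                for i, xi in enumerate(x):
                    w[i] += lr * (target - output) * xi
    return w

def classify(w, x, theta=0.5):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0
```

    As in the hardware, learning stops as soon as the output matches the teacher, which keeps the weights bounded for separable patterns.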

  17. Development of a skilled forelimb reach task in mice and the effects of C-8 projecting cortical spinal neuron ablation in motor learning by photothermal Au nanoparticles

    OpenAIRE

    Montenegro, Justin R.

    2015-01-01

    Motor learning is measured quantitatively through many behavioral tests. Behavioral models for motor learning observe skill acquisition and performance over a period of time in rodents. One such behavioral test is the skilled forelimb reach-to-grasp test. This skilled forelimb reach-to-grasp test has been extensively used to observe motor learning in behavioral studies and is an appropriate metric that can be used to assess experiments of the motor cortex. In this study, the skilled foreli...

  18. Mapping Generative Models onto a Network of Digital Spiking Neurons.

    Science.gov (United States)

    Pedroni, Bruno U; Das, Srinjoy; Arthur, John V; Merolla, Paul A; Jackson, Bryan L; Modha, Dharmendra S; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2016-08-01

    Stochastic neural networks such as Restricted Boltzmann Machines (RBMs) have been successfully used in applications ranging from speech recognition to image classification, and are particularly interesting because of their potential for generative tasks. Inference and learning in these algorithms use a Markov Chain Monte Carlo procedure called Gibbs sampling, where a logistic function forms the kernel of this sampler. On the other side of the spectrum, neuromorphic systems have shown great promise for low-power and parallelized cognitive computing, but lack well-suited applications and automation procedures. In this work, we propose a systematic method for bridging the RBM algorithm and digital neuromorphic systems, with a generative pattern completion task as proof of concept. For this, we first propose a method of producing the Gibbs sampler using bio-inspired digital noisy integrate-and-fire neurons. Next, we describe the process of mapping generative RBMs trained offline onto the IBM TrueNorth neurosynaptic processor, a low-power digital neuromorphic VLSI substrate. Mapping these algorithms onto neuromorphic hardware presents unique challenges in network connectivity and weight and bias quantization, which, in turn, require architectural and design strategies for the physical realization. Generative performance is analyzed to validate the neuromorphic requirements and to best select the neuron parameters for the model. Lastly, we describe a design automation procedure which achieves optimal resource usage, accounting for the novel hardware adaptations. This work represents the first implementation of generative RBM inference on a neuromorphic VLSI substrate.
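
    The Gibbs kernel at the heart of RBM sampling is a logistic-probability coin flip per unit. A toy pure-Python block-Gibbs step is sketched below; the layer sizes, toy weights, and explicit transpose handling are illustrative, not the TrueNorth mapping.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_layer(inputs, weights, biases, rng):
    """Sample each binary unit of one layer given the opposite layer's state."""
    out = []
    for j, b in enumerate(biases):
        act = b + sum(inputs[i] * weights[i][j] for i in range(len(inputs)))
        out.append(1 if rng.random() < sigmoid(act) else 0)  # logistic kernel
    return out

def gibbs_step(v, W, b_h, b_v, rng):
    """One block-Gibbs step: v -> h uses W, h -> v uses W transposed."""
    h = sample_layer(v, W, b_h, rng)
    Wt = [[W[i][j] for i in range(len(W))] for j in range(len(W[0]))]
    v2 = sample_layer(h, Wt, b_v, rng)
    return v2, h
```

    On neuromorphic hardware the logistic coin flip is what the noisy integrate-and-fire neurons approximate, with the weights and biases quantized to fit the substrate.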

  19. Parallel Magnetic Resonance Imaging

    CERN Document Server

    Uecker, Martin

    2015-01-01

    The main disadvantages of Magnetic Resonance Imaging (MRI) are its long scan times and, in consequence, its sensitivity to motion. Exploiting the complementary information from multiple receive coils, parallel imaging is able to recover images from under-sampled k-space data and to accelerate the measurement. Because parallel magnetic resonance imaging can be used to accelerate basically any imaging sequence it has many important applications. Parallel imaging brought a fundamental shift in image reconstruction: Image reconstruction changed from a simple direct Fourier transform to the solution of an ill-conditioned inverse problem. This work gives an overview of image reconstruction from the perspective of inverse problems. After introducing basic concepts such as regularization, discretization, and iterative reconstruction, advanced topics are discussed including algorithms for auto-calibration, the connection to approximation theory, and the combination with compressed sensing.
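
    The shift from direct Fourier inversion to a regularized inverse problem can be illustrated on a toy scale: recover x from measurements y = A x by gradient descent on the Tikhonov-regularized least-squares objective ||A x - y||² + λ||x||². Here A is just a small matrix standing in for the coil-weighted Fourier encoding; λ, the step size, and the step count are illustrative.

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def reconstruct(A, y, lam=0.01, steps=500, lr=0.05):
    """Iterative reconstruction: minimize ||A x - y||^2 + lam * ||x||^2."""
    n = len(A[0])
    x = [0.0] * n
    At = [[A[i][j] for i in range(len(A))] for j in range(n)]  # transpose
    for _ in range(steps):
        r = [yi - ri for yi, ri in zip(y, matvec(A, x))]       # residual y - A x
        grad = [-2.0 * g + 2.0 * lam * xi                      # objective gradient
                for g, xi in zip(matvec(At, r), x)]
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x
```

    The regularizer is what keeps the ill-conditioned problem stable; in real parallel imaging the same role is played by the priors discussed above, and the iteration by conjugate-gradient-type solvers.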

  20. Parallel optical sampler

    Energy Technology Data Exchange (ETDEWEB)

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively; a plurality of optical delay elements providing n parallel delayed input optical sampling signals; n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals; and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  1. Neuronal circuits of fear extinction.

    Science.gov (United States)

    Herry, Cyril; Ferraguti, Francesco; Singewald, Nicolas; Letzkus, Johannes J; Ehrlich, Ingrid; Lüthi, Andreas

    2010-02-01

    Fear extinction is a form of inhibitory learning that allows for the adaptive control of conditioned fear responses. Although fear extinction is an active learning process that eventually leads to the formation of a consolidated extinction memory, it is a fragile behavioural state. Fear responses can recover spontaneously or subsequent to environmental influences, such as context changes or stress. Understanding the neuronal substrates of fear extinction is of tremendous clinical relevance, as extinction is the cornerstone of psychological therapy of several anxiety disorders and because the relapse of maladaptive fear and anxiety is a major clinical problem. Recent research has begun to shed light on the molecular and cellular processes underlying fear extinction. In particular, the acquisition, consolidation and expression of extinction memories are thought to be mediated by highly specific neuronal circuits embedded in a large-scale brain network including the amygdala, prefrontal cortex, hippocampus and brain stem. Moreover, recent findings indicate that the neuronal circuitry of extinction is developmentally regulated. Here, we review emerging concepts of the neuronal circuitry of fear extinction, and highlight novel findings suggesting that the fragile phenomenon of extinction can be converted into a permanent erasure of fear memories. Finally, we discuss how research on genetic animal models of impaired extinction can further our understanding of the molecular and genetic bases of human anxiety disorders.

  2. Electrical signals polarize neuronal organelles, direct neuron migration, and orient cell division.

    Science.gov (United States)

    Yao, Li; McCaig, Colin D; Zhao, Min

    2009-09-01

    During early brain development, the axis of division of neuronal precursor cells is regulated tightly and can determine whether neurons remain in the germinal layers or migrate away. Directed neuronal migration depends on the establishment of cell polarity, and cells are polarized dynamically in response to extracellular signals. Endogenous electric fields (EFs) orient cell division and direct migration of a variety of cell types. Here, we show that cell division of cultured hippocampal cells (neuron-like cells and glial-like cells) is oriented strikingly by an applied EF, which also directs neuronal migration. Directed migration involves polarization of the leading neurite, of the microtubule-associated protein MAP-2 and of the Golgi apparatus and the centrosome, all of which reposition asymmetrically to face the cathode. Pharmacological inhibition of Rho-associated coiled-coil forming protein kinases (ROCK) and phosphoinositide 3-kinase decreased leading-neurite orientation and Golgi polarization in the neurons in response to an EF and, in parallel, decreased the directedness of EF-guided neuronal migration. This work demonstrates that the axis of hippocampal cell division, the establishment of neuronal polarity, the polarization of intracellular structures, and the direction of neuronal migration are all regulated by an extracellular electrical cue.

  3. SPINning parallel systems software.

    Energy Technology Data Exchange (ETDEWEB)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-03-15

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and the connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  4. Coarrays for Parallel Processing

    Science.gov (United States)

    Snyder, W. Van

    2011-01-01

    The design of the Coarray feature of Fortran 2008 was guided by answering the question "What is the smallest change required to convert Fortran to a robust and efficient parallel language?" Two fundamental issues that any parallel programming model must address are work distribution and data distribution. In order to coordinate work distribution and data distribution, methods for communication and synchronization must be provided. Although originally designed for Fortran, the Coarray paradigm has stimulated development in other languages. X10, Chapel, UPC, Titanium, and class libraries being developed for C++ have the same conceptual framework.

  5. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book introduces slightly more advanced concepts and helps you apply these techniques in the real world. If you are an experienced Python programmer willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. A basic knowledge of Python development is required to get the most out of this book.
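A minimal example of the kind of technique such a book covers (this sketch is ours, not taken from the book): distributing a CPU-bound function over worker processes with the standard-library multiprocessing module.

```python
# Minimal sketch: parallelizing a CPU-bound function with
# multiprocessing.Pool from the Python standard library.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    # The __main__ guard is required so that spawned workers can
    # re-import this module without re-running the pool setup.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

`Pool.map` splits the input among the workers and preserves the input order in its result list.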

  6. Fitting Neuron Models to Spike Trains

    Science.gov (United States)

    Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925

  7. ADAPTATION OF PARALLEL VIRTUAL MACHINES MECHANISMS TO PARALLEL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Zafer DEMİR

    2001-02-01

    Full Text Available In this study, the Parallel Virtual Machine (PVM) is first reviewed. Since it is based upon parallel processing, it is similar in principle to parallel systems in terms of architecture. PVM is neither an operating system nor a programming language; it is a specific software tool that supports heterogeneous parallel systems, yet it takes advantage of the features of both to bring users close to parallel systems. Since tasks can be executed in parallel on parallel systems by PVM, there is an important similarity between PVM and distributed systems with multiple processors. In this study, the relations in question are examined using the master-slave programming technique. In conclusion, PVM is tested with a simple factorial computation on a distributed system to observe its adaptation to parallel architectures.
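The master-slave factorial experiment can be sketched as follows; this is an illustrative Python stand-in (using multiprocessing in place of PVM, with our own function names), not the study's code:

```python
# Master-slave factorial sketch: the master splits 1..n into ranges,
# each slave multiplies its range, and the master combines the parts.
from functools import reduce
from multiprocessing import Pool
from operator import mul

def partial_product(bounds):
    """Slave task: product of the integers in the closed range [lo, hi]."""
    lo, hi = bounds
    return reduce(mul, range(lo, hi + 1), 1)

def parallel_factorial(n, workers=4):
    """Master: distribute ranges, gather and combine partial products."""
    step = max(1, n // workers)
    chunks = [(i, min(i + step - 1, n)) for i in range(1, n + 1, step)]
    with Pool(workers) as pool:
        parts = pool.map(partial_product, chunks)
    return reduce(mul, parts, 1)
```

Call `parallel_factorial` under an `if __name__ == "__main__":` guard; the combination step mirrors the master collecting results from the slaves.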

  8. Shaping Neuronal Network Activity by Presynaptic Mechanisms.

    Directory of Open Access Journals (Sweden)

    Ayal Lavi

    2015-09-01

    Full Text Available Neuronal microcircuits generate oscillatory activity, which has been linked to basic functions such as sleep, learning and sensorimotor gating. Although synaptic release processes are well known for their ability to shape the interaction between neurons in microcircuits, most computational models do not simulate the synaptic transmission process directly and hence cannot explain how changes in synaptic parameters alter neuronal network activity. In this paper, we present a novel neuronal network model that incorporates presynaptic release mechanisms, such as vesicle pool dynamics and calcium-dependent release probability, to model the spontaneous activity of neuronal networks. The model, which is based on modified leaky integrate-and-fire neurons, generates spontaneous network activity patterns, which are similar to experimental data and robust under changes in the model's primary gain parameters such as excitatory postsynaptic potential and connectivity ratio. Furthermore, it reliably recreates experimental findings and provides mechanistic explanations for data obtained from microelectrode array recordings, such as network burst termination and the effects of pharmacological and genetic manipulations. The model demonstrates how elevated asynchronous release, but not spontaneous release, synchronizes neuronal network activity and reveals that asynchronous release enhances utilization of the recycling vesicle pool to induce the network effect. The model further predicts a positive correlation between vesicle priming at the single-neuron level and burst frequency at the network level; this prediction is supported by experimental findings. Thus, the model is utilized to reveal how synaptic release processes at the neuronal level govern activity patterns and synchronization at the network level.
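A minimal sketch of the model class described, a leaky integrate-and-fire update plus a calcium-dependent release probability; all parameter values and the Hill-type form are assumptions for illustration, not the paper's fitted model:

```python
# Illustrative leaky integrate-and-fire step and an assumed Hill-type
# calcium dependence of vesicle release probability.
def lif_step(v, i_syn, dt=1.0, tau=20.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-65.0):
    """Advance the membrane potential one step; return (v, spiked)."""
    dv = (-(v - v_rest) + i_syn) * dt / tau
    v += dv
    if v >= v_thresh:
        return v_reset, True  # spike: reset the membrane potential
    return v, False

def release_probability(ca, ca_half=1.0, steepness=4.0):
    """Hill-type calcium dependence of release (assumed functional form)."""
    return ca**steepness / (ca**steepness + ca_half**steepness)
```

For example, `lif_step(-65.0, 300.0)` drives the neuron past threshold in a single step, returning the reset potential and a spike flag.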

  9. Effective stimuli for constructing reliable neuron models.

    Directory of Open Access Journals (Sweden)

    Shaul Druckmann

    2011-08-01

    Full Text Available The rich dynamical nature of neurons poses major conceptual and technical challenges for unraveling their nonlinear membrane properties. Traditionally, various current waveforms have been injected at the soma to probe neuron dynamics, but the rationale for selecting specific stimuli has never been rigorously justified. The present experimental and theoretical study proposes a novel framework, inspired by learning theory, for objectively selecting the stimuli that best unravel the neuron's dynamics. The efficacy of stimuli is assessed in terms of their ability to constrain the parameter space of biophysically detailed conductance-based models that faithfully replicate the neuron's dynamics as attested by their ability to generalize well to the neuron's response to novel experimental stimuli. We used this framework to evaluate a variety of stimuli in different types of cortical neurons, ages and animals. Despite their simplicity, a set of stimuli consisting of step and ramp current pulses outperforms synaptic-like noisy stimuli in revealing the dynamics of these neurons. The general framework that we propose paves a new way for defining, evaluating and standardizing effective electrical probing of neurons and will thus lay the foundation for a much deeper understanding of the electrical nature of these highly sophisticated and non-linear devices and of the neuronal networks that they compose.

  10. Brain state-dependent neuronal computation

    Directory of Open Access Journals (Sweden)

    Pascale eQuilichini

    2012-10-01

    Full Text Available Neuronal firing pattern, which includes both the frequency and the timing of action potentials, is a key component of information processing in the brain. Although the relationship between neuronal output (the firing pattern) and function (during a task/behavior) is not fully understood, there is now considerable evidence that a given neuron can show very different firing patterns according to brain state. Thus, such neurons assembled into neuronal networks generate different rhythms (e.g. theta, gamma, sharp wave ripples), which signal specific brain states (e.g. learning, sleep). This implies that a given neuronal network, defined by its hard-wired physical connectivity, can support different brain state-dependent activities through the modulation of its functional connectivity. Here, we review data demonstrating that not only the firing pattern, but also the functional connections between neurons, can change dynamically. We then explore the possible mechanisms of such versatility, focusing on the intrinsic properties of neurons and the properties of the synapses they establish, and on how these can be modified by neuromodulators, i.e. the different means neurons can use to switch from one mode of communication to another.

  11. Neuronal Analogues of Conditioning Paradigms

    Science.gov (United States)

    1984-04-24

    Although the mechanisms of interneuronal communication have been well established, the changes underlying most forms of learning have thus far eluded...stimulating electrodes on one of the connectives was adjusted so as to produce a small excitatory postsynaptic potential ( EPSP ) in the impaled cell...two stimuli would constitute a neuronal analogue of conditioning by producing an increased EPSP in response to the test stimulus alone. If so, then

  12. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
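For reference, the k-means++ seeding rule that the record parallelizes can be sketched serially in a few lines (1-D points for brevity; this is our illustration, not the record's C++ code):

```python
# Serial sketch of k-means++ seed selection: each new seed is drawn
# with probability proportional to its squared distance from the
# nearest already-chosen seed.
import random

def kmeans_pp_seeds(points, k, rng=random.Random(0)):
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        # Squared distance from each point to its nearest seed.
        d2 = [min((p - s) ** 2 for s in seeds) for p in points]
        # Weighted sampling by cumulative sum.
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                seeds.append(p)
                break
    return seeds
```

Because already-chosen seeds have zero weight, the procedure spreads the initial centers apart, which is the improvement over plain random seeding.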

  13. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  14. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  15. Parallel and Distributed Databases

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Kemper, Alfons; Prieto, Manuel; Szalay, Alex

    2009-01-01

    Euro-Par Topic 5 addresses data management issues in parallel and distributed computing. Advances in data management (storage, access, querying, retrieval, mining) are inherent to current and future information systems. Today, accessing large volumes of information is a reality: Data-intensive appli

  16. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  17. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    2001-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were implemented

  18. Implementation of Parallel Algorithms

    Science.gov (United States)

    1991-09-30

    Lecture Notes in Computer Science, Warwick, England, July 16-20... Lecture Notes in Computer Science, Springer-Verlag, Bangalore, India, December 1990. J. Reif, J. Canny, and A. Page, "An Exact Algorithm for Kinodynamic...Parallel Algorithms and its Impact on Computational Geometry, in Optimal Algorithms, H. Djidjev editor, Springer-Verlag Lecture Notes in Computer Science

  19. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  20. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  1. Imagery May Arise from Associations Formed through Sensory Experience: A Network of Spiking Neurons Controlling a Robot Learns Visual Sequences in Order to Perform a Mental Rotation Task

    Science.gov (United States)

    McKinstry, Jeffrey L.; Fleischer, Jason G.; Chen, Yanqing; Gall, W. Einar; Edelman, Gerald M.

    2016-01-01

    Mental imagery occurs “when a representation of the type created during the initial phases of perception is present but the stimulus is not actually being perceived.” How does the capability to perform mental imagery arise? Extending the idea that imagery arises from learned associations, we propose that mental rotation, a specific form of imagery, could arise through the mechanism of sequence learning–that is, by learning to regenerate the sequence of mental images perceived while passively observing a rotating object. To demonstrate the feasibility of this proposal, we constructed a simulated nervous system and embedded it within a behaving humanoid robot. By observing a rotating object, the system learns the sequence of neural activity patterns generated by the visual system in response to the object. After learning, it can internally regenerate a similar sequence of neural activations upon briefly viewing the static object. This system learns to perform a mental rotation task in which the subject must determine whether two objects are identical despite differences in orientation. As with human subjects, the time taken to respond is proportional to the angular difference between the two stimuli. Moreover, as reported in humans, the system fills in intermediate angles during the task, and this putative mental rotation activates the same pathways that are activated when the system views physical rotation. This work supports the proposal that mental rotation arises through sequence learning and the idea that mental imagery aids perception through learned associations, and suggests testable predictions for biological experiments. PMID:27653977

  2. [Neuronal network].

    Science.gov (United States)

    Langmeier, M; Maresová, D

    2005-01-01

    The function of the central nervous system is based on mutual relations among nerve cells. Description of nerve cells and their processes, including their contacts, was enabled by improvements in the optical properties of the microscope and by the development of impregnation techniques; it is associated with the names of Antoni van Leeuwenhoek (1632-1723), J. Ev. Purkyne (1787-1869), Camillo Golgi (1843-1926), and Ramón y Cajal (1852-1934). The principal units of the neuronal network are the synapses. The term synapse was introduced into neurophysiology by Charles Scott Sherrington (1857-1952). The majority of interactions between nerve cells are mediated by neurotransmitters acting at receptors of the postsynaptic membrane or at autoreceptors of the presynaptic part of the synapse. Attachment of the vesicles to the presynaptic membrane and the release of the neurotransmitter into the synaptic cleft depend on the intracellular calcium concentration and on the presence of several proteins in the presynaptic element.

  3. Spiking Neurons for Analysis of Patterns

    Science.gov (United States)

    Huntsberger, Terrance

    2008-01-01

    Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological

  4. Effect of oxymatrine on learning and memory and hippocampal neurons in epileptic rats

    Institute of Scientific and Technical Information of China (English)

    张琳娜; 张丽霞; 白洁

    2012-01-01

    Objective To explore the effects of oxymatrine on learning and memory and on hippocampal neurons in epileptic rats induced by penicillin. Methods Forty-eight SD rats were randomly divided into a normal saline (NS) group, an epilepsy (EP) group, an oxymatrine (OXY) group and a phenobarbital sodium (PBS) group. Penicillin was injected intraperitoneally to establish the epilepsy model, and behavioral changes were observed. The time spent and the distance swum by the rats in regions A and C were recorded with a Morris water maze (MWM) video tracking system, spatial learning and memory were tested in the MWM, and morphological changes in hippocampal neurons were examined. Results The time spent and distance swum in region A were longer in the PBS and OXY groups than in the EP group (P < 0.05), and learning and memory in the OXY group were significantly improved. After penicillin-induced epilepsy, neuronal swelling and organelle damage were found; both were alleviated by OXY. Conclusion OXY could protect spatial learning and memory in epileptic rats.

  5. A congestion control scheme based on neuron reinforcement learning

    Institute of Scientific and Technical Information of China (English)

    周川; 狄东杰; 陈庆伟; 郭毓

    2011-01-01

    We propose an adaptive AQM algorithm based on neuron reinforcement learning (NRL) that uses the link rate and queue length as congestion indicators. The neuron parameters are adjusted online in response to changes in the network environment, so the algorithm maintains good queue-length stability and robustness against fluctuations in network load. The algorithm has a simple structure, is easy to implement, and does not depend on a model of the controlled plant. Simulation results show that it is particularly well suited to congestion control in complex, uncertain networks, and that it achieves better queue stability and robustness.

  6. Mechanisms underlying the social enhancement of vocal learning in songbirds.

    Science.gov (United States)

    Chen, Yining; Matheson, Laura E; Sakata, Jon T

    2016-06-14

    Social processes profoundly influence speech and language acquisition. Despite the importance of social influences, little is known about how social interactions modulate vocal learning. Like humans, songbirds learn their vocalizations during development, and they provide an excellent opportunity to reveal mechanisms of social influences on vocal learning. Using yoked experimental designs, we demonstrate that social interactions with adult tutors for as little as 1 d significantly enhanced vocal learning. Social influences on attention to song seemed central to the social enhancement of learning because socially tutored birds were more attentive to the tutor's songs than passively tutored birds, and because variation in attentiveness and in the social modulation of attention significantly predicted variation in vocal learning. Attention to song was influenced by both the nature and amount of tutor song: Pupils paid more attention to songs that tutors directed at them and to tutors that produced fewer songs. Tutors altered their song structure when directing songs at pupils in a manner that resembled how humans alter their vocalizations when speaking to infants, that was distinct from how tutors changed their songs when singing to females, and that could influence attention and learning. Furthermore, social interactions that rapidly enhanced learning increased the activity of noradrenergic and dopaminergic midbrain neurons. These data highlight striking parallels between humans and songbirds in the social modulation of vocal learning and suggest that social influences on attention and midbrain circuitry could represent shared mechanisms underlying the social modulation of vocal learning.

  7. To Parallelize or Not to Parallelize, Speed Up Issue

    CERN Document Server

    Elnashar, Alaa Ismail

    2011-01-01

    Running parallel applications requires special and expensive processing resources to obtain the required results within a reasonable time. Before parallelizing a serial application, some analysis is recommended to decide whether it will benefit from parallelization or not. In this paper we discuss the speedup gained from parallelization using the Message Passing Interface (MPI), weighing the overhead of parallelization against the parallel speedup gained. We also propose an experimental method to predict the speedup of MPI applications.
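A first-order model of the trade-off studied here is Amdahl's law with an added overhead term (treating communication cost as a constant is our simplifying assumption):

```python
# Amdahl's law with a constant overhead term: speedup is bounded by
# the serial fraction, and parallelization overhead erodes it further.
def amdahl_speedup(parallel_fraction, n_procs, overhead=0.0):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs + overhead)
```

With only half the work parallelizable, speedup stays below 2 no matter how many processors are added, which is why the pre-parallelization analysis recommended above matters.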

  8. Collisionless parallel shocks

    Science.gov (United States)

    Khabibrakhmanov, I. KH.; Galeev, A. A.; Galinskii, V. L.

    1993-01-01

    Consideration is given to a collisionless parallel shock based on solitary-type solutions of the modified derivative nonlinear Schroedinger equation (MDNLS) for parallel Alfven waves. The standard derivative nonlinear Schroedinger equation is generalized in order to include the possible anisotropy of the plasma distribution and higher-order Korteweg-de Vries-type dispersion. Stationary solutions of MDNLS are discussed. The anisotropic nature of 'adiabatic' reflections leads to the asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, nonzero heat flux appears near the front of the shock. It is shown that this causes the stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization.

  9. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
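The claimed two-phase scheme can be sketched sequentially (1-D grid, point objects, and our own function names; the patent's processors each run one share of these loops):

```python
# Sequential stand-in for the two-phase parallel grid population:
# phase 1 classifies each object into a grid portion, phase 2
# collects the objects bound to each portion.
def grid_portion(x, n_portions, grid_min=0.0, grid_max=1.0):
    """Index of the grid portion that contains coordinate x."""
    width = (grid_max - grid_min) / n_portions
    return min(int((x - grid_min) / width), n_portions - 1)

def populate(objects, n_portions):
    cells = [[] for _ in range(n_portions)]
    for x in objects:
        cells[grid_portion(x, n_portions)].append(x)
    return cells
```

In the parallel version, phase 1 splits `objects` among the processors and phase 2 splits `cells`, so each processor only writes into its own grid portion.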

  10. Parallel clustering with CFinder

    CERN Document Server

    Pollner, Peter; Vicsek, Tamas; 10.1142/S0129626412400014

    2012-01-01

    The amount of available data about complex systems is increasing every year, as measurements of larger and larger systems are collected and recorded. A natural representation of such data is given by networks, whose size follows the size of the original system. The current trend of multiple cores in computing infrastructures calls for a parallel reimplementation of earlier methods. Here we present the grid version of CFinder, which can locate overlapping communities in directed, weighted or undirected networks based on the clique percolation method (CPM). We show that the computation of the communities can be distributed among several CPUs or computers. Although switching to the parallel version does not necessarily lead to a gain in computing time, it definitely makes the community structure of extremely large networks accessible.

  11. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest, being fast, solid and precise. This work outlines the main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting some elements of its dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile elements are then derived by a rotation-matrix method. If a structural motor element consists of two moving elements that translate relative to one another, it is more convenient, for the drive train and especially for the dynamics, to represent the motor element as a single moving component. We thus have seven moving parts (the six motor elements, or feet, plus the mobile platform) and one fixed part.

  12. Ultrascalable petaflop parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton On Hudson, NY); Chiu, George (Cross River, NY); Cipolla, Thomas M. (Katonah, NY); Coteus, Paul W. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hall, Shawn (Pleasantville, NY); Haring, Rudolf A. (Cortlandt Manor, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kopcsay, Gerard V. (Yorktown Heights, NY); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY); Takken, Todd (Brewster, NY)

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  13. Homology, convergence and parallelism.

    Science.gov (United States)

    Ghiselin, Michael T

    2016-01-05

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. © 2015 The Author(s).

  14. Robotic Episodic Learning and Behaviour Control Integrated with Neuron Stimulation Mechanism

    Institute of Scientific and Technical Information of China (English)

    刘冬; 丛明; 高森; 韩晓东; 杜宇

    2014-01-01

    To address the curse of dimensionality and the perceptual aliasing that arise in robot behaviour control under uncertainty, a framework called the episodic memory-driven Markov decision process (EM-MDP) is proposed by introducing a neuron stimulation mechanism, in order to achieve self-learning of environmental experience and behaviour control under multi-source uncertainty. Firstly, an episodic memory model is built, and an activation and organization mechanism for state neurons within events is proposed based on cognitive neuroscience. Secondly, self-learning of episodic memory is realized by utilizing adaptive resonance theory (ART) and sparse distributed memory (SDM) through Hebbian rules, and a robot behaviour control strategy is established using neuron synaptic potentials: the robot can evaluate past event sequences, predict the current state and plan the desired behaviour. Finally, experimental results show that the model framework and control strategy achieve the objectives of robot behaviour control in common scenes.
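The Hebbian step used for memory self-learning can be sketched in a few lines. This is an illustrative toy with assumed activity patterns and a made-up learning rate, not the paper's actual ART/SDM implementation:

```python
# Toy Hebbian update for associating state-neuron activity (illustrative
# only; the EM-MDP model combines this rule with ART and SDM, not shown).
def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen each weight where pre- and post-synaptic units co-fire."""
    return [wi + lr * p * q for wi, (p, q) in zip(w, zip(pre, post))]

w = [0.0, 0.0, 0.0]
pre = [1, 0, 1]    # presynaptic state-neuron activity (assumed pattern)
post = [1, 1, 0]   # postsynaptic activity (assumed pattern)
w = hebbian_update(w, pre, post)
# only w[0] grows: it is the only position where both units are active
```

Repeated co-activation of the same state pair keeps strengthening that synapse, which is how an event sequence becomes retrievable from the memory.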

  15. Parallel programming with MPI

    Energy Technology Data Exchange (ETDEWEB)

    Tatebe, Osamu [Electrotechnical Lab., Tsukuba, Ibaraki (Japan)

    1998-03-01

    MPI is a practical, portable, efficient and flexible standard for message passing, which has been implemented on most MPPs and networks of workstations by machine vendors, universities and national laboratories. MPI avoids specifying how operations will take place and avoids superfluous work, achieving efficiency as well as portability, and is also designed to encourage overlapping communication and computation to hide communication latencies. This presentation briefly explains the MPI standard, and comments on efficient parallel programming to improve performance. (author)
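The overlap of communication and computation mentioned above (in MPI terms: MPI_Isend/MPI_Irecv, compute, then MPI_Wait) can be mimicked in plain Python with a future. This is a schematic sketch only; the delay and payload are invented and no real MPI call is made:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send(data):
    """Stand-in for a nonblocking send: pretend the wire is slow."""
    time.sleep(0.05)
    return len(data)

with ThreadPoolExecutor(max_workers=1) as pool:
    req = pool.submit(send, b"halo-exchange bytes")  # like MPI_Isend
    partial = sum(i * i for i in range(10_000))      # useful work overlaps
    sent = req.result()                              # like MPI_Wait
```

The local computation runs while the "transfer" is in flight, which is exactly the latency-hiding pattern the standard encourages.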

  16. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial; users who are new to circuit simulation are better served by the Xyce Users Guide.

  17. Implementation of Parallel Algorithms

    Science.gov (United States)

    1993-06-30

    their socia ’ relations or to achieve some goals. For example, we define a pair-wise force law of i epulsion and attraction for a group of identical...quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media . The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Pu’ ishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  18. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  19. Parallel Algorithms Derivation

    Science.gov (United States)

    1989-03-31

    Lecture Notes in Computer Science, Warwick, England, July 16-20, 1990. J. Reif and J. Storer, "A Parallel Architecture for...", The 10th Conference on Foundations of Software Technology and Theoretical Computer Science, Lecture Notes in Computer Science, Springer-Verlag...Geometry, in Optimal Algorithms, H. Djidjev editor, Springer-Verlag Lecture Notes in Computer Science 401, 1989, 1.8.. J. Reif, R. Paturi, and S.

  20. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  1. Parallel Feature Extraction System

    Institute of Scientific and Technical Information of China (English)

    MAHuimin; WANGYan

    2003-01-01

    Very high speed image processing is needed in some applications, especially for weapons. In this paper, a high-speed image feature extraction system with a parallel structure was implemented in a Complex Programmable Logic Device (CPLD); it can perform image feature extraction in a few microseconds, almost without delay. The design is presented through the application instance of a flying plane, whose infrared image includes two kinds of features: geometric shape features in the binary image and temperature features in the gray image, and feature extraction operates on both. Edge and area are the two most important features of an image. An angle often exists at the junction of different parts of the target's image, indicating where one area ends and another begins. These three key features can form the whole representation of an image, so this parallel feature extraction system includes three processing modules: edge extraction, angle extraction, and area extraction. The parallel structure is realized by a group of processors: every detector is followed by one processor, every channel has the same circuit form, and all work together, synchronized by a common clock, to perform feature extraction. The system has a simple structure, small volume, high speed, and good stability against noise. It can be used in battlefield recognition systems.
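Two of the three feature modules can be sketched in software terms (area, and a simple 4-neighbour edge test on a binary image; angle extraction is omitted for brevity). This is a hypothetical illustration of what the CPLD computes in hardware, not the actual circuit:

```python
def area(img):
    """Area feature: count of set pixels in a binary image (list of rows)."""
    return sum(sum(row) for row in img)

def edge_pixels(img):
    """Edge feature: set pixels with at least one 4-neighbour that is 0
    (or that sit on the image border)."""
    h, w = len(img), len(img[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not img[ny][nx]:
                    edges.add((y, x))
                    break
    return edges

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
```

In the hardware system these two computations, plus angle extraction, run in parallel modules fed by the same detector stream.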

  2. The Parallel C Preprocessor

    Directory of Open Access Journals (Sweden)

    Eugene D. Brooks III

    1992-01-01

    Full Text Available We describe a parallel extension of the C programming language designed for multiprocessors that provide a facility for sharing memory between processors. The programming model was initially developed on conventional shared memory machines with small processor counts such as the Sequent Balance and Alliant FX/8, but has more recently been used on a scalable massively parallel machine, the BBN TC2000. The programming model is split-join rather than fork-join. Concurrency is exploited to use a fixed number of processors more efficiently rather than to exploit more processors as in the fork-join model. Team splitting, a mechanism to split the team of processors executing a code into subteams to handle parallel subtasks, is used to provide an efficient mechanism to exploit nested concurrency. We have found the split-join programming model to have an inherent implementation advantage, compared to the fork-join model, when the number of processors in a machine becomes large.
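Team splitting, as described above, divides a fixed set of processors into subteams instead of spawning new workers. A minimal sketch, assuming a simple round-robin split (the real PCP construct is a language feature, not a library call):

```python
def split_team(team, n_subteams):
    """Split a fixed team of processor ids into near-equal subteams,
    the split-join analogue of spawning workers in fork-join."""
    return [team[i::n_subteams] for i in range(n_subteams)]

team = list(range(8))              # 8 processors, fixed for the whole run
left, right = split_team(team, 2)  # two subteams for two parallel subtasks
# after both subtasks finish, the subteams re-join into the full team
```

The point of the split-join model is visible here: the total processor count never changes, so nested concurrency costs a partition, not a spawn.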

  3. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti-parallel TFO strand was modified with Y with one or two insertions at the end of the TFO strand, the thermal stability was increased 1.2 °C and 3 °C at pH 7.2, respectively, whereas one insertion in the middle of the TFO strand decreased the thermal stability 1.4 °C compared to the wild-type oligonucleotide... chain, especially at the end of the TFO strand. On the other hand, the thermal stability of the anti-parallel triplex was dramatically decreased when the TFO strand was modified with the LNA monomer analog Z in the middle of the TFO strand (ΔTm = -9.1 °C). Also, the thermal stability decreased...

  4. Precise synaptic efficacy alignment suggests potentiation dominated learning

    Directory of Open Access Journals (Sweden)

    Christoph eHartmann

    2016-01-01

    Full Text Available Recent evidence suggests that parallel synapses from the same axonal branch onto the same dendritic branch have almost identical strength. It has been proposed that this alignment is only possible through learning rules that integrate activity over long time spans. However, learning mechanisms such as spike-timing-dependent plasticity (STDP) are commonly assumed to be temporally local. Here, we propose that the combination of temporally local STDP and a multiplicative synaptic normalization mechanism is sufficient to explain the alignment of parallel synapses. To address this issue, we introduce three increasingly complex models: First, we model the idealized interaction of STDP and synaptic normalization in a single neuron as a simple stochastic process and derive analytically that the alignment effect can be described by a so-called Kesten process. From this we can derive that synaptic efficacy alignment requires potentiation-dominated learning regimes. We verify these conditions in a single-neuron model with independent spiking activities but more realistic synapses. As expected, we only observe synaptic efficacy alignment for long-term potentiation-biased STDP. Finally, we explore how well the findings transfer to recurrent neural networks where the learning mechanisms interact with the correlated activity of the network. We find that due to the self-reinforcing correlations in recurrent circuits under STDP, alignment occurs for both long-term potentiation- and depression-biased STDP, because the learning will be potentiation dominated in both cases due to the potentiating events induced by correlated activity. This is in line with recent results demonstrating a dominance of potentiation over depression during waking and normalization during sleep.
This leads us to predict that individual spine pairs will be more similar in the morning than they are after sleep deprivation. In conclusion, we show that synaptic normalization in conjunction with
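The Kesten-process intuition can be illustrated with a toy simulation: two parallel synapses receive identical plasticity events, with additive potentiation, multiplicative depression, and multiplicative normalization keeping summed efficacy constant. All rates and magnitudes below are invented for illustration, not the paper's fitted values:

```python
import random

random.seed(0)
w1, w2, target = 0.2, 0.8, 1.0     # parallel synapses start unequal
for _ in range(500):
    if random.random() < 0.8:      # potentiation-dominated regime
        w1 += 0.05                 # additive potentiation: the same event
        w2 += 0.05                 # hits both parallel synapses
    else:
        w1 *= 0.9                  # multiplicative depression
        w2 *= 0.9
    scale = target / (w1 + w2)     # multiplicative normalization
    w1 *= scale
    w2 *= scale
# each potentiation-plus-normalization step shrinks |w1 - w2|: alignment
```

In this toy model only potentiation events contract the difference (the add-then-rescale step), while depression plus normalization leaves it unchanged, which is why alignment requires a potentiation-dominated regime.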

  5. Mirror neurons: functions, mechanisms and models.

    Science.gov (United States)

    Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael A

    2013-04-12

    Mirror neurons for manipulation fire both when the animal manipulates an object in a specific way and when it sees another animal (or the experimenter) perform an action that is more or less similar. Such neurons were originally found in macaque monkeys, in the ventral premotor cortex, area F5 and later also in the inferior parietal lobule. Recent neuroimaging data indicate that the adult human brain is endowed with a "mirror neuron system," putatively containing mirror neurons and other neurons, for matching the observation and execution of actions. Mirror neurons may serve action recognition in monkeys as well as humans, whereas their putative role in imitation and language may be realized in human but not in monkey. This article shows the important role of computational models in providing sufficient and causal explanations for the observed phenomena involving mirror systems and the learning processes which form them, and underlines the need for additional circuitry to lift up the monkey mirror neuron circuit to sustain the posited cognitive functions attributed to the human mirror neuron system. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  6. Plastic changes to dendritic spines on layer V pyramidal neurons are involved in the rectifying role of the prefrontal cortex during the fast period of motor learning.

    Science.gov (United States)

    González-Tapia, David; Martínez-Torres, Nestor I; Hernández-González, Marisela; Guevara, Miguel Angel; González-Burgos, Ignacio

    2016-02-01

    The prefrontal cortex participates in the rectification of information related to motor activity that favors motor learning. Dendritic spine plasticity is involved in the modifications of motor patterns that underlie both motor activity and motor learning. To study this association in more detail, adult male rats were trained over six days in an acrobatic motor learning paradigm and they were subjected to a behavioral evaluation on each day of training. Also, a Golgi-based morphological study was carried out to determine the spine density and the proportion of the different spine types. In the learning paradigm, the number of errors diminished as motor training progressed. Concomitantly, spine density increased on days 1 and 3 of training, particularly reflecting an increase in the proportion of thin (day 1), stubby (day 1) and branched (days 1, 2 and 5) spines. Conversely, mushroom spines were less prevalent than in the control rats on days 5 and 6, as were stubby spines on day 6, together suggesting that this plasticity might enhance motor learning. The increase in stubby spines on day 1 suggests a regulation of excitability related to the changes in synaptic input to the prefrontal cortex. The plasticity to thin spines observed during the first 3 days of training could be related to the active rectification induced by the information relayed to the prefrontal cortex -as the behavioral findings indeed showed-, which in turn could be linked to the lower proportion of mushroom and stubby spines seen in the last days of training.

  7. General artificial neuron

    Science.gov (United States)

    Degeratu, Vasile; Schiopu, Paul; Degeratu, Stefania

    2007-05-01

    In this paper the authors present a model of an artificial neuron named the general artificial neuron. Depending on the application, this neuron can change its own number of inputs, the type of its inputs (from excitatory to inhibitory or vice versa), the synaptic weights, the threshold, and the type of its intensifying functions. It is realized in optoelectronic technology. A model of the general McCulloch-Pitts neuron, also in optoelectronic technology, is shown. The advantage of these neurons is considerable, because different applications can be solved with the same neural network built from them, named the general neural network.
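The reconfigurability described above can be sketched as a McCulloch-Pitts-style unit whose weights and threshold are mutable at run time. The interface below is hypothetical; the paper's neuron is an optoelectronic device, not software:

```python
class GeneralNeuron:
    """Sketch of a 'general' neuron: signed weights (excitatory > 0,
    inhibitory < 0) and the threshold can be changed after construction."""

    def __init__(self, weights, threshold):
        self.weights = list(weights)
        self.threshold = threshold

    def fire(self, inputs):
        s = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if s >= self.threshold else 0

n = GeneralNeuron([1, 1], threshold=2)  # behaves as a two-input AND
n.weights[1] = -1                       # flip input 2 to inhibitory
n.threshold = 1                         # now computes x1 AND NOT x2
```

Reusing one unit for different logic functions, as done here by mutating the same object, is the "same network, different application" idea the abstract describes.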

  8. COEXPRESSION OF FOS IMMUNOREACTIVITY IN PROTEIN-KINASE (PKC-GAMMA)-POSITIVE NEURONS - QUANTITATIVE-ANALYSIS OF A BRAIN REGION INVOLVED IN LEARNING

    NARCIS (Netherlands)

    AMBALAVANAR, R; VANDERZEE, EA; BOLHUIS, JJ; MCCABE, BJ; HORN, G

    1993-01-01

    The expression of the gamma protein kinase C isoenzyme (PKCgamma) and of the c-fos immediate early gene protein product Fos in the intermediate and medial hyperstriatum ventrale (IMHV) of day-old chicks was determined immunocytochemically. Previous research has shown that (a) there is a learning-rel

  9. Co-expression of Fos immunoreactivity in protein kinase (PKCγ)-positive neurones : quantitative analysis of a brain region involved in learning

    NARCIS (Netherlands)

    Ambalavanar, R.; Zee, E.A. van der; Bolhuis, J.J.; McCabe, B.J.; Horn, G.

    1993-01-01

    The expression of the gamma protein kinase C isoenzyme (PKCγ) and of the c-fos immediate early gene protein product Fos in the intermediate and medial hyperstriatum ventrale (IMHV) of day-old chicks was determined immunocytochemically. Previous research has shown that (a) there is a learning-related

  10. Lightweight Specifications for Parallel Correctness

    Science.gov (United States)

    2012-12-05

    series (series), encryption and decryption (crypt), and LU factorization (lufact) — as well as a parallel molecular dynamic simulator (moldyn), ray...111, 57, 132]). The PJ benchmarks include an app computing a Monte Carlo approximation of π (pi), a parallel cryptographic key cracking app (keysearch3...an app for parallel rendering Mandelbrot Set images (mandelbrot), and a parallel branch-and-bound search for optimal phylogenetic trees (phylogeny

  11. Architectural Adaptability in Parallel Programming

    Science.gov (United States)

    1991-05-01

    AD-A247 516. Architectural Adaptability in Parallel Programming, Lawrence Alan Crowl, Technical Report 381, May 1991, University of Rochester, Computer Science. Submitted in Partial Fulfillment of the...in the development of their programs. In applying abstraction to parallel programming, we can use abstractions to represent potential parallelism

  12. Parallel Architectures and Bioinspired Algorithms

    CERN Document Server

    Pérez, José; Lanchares, Juan

    2012-01-01

    This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to both specialists in Bioinspired Algorithms, Parallel and Distributed Computing, as well as computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.

  13. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
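List ranking, the core problem mentioned above, is classically solved by pointer jumping in O(log n) rounds. Below is a sequential Python simulation of that kernel; the PEM version additionally blocks the list to fit private caches, which is not shown:

```python
def list_rank(succ):
    """Rank each node of a linked list by pointer jumping (the classic
    PRAM-style kernel behind list ranking, simulated sequentially).
    succ[i] is the successor of node i, or i itself at the tail."""
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    for _ in range(n.bit_length()):      # O(log n) jumping rounds
        # synchronous update: read old rank/nxt, then replace both
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank                          # distance of each node to the tail
```

For the list 0 -> 1 -> 2 -> 3 (node 3 is the tail), the ranks are the distances [3, 2, 1, 0]; each round doubles the distance a pointer spans, which is where the logarithmic round count comes from.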

  14. Massively Parallel Genetics.

    Science.gov (United States)

    Shendure, Jay; Fields, Stanley

    2016-06-01

    Human genetics has historically depended on the identification of individuals whose natural genetic variation underlies an observable trait or disease risk. Here we argue that new technologies now augment this historical approach by allowing the use of massively parallel assays in model systems to measure the functional effects of genetic variation in many human genes. These studies will help establish the disease risk of both observed and potential genetic variants and help overcome the problem of "variants of uncertain significance." Copyright © 2016 by the Genetics Society of America.

  15. Parallel Eclipse Project Checkout

    Science.gov (United States)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to checkout the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a request to checkout for each plug-in in the feature has been inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to checkout now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code.
It can be applied to any
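The thread-pool pattern PEPC uses can be sketched with Python's concurrent.futures; the checkout function and plug-in names below are hypothetical stand-ins for real version-control operations:

```python
from concurrent.futures import ThreadPoolExecutor

def checkout(plugin):
    """Hypothetical stand-in for checking out one plug-in from a server."""
    return f"{plugin}: done"

plugins = ["core", "ui", "net", "sim"]  # as if parsed from the feature xml
with ThreadPoolExecutor(max_workers=4) as pool:   # configurable pool size
    results = list(pool.map(checkout, plugins))   # checkouts run in parallel
```

Because real checkouts are network-bound, the threads overlap their waits rather than competing for CPU, which is where a 30-minutes-to-5-minutes speedup can come from.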

  16. CSM parallel structural methods research

    Science.gov (United States)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  17. Automatically tracking neurons in a moving and deforming brain

    CERN Document Server

    Nguyen, Jeffrey P; Plummer, George S; Shaevitz, Joshua W; Leifer, Andrew M

    2016-01-01

    Advances in optical neuroimaging techniques now allow neural activity to be recorded with cellular resolution in awake and behaving animals. Brain motion in these recordings poses a unique challenge: the location of individual neurons must be tracked in 3D over time to accurately extract single-neuron activity traces. Recordings from small invertebrates like C. elegans are especially challenging because they undergo very large brain motion and deformation during animal movement. Here we present an automated computer vision pipeline to reliably track populations of neurons with single-neuron resolution in the brain of a freely moving C. elegans undergoing large motion and deformation. 3D volumetric fluorescent images of the animal's brain are straightened, aligned and registered, and the locations of neurons in the images are found via segmentation. Each neuron is then assigned an identity using a new time-independent machine-learning approach we call Neuron Registration Vector Encoding. In this approach, non-r...

  18. Applied Parallel Metadata Indexing

    Energy Technology Data Exchange (ETDEWEB)

    Jacobi, Michael R [Los Alamos National Laboratory

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
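The per-attribute index the backend builds can be sketched with plain dictionaries. The record fields and paths below are invented; the real tool stores this in per-user MongoDB collections behind a FUSE layer:

```python
# Toy per-attribute index over file metadata records (names hypothetical).
records = [
    {"path": "/archive/run1.dat", "owner": "alice", "tag": "climate"},
    {"path": "/archive/run2.dat", "owner": "alice", "tag": "plasma"},
]

index = {}  # attribute -> value -> list of matching paths
for rec in records:
    for attr, val in rec.items():
        if attr != "path":
            index.setdefault(attr, {}).setdefault(val, []).append(rec["path"])

def search(attr, value):
    """Look up files by one metadata attribute, roughly as the FUSE layer
    would translate a directory listing into a database query."""
    return index.get(attr, {}).get(value, [])
```

Indexing every attribute up front is what makes tag-based lookups fast compared with walking 170 million file paths.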

  19. Fast parallel event reconstruction

    CERN Document Server

    CERN. Geneva

    2010-01-01

    On-line processing of large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items in one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...
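Python cannot express SIMD directly, but the data-layout idea, one operation applied to several packed values at once, can be sketched. In the real fit the "lanes" are SSE/AVX register slots or CUDA threads; this toy merely mirrors the packing:

```python
def axpy_scalar(a, xs, ys):
    """One track parameter at a time: a*x + y per element."""
    return [a * x + y for x, y in zip(xs, ys)]

def axpy_packed(a, xs4, ys4):
    """The same update on a 4-wide packed 'register' (tuple of 4 lanes);
    hardware SIMD would retire all four lanes in one instruction."""
    return tuple(a * x + y for x, y in zip(xs4, ys4))

xs = [1.0, 2.0, 3.0, 4.0]
ys = [10.0, 10.0, 10.0, 10.0]
packed = axpy_packed(2.0, tuple(xs), tuple(ys))
```

Keeping data in such structure-of-arrays layouts is what lets vectorized code like the KF fit feed full registers on every cycle.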

  20. Theory of Parallel Mechanisms

    CERN Document Server

    Huang, Zhen; Ding, Huafeng

    2013-01-01

    This book contains mechanism analysis and synthesis. In mechanism analysis, a mobility methodology is first systematically presented. This methodology is based on the author's screw theory, proposed in 1997, whose generality and validity were only proved recently; mobility itself is a very complex issue, researched by various scientists over the last 150 years. The principle of kinematic influence coefficient and its latest developments are described. This principle is suitable for kinematic analysis of various 6-DOF and lower-mobility parallel manipulators. The singularities are classified from a new point of view, and progress in position-singularity and orientation-singularity is stated. In addition, the concept of over-determinate input is proposed and a new method of force analysis based on screw theory is presented. In mechanism synthesis, the synthesis of spatial parallel mechanisms is discussed, along with the synthesis method for difficult 4-DOF and 5-DOF symmetric mechanisms, which was first put forward by the a...

  1. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  2. Long-term memory and response generalization in mushroom body extrinsic neurons in the honeybee Apis mellifera.

    Science.gov (United States)

    Haehnel, Melanie; Menzel, Randolf

    2012-02-01

    Honeybees learn to associate an odor with sucrose reward under conditions that allow the monitoring of neural activity by imaging Ca(2+) transients in morphologically identified neurons. Here we report such recordings from mushroom body extrinsic neurons - which belong to a recurrent tract connecting the output of the mushroom body with its input, potentially providing inhibitory feedback - and other extrinsic neurons. The neurons' responses to the learned odor and two novel control odors were measured 24 h after learning. We found that calcium responses to the learned odor and an odor that was strongly generalized with it were enhanced compared with responses to a weakly generalized control. Thus, the physiological responses measured in these extrinsic neurons accurately reflect what is observed in behavior. We conclude that the recorded recurrent neurons feed information back to the mushroom body about the features of learned odor stimuli. Other extrinsic neurons may signal information about learned odors to different brain regions.

  3. Learning

    Directory of Open Access Journals (Sweden)

    Mohsen Laabidi

    2014-01-01

    Full Text Available Nowadays, learning technologies have transformed educational systems with the impressive progress of Information and Communication Technologies (ICT). Furthermore, when these technologies are available, affordable and accessible, they represent more than a transformation for people with disabilities: they represent real opportunities for access to an inclusive education and help to overcome the obstacles met in classical educational systems. In this paper, we will cover basic concepts of e-accessibility, universal design and assistive technologies, with a special focus on accessible e-learning systems. Then, we will present recent research works conducted in our research Laboratory LaTICE toward the development of an accessible online learning environment for persons with disabilities, from the design and specification step to the implementation. We will present, in particular, the accessible version "MoodleAcc+" of the well-known e-learning platform Moodle, as well as newly elaborated generic models and a range of tools for authoring and evaluating accessible educational content.

  4. Juvenile neuronal ceroid lipofuscinosis

    DEFF Research Database (Denmark)

    Ostergaard, J R; Hertz, Jens Michael

    1998-01-01

    Neuronal ceroid-lipofuscinosis is a group of neurodegenerative diseases which are characterized by an abnormal accumulation of lipopigment in neuronal and extraneuronal cells. The diseases can be differentiated into several subgroups according to age of onset, the clinical picture...

  5. Statins Increase Neurogenesis in the Dentate Gyrus, Reduce Delayed Neuronal Death in the Hippocampal CA3 Region, and Improve Spatial Learning in Rat after Traumatic Brain Injury

    OpenAIRE

    Lu, Dunyue; Qu, Changsheng; Goussev, Anton; Jiang, Hao; Lu, Chang; Schallert, Timothy; Mahmood, Asim; Chen, Jieli; Li, Yi; Chopp, Michael

    2007-01-01

    Traumatic brain injury (TBI) remains a major public health problem globally. Presently, there is no way to restore cognitive deficits caused by TBI. In this study, we seek to evaluate the effect of statins (simvastatin and atorvastatin) on the spatial learning and neurogenesis in rats subjected to controlled cortical impact. Rats were treated with atorvastatin and simvastatin 1 day after TBI and daily for 14 days. Morris water maze tests were performed during weeks 2 and 5 after TBI. Bromodeo...

  6. NEURON and Python

    OpenAIRE

    Michael Hines; Davison, Andrew P.; Eilif Muller

    2009-01-01

    The NEURON simulation program now allows Python to be used, alone or in combination with NEURON's traditional Hoc interpreter. Adding Python to NEURON has the immediate benefit of making available a very extensive suite of analysis tools written for engineering and science. It also catalyzes NEURON software development by offering users a modern programming tool that is recognized for its flexibility and power to create and maintain complex programs. At the same time, nothing is lost because ...

  7. C++ and Massively Parallel Computers

    Directory of Open Access Journals (Sweden)

    Daniel J. Lickly

    1993-01-01

    Full Text Available Our goal is to apply the software engineering advantages of object-oriented programming to the raw power of massively parallel architectures. To do this we have constructed a hierarchy of C++ classes to support the data-parallel paradigm. Feasibility studies and initial coding can be supported by any serial machine that has a C++ compiler. Parallel execution requires an extended Cfront, which understands the data-parallel classes and generates C* code. (C* is a data-parallel superset of ANSI C developed by Thinking Machines Corporation.) This approach provides potential portability across parallel architectures and leverages the existing compiler technology for translating data-parallel programs onto both SIMD and MIMD hardware.

  8. Computer Assisted Parallel Program Generation

    CERN Document Server

    Kawata, Shigeo

    2015-01-01

    Parallel computation is widely employed in scientific research, engineering activities and product development. Writing parallel programs is not always a simple task, depending on the problem being solved. Large-scale scientific computing, huge data analyses and precise visualizations, for example, require parallel computation, which in turn requires parallelization techniques. In this chapter a parallel program generation support is discussed, and a computer-assisted parallel program generation system, P-NCAS, is introduced. Computer-assisted problem solving is one of the key methods to promote innovation in science and engineering, and contributes to enriching our society and our life toward a programming-free environment in computing science. Research activities on problem solving environments (PSE) started in the 1970's with the aim of enhancing programming power. The P-NCAS is one of the PSEs; the PSE concept provides an integrated human-friendly computational software and hardware system to solve a target ...

  9. Parallel Ecological Speciation in Plants?

    Directory of Open Access Journals (Sweden)

    Katherine L. Ostevik

    2012-01-01

    Full Text Available Populations that have independently evolved reproductive isolation from their ancestors while remaining reproductively cohesive have undergone parallel speciation. A specific type of parallel speciation, known as parallel ecological speciation, is one of several forms of evidence for ecology's role in speciation. In this paper we search the literature for candidate examples of parallel ecological speciation in plants. We use four explicit criteria (independence, isolation, compatibility, and selection to judge the strength of evidence for each potential case. We find that evidence for parallel ecological speciation in plants is unexpectedly scarce, especially relative to the many well-characterized systems in animals. This does not imply that ecological speciation is uncommon in plants. It only implies that evidence from parallel ecological speciation is rare. Potential explanations for the lack of convincing examples include a lack of rigorous testing and the possibility that plants are less prone to parallel ecological speciation than animals.

  10. Parallel Computing in SCALE

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D [ORNL; Williams, Mark L [ORNL; Bowman, Stephen M [ORNL

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  11. Application of Game Theory to Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Alfons Schuster

    2010-01-01

    Full Text Available The paper is a theoretical investigation into the potential application of game theoretic concepts to neural networks (natural and artificial. The paper relies on basic models but the findings are more general in nature and therefore should apply to more complex environments. A major outcome of the paper is a learning algorithm based on game theory for a paired neuron system.

  12. Parallel Polarization State Generation

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated as a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially separated polarization components of a laser using a digital micromirror device and subsequently beam-combining them. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
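
    The serial-versus-parallel distinction in this abstract can be sketched in Jones calculus: a cascade of elements multiplies matrices, while split-modulate-recombine adds them. The sketch below is illustrative only (the polarizer matrices and the weights a, b are chosen for the example, not taken from the paper).

```python
import math

# 2x2 Jones matrices as nested lists; polarization states as 2-vectors.

def matmul(a, b):
    """Product of two 2x2 matrices (serial architecture: cascaded elements)."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(a, b):
    """Sum of two 2x2 matrices (parallel architecture: split, modulate, recombine)."""
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

# Horizontal and vertical polarizers.
H = [[1, 0], [0, 0]]
V = [[0, 0], [0, 1]]

# Serial: a horizontal polarizer followed by a vertical one blocks everything.
serial = matmul(V, H)

# Parallel: intensity-weighted H and V branches recombined; the weights a, b
# set an arbitrary output state from a single diagonal input.
a, b = 0.6, 0.8
parallel = matadd([[a * H[i][j] for j in range(2)] for i in range(2)],
                  [[b * V[i][j] for j in range(2)] for i in range(2)])

diag = [1 / math.sqrt(2), 1 / math.sqrt(2)]
out_serial = apply(serial, diag)      # no light passes the cascade
out_parallel = apply(parallel, diag)  # state proportional to (a, b)
```

    The sum form decouples the output state from any one element, which is why the abstract argues that modulator technology alone sets the performance limits.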

  13. Accelerated Parallel Texture Optimization

    Institute of Scientific and Technical Information of China (English)

    Hao-Da Huang; Xin Tong; Wen-Cheng Wang

    2007-01-01

    Texture optimization is a texture synthesis method that can efficiently reproduce various features of exemplar textures. However, its slow synthesis speed limits its usage in many interactive or real-time applications. In this paper, we propose a parallel texture optimization algorithm to run on GPUs. In our algorithm, k-coherence search and principal component analysis (PCA) are used for hardware acceleration, and two acceleration techniques are further developed to speed up our GPU-based texture optimization. With a reasonable precomputation cost, the online synthesis speed of our algorithm is 4000+ times faster than that of the original texture optimization algorithm and thus our algorithm is capable of interactive applications. The advantages of the new scheme are demonstrated by applying it to interactive editing of flow-guided synthesis.

  14. Parallel Polarization State Generation

    CERN Document Server

    She, Alan

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristi...

  15. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take.

  16. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far have not matched these efforts. This paper aims to be a small contribution to them. We propose an overview of parallel programming, parallel execution and collaborative systems.

  17. Learning, memory, and the role of neural network architecture.

    Science.gov (United States)

    Hermundstad, Ann M; Brown, Kevin S; Bassett, Danielle S; Carlson, Jean M

    2011-06-01

    The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.

  18. Learning, memory, and the role of neural network architecture.

    Directory of Open Access Journals (Sweden)

    Ann M Hermundstad

    2011-06-01

    Full Text Available The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.

  19. New Reflections on Mirror Neuron Research, the Tower of Babel, and Intercultural Education

    Science.gov (United States)

    Westbrook, Timothy Paul

    2015-01-01

    Studies of the human mirror neuron system demonstrate how mental mimicking of one's social environment affects learning. The mirror neuron system also has implications for intercultural encounters. This article explores the common ground between the mirror neuron system and theological principles from the Tower of Babel narrative and applies them…

  1. The Application of Big-Neuron Theory in Expert Systems

    Institute of Scientific and Technical Information of China (English)

    李涛

    2001-01-01

    This paper presents a new way of representing and acquiring knowledge, performing inference, and building an expert system based on big-neurons composed of expert knowledge from different fields, thereby establishing the fundamental theory and architecture of a big-neuron-based expert system. When big-neurons are used to build an expert system, it is unnecessary to organize a large number of production rules; the facts and rules of the expert system are already embedded in the big-neurons. Likewise, this method avoids extensive tree searching during logical reasoning, and the machine can self-organize and self-learn.

  2. Toxoplasma gondii infection in the brain inhibits neuronal degeneration and learning and memory impairments in a murine model of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Bong-Kwang Jung

    Full Text Available Immunosuppression is a characteristic feature of Toxoplasma gondii-infected murine hosts. The present study aimed to determine the effect of the immunosuppression induced by T. gondii infection on the pathogenesis and progression of Alzheimer's disease (AD in Tg2576 AD mice. Mice were infected with a cyst-forming strain (ME49 of T. gondii, and levels of inflammatory mediators (IFN-γ and nitric oxide, anti-inflammatory cytokines (IL-10 and TGF-β, neuronal damage, and β-amyloid plaque deposition were examined in brain tissues and/or in BV-2 microglial cells. In addition, behavioral tests, including the water maze and Y-maze tests, were performed on T. gondii-infected and uninfected Tg2576 mice. Results revealed that whereas the level of IFN-γ was unchanged, the levels of anti-inflammatory cytokines were significantly higher in T. gondii-infected mice than in uninfected mice, and in BV-2 cells treated with T. gondii lysate antigen. Furthermore, nitrite production from primary cultured brain microglial cells and BV-2 cells was reduced by the addition of T. gondii lysate antigen (TLA, and β-amyloid plaque deposition in the cortex and hippocampus of Tg2576 mouse brains was remarkably lower in T. gondii-infected AD mice than in uninfected controls. In addition, water maze and Y-maze test results revealed retarded cognitive capacities in uninfected mice as compared with infected mice. These findings demonstrate the favorable effects of the immunosuppression induced by T. gondii infection on the pathogenesis and progression of AD in Tg2576 mice.

  3. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it should be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
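
    The cycle-based master-slave pattern described above can be sketched as follows. This is an illustrative skeleton, not the paper's framework: threads stand in for the message-passing processes, and the names run_cycles and work_fn are hypothetical.

```python
import threading
import queue

def run_cycles(data, work_fn, n_workers=4, n_cycles=3):
    """Master-slave skeleton: each cycle, the master splits the data into
    chunks, the "slaves" process them, and the master gathers the results
    back in order before starting the next cycle."""
    for _ in range(n_cycles):
        tasks, results = queue.Queue(), queue.Queue()
        chunk = max(1, len(data) // n_workers)
        for i in range(0, len(data), chunk):
            tasks.put((i, data[i:i + chunk]))

        def worker():
            while True:
                try:
                    i, part = tasks.get_nowait()
                except queue.Empty:
                    return
                results.put((i, [work_fn(x) for x in part]))

        threads = [threading.Thread(target=worker) for _ in range(n_workers)]
        for t in threads: t.start()
        for t in threads: t.join()

        gathered = {}
        while not results.empty():
            i, part = results.get()
            gathered[i] = part
        # Reassemble the chunks in their original order for the next cycle.
        data = [x for i in sorted(gathered) for x in gathered[i]]
    return data

# Toy per-cycle operation reminiscent of an iterated image filter:
# three cycles of halving divide every value by 8.
out = run_cycles([8.0, 16.0, 32.0, 64.0], lambda x: x / 2, n_workers=2)
```

    The key property the framework exploits is visible here: the per-cycle work is independent across chunks, so only the scatter/gather steps need coordination.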

  4. Parallel coding schemes of whisker velocity in the rat's somatosensory system.

    Science.gov (United States)

    Lottem, Eran; Gugig, Erez; Azouz, Rony

    2015-03-15

    The function of rodents' whisker somatosensory system is to transform tactile cues, in the form of vibrissa vibrations, into neuronal responses. It is well established that rodents can detect numerous tactile stimuli and tell them apart. However, the transformation of tactile stimuli obtained through whisker movements to neuronal responses is not well-understood. Here we examine the role of whisker velocity in tactile information transmission and its coding mechanisms. We show that in anaesthetized rats, whisker velocity is related to the radial distance of the object contacted and its own velocity. Whisker velocity is accurately and reliably coded in first-order neurons in parallel, by both the relative time interval between velocity-independent first spike latency of rapidly adapting neurons and velocity-dependent first spike latency of slowly adapting neurons. At the same time, whisker velocity is also coded, although less robustly, by the firing rates of slowly adapting neurons. Comparing first- and second-order neurons, we find similar decoding efficiencies for whisker velocity using either temporal or rate-based methods. Both coding schemes are sufficiently robust and hardly affected by neuronal noise. Our results suggest that whisker kinematic variables are coded by two parallel coding schemes and are disseminated in a similar way through various brain stem nuclei to multiple brain areas. Copyright © 2015 the American Physiological Society.
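
    The latency-based scheme in this abstract can be illustrated with a toy model: a velocity-independent reference spike from a rapidly adapting (RA) neuron, and a slowly adapting (SA) neuron whose first-spike latency shrinks as velocity grows, so their interval carries the velocity. The 1/velocity form and all constants below are illustrative assumptions, not values fitted to the recorded data.

```python
# Toy first-spike latency code for whisker velocity.
RA_LATENCY_MS = 5.0  # velocity-independent reference spike (RA neuron)

def sa_latency_ms(velocity):
    """Assumed monotone encoding: higher velocity -> earlier SA spike."""
    return RA_LATENCY_MS + 20.0 / velocity

def encode(velocity):
    """Relative interval between the SA and RA first spikes."""
    return sa_latency_ms(velocity) - RA_LATENCY_MS

def decode(interval_ms):
    """Invert the assumed encoding to recover velocity downstream."""
    return 20.0 / interval_ms

v = 4.0  # whisker velocity, arbitrary units
assert abs(decode(encode(v)) - v) < 1e-9  # interval alone recovers velocity
```

    A rate-based readout of the SA neuron would be a second, parallel channel for the same variable, as the abstract describes.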

  5. Morphology and ontogeny of rat perirhinal cortical neurons.

    Science.gov (United States)

    Furtak, Sharon Christine; Moyer, James Russell; Brown, Thomas Huntington

    2007-12-10

    Golgi-impregnated neurons from rat perirhinal cortex (PR) were classified into one of 15 distinct morphological categories (N = 6,891). The frequency of neurons in each cell class was determined as a function of the layer of PR and the age of the animal, which ranged from postnatal day 0 (P0) to young adulthood (P45). The developmental appearance of Golgi-impregnated neurons conformed to the expected "inside-out" pattern of development, meaning that cells appeared in deep layers before superficial layers of PR. The relative frequencies of different cell types changed during the first 2 weeks of postnatal development. The largest cells, which were pyramidal and spiny multipolar neurons, appeared earliest. Aspiny stellate neurons were the last to appear. The total number of Golgi-impregnated neurons peaked at P10-12, corresponding to the time of eye-opening. This early increase in the number of impregnated neurons parallels observations in other cortical areas. The relative frequency of the 15 cell types remained constant between P14 and P45. The proportion of pyramidal neurons in PR (approximately 50%) was much smaller than is typical of neocortex (approximately 70%). A correspondingly larger proportion of PR neurons were nonpyramidal cells that are less common in neocortex. The relative frequency distribution of cell types creates an overall impression of considerable morphological diversity, which is arguably related to the particular manner in which this periallocortical brain region processes and stores information.

  6. Study of a new neuron

    CERN Document Server

    Adler, Stephen Louis; Weckel, J D

    1994-01-01

    We study a modular neuron alternative to the McCulloch-Pitts neuron that arises naturally in analog devices in which the neuron inputs are represented as coherent oscillatory wave signals. Although the modular neuron can compute XOR at the one neuron level, it is still characterized by the same Vapnik-Chervonenkis dimension as the standard neuron. We give the formulas needed for constructing networks using the new neuron and training them using back-propagation. A numerical study of the modular neuron on two data sets is presented, which demonstrates that the new neuron performs at least as well as the standard neuron.
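
    The claim that a modular neuron can compute XOR at the one-neuron level can be illustrated with a single unit whose response is periodic in its net input. The cosine response and period below are an illustrative choice, not necessarily the exact form used in the paper.

```python
import math

def modular_neuron(x, w, period=2.0):
    """A single unit with a periodic (modular) response to its net input,
    in the spirit of the modular alternative to the thresholded
    McCulloch-Pitts unit. Fires (returns 1) when the periodic response
    of the weighted sum is negative."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if math.cos(2 * math.pi * net / period) < 0 else 0

# One modular neuron computes XOR, which a single threshold unit cannot:
# net inputs 0, 1, 2 map to outputs 0, 1, 0.
w = (1.0, 1.0)
truth = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
for x, y in truth.items():
    assert modular_neuron(x, w) == y
```

    The periodic response makes the decision regions stripes rather than a single half-plane, which is what lets one unit separate XOR while, as the abstract notes, leaving the VC dimension unchanged.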

  7. Trajectories in parallel optics.

    Science.gov (United States)

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve the optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works parallel to the original system to achieve such an improvement. In this paper, after briefly introducing the all optics implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. This method is simulated on two space variant systems and reduces their system condition number from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third space invariant study case is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+013 to 34,526.
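
    The effect the abstract reports, adding an auxiliary response in parallel to improve the system matrix's condition number, can be shown on a toy 2x2 stand-in. The matrices and thresholds below are invented for illustration; they are not the paper's SVD-designed target PSF matrix.

```python
import math

def cond_2x2(m):
    """Condition number of a 2x2 matrix: ratio of its singular values,
    i.e. square roots of the eigenvalues of A^T A (closed form)."""
    a, b = m[0]
    c, d = m[1]
    p = a * a + c * c          # entries of A^T A
    q = a * b + c * d
    r = b * b + d * d
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    lam_max, lam_min = (tr + disc) / 2, (tr - disc) / 2
    return math.sqrt(lam_max / lam_min)

# An ill-conditioned "PSF-like" system: two nearly parallel rows.
A = [[1.0, 1.0], [1.0, 1.001]]
# Hypothetical auxiliary response added in parallel, nudging the rows apart.
aux = [[0.0, 0.0], [0.0, 0.5]]
B = [[A[i][j] + aux[i][j] for j in range(2)] for i in range(2)]

k_orig, k_aux = cond_2x2(A), cond_2x2(B)
# Adding the parallel term improves conditioning by orders of magnitude,
# which is what makes the restoration problem robust to noise.
```

    The paper's contribution is choosing that auxiliary term optically (via shifted trajectory responses) rather than numerically, but the noise-robustness argument is the same.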

  8. Memristors Empower Spiking Neurons With Stochasticity

    KAUST Repository

    Al-Shedivat, Maruan

    2015-06-01

    Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning algorithms. However, such systems must have two crucial features: 1) the neurons should follow a specific behavioral model, and 2) stochastic spiking should be implemented efficiently for it to be scalable. This paper proposes a memristor-based stochastically spiking neuron that fulfills these requirements. First, the analytical model of the memristor is enhanced so it can capture the behavioral stochasticity consistent with experimentally observed phenomena. The switching behavior of the memristor model is demonstrated to be akin to the firing of the stochastic spike response neuron model, the primary building block for probabilistic algorithms in spiking neural networks. Furthermore, the paper proposes a neural soma circuit that utilizes the intrinsic nondeterminism of memristive switching for efficient spike generation. The simulations and analysis of the behavior of a single stochastic neuron and a winner-take-all network built of such neurons and trained on handwritten digits confirm that the circuit can be used for building probabilistic sampling and pattern adaptation machinery in spiking networks. The findings constitute an important step towards scalable and efficient probabilistic neuromorphic platforms. © 2011 IEEE.
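
    The stochastic spike response neuron that the memristor circuit emulates can be sketched with an escape-noise model: the firing probability is a sigmoid of the membrane potential, and each timestep draws a spike from that probability. The threshold and slope parameters below are assumptions for illustration, not the paper's fitted device parameters.

```python
import math
import random

def spike_prob(u, theta=1.0, beta=4.0):
    """Escape-noise firing probability of a stochastic spike response
    neuron: a sigmoid of membrane potential u (theta, beta assumed)."""
    return 1.0 / (1.0 + math.exp(-beta * (u - theta)))

def stochastic_neuron(u, rng):
    """One timestep: emit a spike with probability spike_prob(u), the role
    the memristor's nondeterministic switching plays in the hardware soma."""
    return rng.random() < spike_prob(u)

# Empirical firing rates track the sigmoid: probabilities at u = 0, 1, 2
# are about 0.018, 0.5 and 0.982 respectively.
rng = random.Random(0)
rates = {u: sum(stochastic_neuron(u, rng) for _ in range(10000)) / 10000
         for u in (0.0, 1.0, 2.0)}
```

    Sampling-based inference schemes in spiking networks rely on exactly this kind of well-characterized, tunable randomness, which is why the behavioral stochasticity of the memristor model matters.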

  9. Parallel Programming in the Age of Ubiquitous Parallelism

    Science.gov (United States)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.

  10. Parallel Backtracking with Answer Memoing for Independent And-Parallelism

    CERN Document Server

    de Guzmán, Pablo Chico; Carro, Manuel; Hermenegildo, Manuel V

    2011-01-01

    Goal-level Independent and-parallelism (IAP) is exploited by scheduling for simultaneous execution two or more goals which will not interfere with each other at run time. This can be done safely even if such goals can produce multiple answers. The most successful IAP implementations to date have used recomputation of answers and sequentially ordered backtracking. While in principle simplifying the implementation, recomputation can be very inefficient if the granularity of the parallel goals is large enough and they produce several answers, while sequentially ordered backtracking limits parallelism. And, despite the expected simplification, the implementation of the classic schemes has proved to involve complex engineering, with the consequent difficulty for system maintenance and extension, while still frequently running into the well-known trapped goal and garbage slot problems. This work presents an alternative parallel backtracking model for IAP and its implementation. The model features parallel out-of-or...

  11. An FPGA-Based Silicon Neuronal Network with Selectable Excitability Silicon Neurons.

    Science.gov (United States)

    Li, Jing; Katori, Yuichi; Kohno, Takashi

    2012-01-01

    This paper presents a digital silicon neuronal network that simulates the nervous system and can execute intelligent tasks such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow us to tune the excitability of silicon neurons and are computationally efficient for hardware implementation. We adopt a mixed pipelined and parallel structure, together with shift operations, to build a sufficiently large and complex network without excessive hardware cost. The network, with 256 fully connected neurons, is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block handle data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases with the similarity between the input and the stored pattern. Synchronization of neurons is observed when a stored pattern is successfully retrieved.
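    The associative-memory behavior described above (retrieval of a stored pattern from a corrupted cue) can be sketched with a classic Hopfield-style network. This is a software analogue for illustration only, not the DSSN hardware model; pattern coding and update rule are standard Hopfield assumptions.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product weights for a Hopfield-style associative memory."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)   # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Synchronous sign updates until the state settles."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# One stored pattern (+1/-1 coded); a corrupted cue still retrieves it.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train_hopfield(stored)
cue = stored[0].copy()
cue[0] = -1                      # flip one bit of the cue
print(np.array_equal(recall(W, cue), stored[0]))  # True
```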

  12. Neurons in Primary Motor Cortex Engaged During Action Observation

    Science.gov (United States)

    Dushanova, Juliana; Donoghue, John

    2010-01-01

    Neurons in higher cortical areas appear to become active during action observation, either by mirroring observed actions (termed mirror neurons) or by eliciting mental rehearsal of observed motor acts. We report the existence of neurons responding to viewed actions in primary motor cortex (MI), an area generally considered to initiate and guide movement performance. Multielectrode recordings in monkeys performing or observing a well-learned step-tracking task showed that approximately half of the MI neurons active when monkeys performed the task were also active when they observed the action being performed by a human. These ‘view’ neurons were spatially intermingled with ‘do’ neurons, active only during movement performance. Simultaneously recorded ‘view’ neurons comprised two groups: ∼38% retained the same preferred direction (PD) and timing during performance and viewing, while the remainder (62%) changed their PDs and time lag during viewing compared with performance. Nevertheless, population activity during viewing was sufficient to predict the direction and trajectory of viewed movements as the action unfolded, although less accurately than during performance. ‘View’ neurons became less active and contained poorer representations of the action when viewing only subcomponents of the task. MI ‘view’ neurons thus appear to reflect aspects of a learned movement when it is observed in others, and form part of a broadly engaged set of cortical areas routinely responding to learned behaviors. These findings suggest that viewing a learned action elicits replay of aspects of MI activity needed to perform the observed action, and could additionally reflect processing related to understanding, learning, or mentally rehearsing the action. PMID:20074212

  13. Separate groups of dopamine neurons innervate caudate head and tail encoding flexible and stable value memories.

    Science.gov (United States)

    Kim, Hyoung F; Ghazizadeh, Ali; Hikosaka, Okihide

    2014-01-01

    Dopamine (DA) neurons are thought to be critical for reward value-based learning by modifying synaptic transmissions in the striatum. Yet, different regions of the striatum seem to guide different kinds of learning. Do DA neurons contribute to the regional differences of the striatum in learning? As a first step to answer this question, we examined whether the head and tail of the caudate nucleus of the monkey (Macaca mulatta) receive inputs from the same or different DA neurons. We chose these caudate regions because we previously showed that caudate head neurons learn values of visual objects quickly and flexibly, whereas caudate tail neurons learn object values slowly but retain them stably. Here we confirmed the functional difference by recording single neuronal activity while the monkey performed the flexible and stable value tasks, and then injected retrograde tracers in the functional domains of caudate head and tail. The projecting dopaminergic neurons were identified using tyrosine hydroxylase immunohistochemistry. We found that two groups of DA neurons in the substantia nigra pars compacta project largely separately to the caudate head and tail. These groups of DA neurons were mostly separated topographically: head-projecting neurons were located in the rostral-ventral-medial region, while tail-projecting neurons were located in the caudal-dorsal-lateral regions of the substantia nigra. Furthermore, they showed different morphological features: tail-projecting neurons were larger and less circular than head-projecting neurons. Our data raise the possibility that different groups of DA neurons selectively guide learning of flexible (short-term) and stable (long-term) memories of object values.

  14. Separate groups of dopamine neurons innervate caudate head and tail encoding flexible and stable value memories

    Directory of Open Access Journals (Sweden)

    Hyoung F Kim

    2014-10-01

    Full Text Available Dopamine neurons are thought to be critical for reward value-based learning by modifying synaptic transmissions in the striatum. Yet, different regions of the striatum seem to guide different kinds of learning. Do dopamine neurons contribute to the regional differences of the striatum in learning? As a first step to answer this question, we examined whether the head and tail of the caudate nucleus of the monkey (Macaca mulatta) receive inputs from the same or different dopamine neurons. We chose these caudate regions because we previously showed that caudate head neurons learn values of visual objects quickly and flexibly, whereas caudate tail neurons learn object values slowly but retain them stably. Here we confirmed the functional difference by recording single neuronal activity while the monkey performed the flexible and stable value tasks, and then injected retrograde tracers in the functional domains of caudate head and tail. The projecting dopaminergic neurons were identified using tyrosine hydroxylase immunohistochemistry. We found that two groups of dopamine neurons in the substantia nigra pars compacta project largely separately to the caudate head and tail. These groups of dopamine neurons were mostly separated topographically: head-projecting neurons were located in the rostral-ventral-medial region, while tail-projecting neurons were located in the caudal-dorsal-lateral regions of the substantia nigra. Furthermore, they showed different morphological features: tail-projecting neurons were larger and less circular than head-projecting neurons. Our data raise the possibility that different groups of dopamine neurons selectively guide learning of flexible (short-term) and stable (long-term) memories of object values.

  15. Stereological estimates of dopaminergic, GABAergic and glutamatergic neurons in the ventral tegmental area, substantia nigra and retrorubral field in the rat

    OpenAIRE

    Nair-Roberts, R.G.; Chatelain-Badie, S.D.; Benson, E.; White-Cooper, H; BOLAM, J. P.; Ungless, M.A.

    2008-01-01

    Midbrain dopamine neurons in the ventral tegmental area, substantia nigra and retrorubral field play key roles in reward processing, learning and memory, and movement. Within these midbrain regions and admixed with the dopamine neurons, are also substantial populations of GABAergic neurons that regulate dopamine neuron activity and have projection targets similar to those of dopamine neurons. Additionally, there is a small group of putative glutamatergic neurons within the ventral tegmental a...

  16. Parallel Adaptive Mesh Refinement

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].

  17. Noise and Neuronal Heterogeneity

    CERN Document Server

    Barber, Michael J

    2010-01-01

    We consider signal transduction in a simple neuronal model featuring intrinsic noise. The presence of noise limits the precision of neural responses and impacts the quality of neural signal transduction. We assess the signal transduction quality in relation to the level of noise, and show it to be maximized by a non-zero level of noise, analogous to the stochastic resonance effect. The quality enhancement occurs for a finite range of stimuli to a single neuron; we show how to construct networks of neurons that extend the range. The range increases more rapidly with network size when we make use of heterogeneous populations of neurons with a variety of thresholds, rather than homogeneous populations of neurons all with the same threshold. The limited precision of neural responses thus can have a direct effect on the optimal network structure, with diverse functional properties of the constituent neurons supporting an economical information processing strategy that reduces the metabolic costs of handling a broad...
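    The claim that a heterogeneous population of thresholds covers a wider stimulus range than a homogeneous one can be illustrated numerically. The threshold values, noise level, and "informative response" band below are illustrative assumptions, not values from the paper.

```python
import math

def firing_prob(stimulus, threshold, noise=0.15):
    """P(fire) for a threshold unit with additive Gaussian noise."""
    z = (stimulus - threshold) / (noise * math.sqrt(2))
    return 0.5 * (1.0 + math.erf(z))

def usable_range(thresholds, lo=0.1, hi=0.9, grid=None):
    """Width of the stimulus interval where the mean population
    response is informative (strictly between lo and hi)."""
    grid = grid or [i / 1000 for i in range(1001)]
    ok = [s for s in grid
          if lo < sum(firing_prob(s, t) for t in thresholds) / len(thresholds) < hi]
    return max(ok) - min(ok) if ok else 0.0

homogeneous = [0.5, 0.5, 0.5]       # all units share one threshold
heterogeneous = [0.2, 0.5, 0.8]     # thresholds spread over the range
print(usable_range(heterogeneous) > usable_range(homogeneous))  # True
```

The spread thresholds stagger the units' sigmoidal response curves, so the population mean stays informative over a wider stimulus interval.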

  18. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel imports is currently an urgent one. Legalizing parallel imports in Russia is expedient; this conclusion is based on an analysis of opposing expert opinions. At the same time, the negative consequences of such a decision must be considered and remedies applied to minimize them.

  19. Parallel Computers in Signal Processing

    Directory of Open Access Journals (Sweden)

    Narsingh Deo

    1985-07-01

    Full Text Available Signal processing often requires a great deal of raw computing power for which it is important to take a look at parallel computers. The paper reviews various types of parallel computer architectures from the viewpoint of signal and image processing.

  20. Parallel context-free languages

    DEFF Research Database (Denmark)

    Skyum, Sven

    1974-01-01

    The relation between the family of context-free languages and the family of parallel context-free languages is examined in this paper. It is proved that the families are incomparable. Finally, we prove that the family of languages of finite index is contained in the family of parallel context-free languages.

  1. Distinct roles for direct and indirect pathway striatal neurons in reinforcement

    OpenAIRE

    Kravitz, Alexxai V.; Tye, Lynne D.; Kreitzer, Anatol C.

    2012-01-01

    Dopamine signaling is implicated in reinforcement learning, but the neural substrates targeted by dopamine are poorly understood. Here, we bypassed dopamine signaling itself and tested how optogenetic activation of dopamine D1- or D2-receptor-expressing striatal projection neurons influenced reinforcement learning in mice. Stimulating D1-expressing neurons induced persistent reinforcement, whereas stimulating D2-expressing neurons induced transient punishment, demonstrating that activation of...

  2. Distinct roles for direct and indirect pathway striatal neurons in reinforcement.

    Science.gov (United States)

    Kravitz, Alexxai V; Tye, Lynne D; Kreitzer, Anatol C

    2012-06-01

    Dopamine signaling is implicated in reinforcement learning, but the neural substrates targeted by dopamine are poorly understood. We bypassed dopamine signaling itself and tested how optogenetic activation of dopamine D1 or D2 receptor–expressing striatal projection neurons influenced reinforcement learning in mice. Stimulating D1 receptor–expressing neurons induced persistent reinforcement, whereas stimulating D2 receptor–expressing neurons induced transient punishment, indicating that activation of these circuits is sufficient to modify the probability of performing future actions.

  3. Neurons and tumor suppressors.

    Science.gov (United States)

    Zochodne, Douglas W

    2014-08-20

    Neurons choose growth pathways with half-hearted reluctance, behavior that may be appropriate for maintaining fixed, long-lasting connections but not for regenerating them. We now recognize that intrinsic brakes on regrowth are widely expressed in these hesitant neurons and include classical tumor suppressor molecules. Here, we review how two such brakes, PTEN (phosphatase and tensin homolog deleted on chromosome 10) and retinoblastoma, emerge as new and exciting knockdown targets to enhance neuronal plasticity and improve outcomes from damage or disease.

  4. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas...... a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...... cerebellum during the symmetric movements. These findings suggest the presence of different error-monitoring mechanisms for symmetric and parallel movements. The results indicate that separate areas within PMd and SMA are responsible for both perception and performance of ongoing movements...

  5. Parallel contingency statistics with Titan.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines achieve. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
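    What makes contingency statistics parallelizable at all is that partial tables merge by simple addition, an associative, order-independent operation. The sketch below illustrates that map-then-merge structure in Python; it is not the VTK/Titan C++ engine, and the worker count and data are illustrative.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def local_contingency(rows):
    """Per-worker partial contingency table: counts of (x, y) pairs."""
    return Counter(rows)

def parallel_contingency(rows, n_workers=4):
    """Split the data, count locally, then merge the partial tables.
    The merge is a Counter sum, so it is associative and commutative,
    which is exactly what makes the computation parallelizable."""
    chunk = max(1, len(rows) // n_workers)
    parts = [rows[i:i + chunk] for i in range(0, len(rows), chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        tables = pool.map(local_contingency, parts)
    merged = Counter()
    for t in tables:
        merged += t
    return merged

data = [("a", 0), ("a", 1), ("b", 0), ("a", 0), ("b", 1), ("b", 1)]
print(parallel_contingency(data) == Counter(data))  # True
```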

  6. Worker flexibility in a parallel dual resource constrained job shop

    NARCIS (Netherlands)

    Yue, H.; Slomp, J.; Molleman, E.; van der Zee, D.J.

    2008-01-01

    In this paper we investigate cross-training policies in a dual resource constrained (DRC) parallel job shop where new part types are frequently introduced into the system. Each new part type introduction induces the need for workers to go through a learning curve. A cross-training policy relates to t

  7. Modelling small-patterned neuronal networks coupled to microelectrode arrays

    Science.gov (United States)

    Massobrio, Paolo; Martinoia, Sergio

    2008-09-01

    Cultured neurons coupled to planar substrates which exhibit 'well-defined' two-dimensional network architectures can provide valuable insights into cell-to-cell communication, network dynamics versus topology, and basic mechanisms of synaptic plasticity and learning. In the literature several approaches were presented to drive neuronal growth, such as surface modification by silane chemistry, photolithographic techniques, microcontact printing, microfluidic channel flow patterning, microdrop patterning, etc. This work presents a computational model fit for reproducing and explaining the dynamics exhibited by small-patterned neuronal networks coupled to microelectrode arrays (MEAs). The model is based on the concept of meta-neuron, i.e., a small spatially confined number of actual neurons which perform single macroscopic functions. Each meta-neuron is characterized by a detailed morphology, and the membrane channels are modelled by simple Hodgkin-Huxley and passive kinetics. The two main findings that emerge from the simulations can be summarized as follows: (i) the increasing complexity of meta-neuron morphology reflects the variations of the network dynamics as a function of network development; (ii) the dynamics displayed by the patterned neuronal networks considered can be explained by hypothesizing the presence of several short- and a few long-term distance interactions among small assemblies of neurons (i.e., meta-neurons).

  8. Computational properties of networks of synchronous groups of spiking neurons.

    Science.gov (United States)

    Dayhoff, Judith E

    2007-09-01

    We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
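    The XOR example from the abstract can be sketched with idealized units. Here each step unit stands in for a synchronous neuronal group whose activation (the fraction of the group firing in synchrony) is idealized to 0 or 1; the weights and thresholds are illustrative assumptions, not the paper's parameters.

```python
def step(x, theta):
    """A unit's activation: 1 if the summed input reaches threshold, else 0.
    In the synchronous-group reading, this idealizes the fraction of a
    neuronal group firing in synchrony to 0 or 1."""
    return 1 if x >= theta else 0

def xor(x1, x2):
    h_or  = step(x1 + x2, 1)        # group fires if either input group fires
    h_and = step(x1 + x2, 2)        # group fires only if both input groups fire
    return step(h_or - h_and, 1)    # output group: OR but not AND

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

In the biological version of the model, each connection weight would be the product of interconnection density, presynaptic group size, and postsynaptic potential height, with activation realized as the fraction of integrate-and-fire neurons firing synchronously.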

  9. Confounding the origin and function of mirror neurons.

    Science.gov (United States)

    Rizzolatti, Giacomo

    2014-04-01

    Cook et al. argue that mirror neurons originate in sensorimotor associative learning and that their function is determined by their origin. Both these claims are hard to accept. It is here suggested that a major role in the origin of the mirror mechanism is played by top-down connections rather than by associative learning.

  10. Altered learning, memory, and social behavior in type 1 taste receptor subunit 3 knock-out mice are associated with neuronal dysfunction.

    Science.gov (United States)

    Martin, Bronwen; Wang, Rui; Cong, Wei-Na; Daimon, Caitlin M; Wu, Wells W; Ni, Bin; Becker, Kevin G; Lehrmann, Elin; Wood, William H; Zhang, Yongqing; Etienne, Harmonie; van Gastel, Jaana; Azmi, Abdelkrim; Janssens, Jonathan; Maudsley, Stuart

    2017-07-07

    The type 1 taste receptor member 3 (T1R3) is a G protein-coupled receptor involved in sweet-taste perception. Besides the tongue, the T1R3 receptor is highly expressed in brain areas implicated in cognition, including the hippocampus and cortex. As cognitive decline is often preceded by significant metabolic or endocrinological dysfunctions regulated by the sweet-taste perception system, we hypothesized that a disruption of sweet-taste perception in the brain could have a key role in the development of cognitive dysfunction. To assess the importance of the sweet-taste receptors in the brain, we conducted transcriptomic and proteomic analyses of cortical and hippocampal tissues isolated from T1R3 knock-out (T1R3KO) mice. The effect of an impaired sweet-taste perception system on cognitive function was examined by analyzing synaptic integrity and performing behavioral tests on T1R3KO mice. Although T1R3KO mice did not present a metabolically disrupted phenotype, bioinformatic interpretation of the high-dimensionality data indicated a strong neurodegenerative signature associated with significant alterations in pathways involved in neuritogenesis, dendritic growth, and synaptogenesis. Furthermore, a significantly reduced dendritic spine density was observed in T1R3KO mice, together with alterations in learning and memory functions as well as sociability deficits. Taken together, our data suggest that the sweet-taste receptor system plays an important neurotrophic role in extralingual central nervous tissue that underpins synaptic function, memory acquisition, and social behavior. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  11. FEREBUS: Highly parallelized engine for kriging training.

    Science.gov (United States)

    Di Pasquale, Nicodemo; Bane, Michael; Davie, Stuart J; Popelier, Paul L A

    2016-11-05

    FFLUX is a novel force field based on quantum topological atoms, combining multipolar electrostatics with IQA intraatomic and interatomic energy terms. The program FEREBUS calculates the hyperparameters of models produced by the machine learning method kriging. Calculation of kriging hyperparameters (θ and p) requires the optimization of the concentrated log-likelihood L̂(θ,p). FEREBUS uses Particle Swarm Optimization (PSO) and Differential Evolution (DE) algorithms to find the maximum of L̂(θ,p). PSO and DE are two heuristic algorithms that each use a set of particles or vectors to explore the space in which L̂(θ,p) is defined, searching for the maximum. The log-likelihood is a computationally expensive function, which needs to be calculated several times during each optimization iteration. The cost scales quickly with the problem dimension and speed becomes critical in model generation. We present the strategy used to parallelize FEREBUS, and the optimization of L̂(θ,p) through PSO and DE. The code is parallelized in two ways. MPI parallelization distributes the particles or vectors among the different processes, whereas the OpenMP implementation takes care of the calculation of L̂(θ,p), which involves the calculation and inversion of a particular matrix, whose size increases quickly with the dimension of the problem. The run time shows a speed-up of 61 times going from single core to 90 cores with a saving, in one case, of ∼98% of the single core time. In fact, the parallelization scheme presented reduces computational time from 2871 s for a single core calculation, to 41 s for 90 cores calculation. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
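    The PSO scheme used to maximize the concentrated log-likelihood can be sketched in a few lines. The toy 1-D objective below merely stands in for L̂(θ,p), and all parameter values (inertia, acceleration constants, swarm size) are generic illustrative choices, not FEREBUS's settings.

```python
import random

def pso_maximize(f, lo, hi, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm optimization of a 1-D objective.
    Standard velocity update: inertia plus pulls toward each particle's
    personal best and the swarm's global best position."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                          # personal best positions
    pval = [f(xi) for xi in x]            # personal best values
    gbest = pbest[pval.index(max(pval))]  # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            v[i] = (0.7 * v[i]
                    + 1.5 * r1 * (pbest[i] - x[i])
                    + 1.5 * r2 * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))   # clamp to the domain
            val = f(x[i])
            if val > pval[i]:
                pbest[i], pval[i] = x[i], val
        gbest = pbest[pval.index(max(pval))]
    return gbest

# Stand-in for the log-likelihood: a smooth function with maximum at 2.0.
best = pso_maximize(lambda t: -(t - 2.0) ** 2, -10, 10)
print(abs(best - 2.0) < 0.01)
```

In FEREBUS the objective evaluation itself is the expensive step (it involves building and inverting a matrix), which is why the particle loop is distributed over MPI processes while OpenMP parallelizes each evaluation.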

  12. On the properties of input-to-output transformations in neuronal networks.

    Science.gov (United States)

    Olypher, Andrey; Vaillant, Jean

    2016-06-01

    Information processing in neuronal networks in certain important cases can be considered as maps of binary vectors, where ones (spikes) and zeros (no spikes) of input neurons are transformed into spikes and no spikes of output neurons. A simple but fundamental characteristic of such a map is how it transforms distances between input vectors into distances between output vectors. We advanced earlier known results by finding an exact solution to this problem for McCulloch-Pitts neurons. The obtained explicit formulas allow for detailed analysis of how the network connectivity and neuronal excitability affect the transformation of distances in neurons. As an application, we explored a simple model of information processing in the hippocampus, a brain area critically implicated in learning and memory. We found network connectivity and neuronal excitability parameter values that optimize discrimination between similar and distinct inputs. A decrease of neuronal excitability, which in biological neurons may be associated with decreased inhibition, impaired the optimality of discrimination.
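    The setting above, binary input vectors mapped through a layer of McCulloch-Pitts neurons with distances compared before and after, can be sketched as a simulation. Note the paper derives exact formulas for this transformation; the random connectivity and uniform threshold below are illustrative assumptions for a numerical look at the same map.

```python
import random

def mcculloch_pitts_layer(weights, thresholds, x):
    """Binary output of a McCulloch-Pitts layer for binary input x:
    unit j fires iff its weighted input sum reaches its threshold."""
    return tuple(
        1 if sum(w * xi for w, xi in zip(row, x)) >= th else 0
        for row, th in zip(weights, thresholds)
    )

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

rng = random.Random(1)
n_in, n_out = 8, 6
W = [[rng.choice([0, 1]) for _ in range(n_in)] for _ in range(n_out)]
theta = [2] * n_out            # uniform excitability (illustrative)

x = tuple(rng.choice([0, 1]) for _ in range(n_in))
y = tuple(1 - b if i == 0 else b for i, b in enumerate(x))  # flip one bit

d_in = hamming(x, y)
d_out = hamming(mcculloch_pitts_layer(W, theta, x),
                mcculloch_pitts_layer(W, theta, y))
print(d_in, d_out)   # how a unit input distance maps to output distance
```

Varying the connection density in `W` and the threshold `theta` shows how connectivity and excitability shape the distance transformation, the quantity the paper characterizes analytically.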

  13. Pacemaking Kisspeptin Neurons

    Science.gov (United States)

    Kelly, Martin J.; Zhang, Chunguang; Qiu, Jian; Rønnekleiv, Oline K.

    2013-01-01

    Kisspeptin (Kiss1) neurons are vital for reproduction. GnRH neurons express the kisspeptin receptor, GPR54, and kisspeptins potently stimulate the release of GnRH by depolarising GnRH neurons and inducing sustained action potential firing. As such, Kiss1 neurons may be the presynaptic pacemaker neurons in the hypothalamic circuitry that controls reproduction. There are at least two different populations of Kiss1 neurons: one in the rostral periventricular area (RP3V) that is stimulated by oestrogens, and the other in the arcuate nucleus that is inhibited by oestrogens. How each of these Kiss1 neuronal populations participates in the regulation of the reproductive cycle is currently under intense investigation. Based on electrophysiological studies in the guinea pig and mouse, Kiss1 neurons in general are capable of generating burst firing behavior. Essentially all Kiss1 neurons that have been studied thus far in the arcuate nucleus express the ion channels necessary for burst firing, which include hyperpolarization-activated, cyclic nucleotide-gated cation (HCN) channels and T-type calcium (Cav3.1) channels. Under voltage-clamp conditions, these channels produce distinct currents that, under current-clamp conditions, can generate burst firing behavior. The future challenge is to identify other key channels and synaptic inputs involved in the regulation of the firing properties of Kiss1 neurons, and the physiological regulation of the expression of these channels and receptors by oestrogens and other hormones. The ultimate goal is to understand how Kiss1 neurons control the different phases of GnRH neurosecretion and hence reproduction. PMID:23884368

  14. Parallel programming characteristics of a DSP-based parallel system

    Institute of Scientific and Technical Information of China (English)

    GAO Shu; GUO Qing-ping

    2006-01-01

    This paper first introduces the structure and working principle of a DSP-based parallel system, the parallel accelerating board, and the SHARC DSP chip. It then investigates the system's programming characteristics, especially its communication modes, discusses how to design parallel algorithms, and presents a domain-decomposition-based complete multi-grid parallel algorithm with virtual boundary forecast (VBF) for solving large-scale, complicated heat problems. Finally, the Mandelbrot set and a non-linear heat transfer equation for a ceramic/metal composite material are taken as examples to illustrate the implementation of the proposed algorithm. The results show that the solutions are highly efficient and achieve linear speedup.
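    The Mandelbrot example lends itself to the domain-decomposition idea because each subdomain (here, one image row) can be computed independently. The sketch below uses Python threads in place of the DSP boards; the grid bounds, image size, and iteration cap are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def mandelbrot_row(args):
    """Iteration counts for one image row: the subdomain assigned to a
    worker in a row-wise domain decomposition."""
    y, width, max_iter = args
    row = []
    for i in range(width):
        c = complex(-2.0 + 3.0 * i / (width - 1), y)  # real in [-2, 1]
        z, n = 0j, 0
        while abs(z) <= 2 and n < max_iter:
            z = z * z + c
            n += 1
        row.append(n)
    return row

def mandelbrot(height=5, width=16, max_iter=50, workers=4):
    ys = [-1.0 + 2.0 * j / (height - 1) for j in range(height)]  # imag in [-1, 1]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(mandelbrot_row,
                             [(y, width, max_iter) for y in ys]))

img = mandelbrot()
# c = 1 escapes quickly; c = 0 stays bounded for all iterations
print(img[2][15] < 50, img[2][10] == 50)  # True True
```

No worker needs another worker's subdomain, so the speedup is limited only by scheduling overhead; the heat-equation case in the paper additionally exchanges virtual boundary values between neighboring subdomains.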

  15. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
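    The template-comparison idea can be sketched as follows: block checksums of the current state are compared against the template checkpoint, and only the differing blocks need to be transmitted and stored. The block size and the equal-size assumption are illustrative simplifications; the patented scheme uses a parallel rsync variant with network broadcast.

```python
import hashlib

BLOCK = 64  # bytes per checkpoint block (illustrative)

def blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def delta_checkpoint(template, current):
    """Blocks of `current` whose checksum differs from the template's.
    Assumes template and current state have the same size."""
    tsums = [hashlib.sha256(b).hexdigest() for b in blocks(template)]
    return {i: b for i, b in enumerate(blocks(current))
            if hashlib.sha256(b).hexdigest() != tsums[i]}

def restore(template, delta):
    """Rebuild the full checkpoint from the template plus the delta."""
    out = blocks(template)
    for i, b in delta.items():
        out[i] = b
    return b"".join(out)

template = bytes(256)                    # previous checkpoint: 4 blocks
current = bytearray(template)
current[100:104] = b"spam"               # node state changed in one block
delta = delta_checkpoint(template, bytes(current))
print(len(delta), restore(template, delta) == bytes(current))  # 1 True
```

Only one of four blocks is transmitted here, which is the source of the bandwidth and storage savings the patent claims.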

  16. Neuronal Reward and Decision Signals: From Theories to Data.

    Science.gov (United States)

    Schultz, Wolfram

    2015-07-01

    Rewards are crucial objects that induce learning, approach behavior, choices, and emotions. Whereas emotions are difficult to investigate in animals, the learning function is mediated by neuronal reward prediction error signals which implement basic constructs of reinforcement learning theory. These signals are found in dopamine neurons, which emit a global reward signal to striatum and frontal cortex, and in specific neurons in striatum, amygdala, and frontal cortex projecting to select neuronal populations. The approach and choice functions involve subjective value, which is objectively assessed by behavioral choices eliciting internal, subjective reward preferences. Utility is the formal mathematical characterization of subjective value and a prime decision variable in economic choice theory. It is coded as utility prediction error by phasic dopamine responses. Utility can incorporate various influences, including risk, delay, effort, and social interaction. Appropriate for formal decision mechanisms, rewards are coded as object value, action value, difference value, and chosen value by specific neurons. Although all reward, reinforcement, and decision variables are theoretical constructs, their neuronal signals constitute measurable physical implementations and as such confirm the validity of these concepts. The neuronal reward signals provide guidance for behavior while constraining the free will to act.

  17. Neuronal Reward and Decision Signals: From Theories to Data

    Science.gov (United States)

    Schultz, Wolfram

    2015-01-01

    Rewards are crucial objects that induce learning, approach behavior, choices, and emotions. Whereas emotions are difficult to investigate in animals, the learning function is mediated by neuronal reward prediction error signals which implement basic constructs of reinforcement learning theory. These signals are found in dopamine neurons, which emit a global reward signal to striatum and frontal cortex, and in specific neurons in striatum, amygdala, and frontal cortex projecting to select neuronal populations. The approach and choice functions involve subjective value, which is objectively assessed by behavioral choices eliciting internal, subjective reward preferences. Utility is the formal mathematical characterization of subjective value and a prime decision variable in economic choice theory. It is coded as utility prediction error by phasic dopamine responses. Utility can incorporate various influences, including risk, delay, effort, and social interaction. Appropriate for formal decision mechanisms, rewards are coded as object value, action value, difference value, and chosen value by specific neurons. Although all reward, reinforcement, and decision variables are theoretical constructs, their neuronal signals constitute measurable physical implementations and as such confirm the validity of these concepts. The neuronal reward signals provide guidance for behavior while constraining the free will to act. PMID:26109341

  18. Massively parallel recording of unit and local field potentials with silicon-based electrodes.

    Science.gov (United States)

    Csicsvari, Jozsef; Henze, Darrell A; Jamieson, Brian; Harris, Kenneth D; Sirota, Anton; Barthó, Péter; Wise, Kensall D; Buzsáki, György

    2003-08-01

    Parallel recording of neuronal activity in the behaving animal is a prerequisite for our understanding of neuronal representation and storage of information. Here we describe the development of micro-machined silicon microelectrode arrays for unit and local field recordings. The two-dimensional probes with 96 or 64 recording sites provided high-density recording of unit and field activity with minimal tissue displacement or damage. The on-chip active circuit eliminated movement and other artifacts and greatly reduced the weight of the headgear. The precise geometry of the recording tips allowed for the estimation of the spatial location of the recorded neurons and for high-resolution estimation of extracellular current source density. Action potentials could be simultaneously recorded from the soma and dendrites of the same neurons. Silicon technology is a promising approach for high-density, high-resolution sampling of neuronal activity in both basic research and prosthetic devices.
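
    The high-resolution current source density estimation mentioned above is commonly computed as the negative, conductivity-scaled second spatial derivative of the local field potential along the probe. A minimal sketch, assuming equally spaced contacts and a constant tissue conductivity (both simplifying assumptions):

```python
def csd_second_difference(lfp, spacing_um, sigma=0.3):
    """Estimate 1D current source density from LFPs (volts) sampled at
    equally spaced depths, using the second-spatial-derivative method:
    CSD(z) ~ -sigma * d^2(phi)/dz^2. `sigma` is tissue conductivity (S/m),
    treated as a constant here for simplicity.
    """
    h = spacing_um * 1e-6  # contact spacing in metres
    csd = []
    for i in range(1, len(lfp) - 1):
        d2 = (lfp[i - 1] - 2.0 * lfp[i] + lfp[i + 1]) / (h * h)
        csd.append(-sigma * d2)
    return csd  # one estimate per interior contact
```

    The estimate is only available at interior contacts, which is why dense, precisely spaced recording sites matter for this analysis.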

  19. Interactive Parallel and Distributed Processing

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Pagliarini, Luigi

    2010-01-01

    We present the concept of interactive parallel and distributed processing, and the challenges that programmers face in designing interactive parallel and distributed systems. Specifically, we introduce the challenges that are met and the decisions that need to be taken with respect...... to distributedness, master dependency, software behavioural models, adaptive interactivity, feedback, connectivity, topology, island modeling, and user interaction. We introduce the system of modular interactive tiles as a tool for easy, fast, and flexible exploration of these issues, and through examples show how...... to implement interactive parallel and distributed processing with different software behavioural models such as open loop, randomness based, rule based, user interaction based, AI and ALife based software....

  20. Corticospinal mirror neurons.

    Science.gov (United States)

    Kraskov, A; Philipp, R; Waldert, S; Vigneswaran, G; Quallo, M M; Lemon, R N

    2014-01-01

    Here, we report the properties of neurons with mirror-like characteristics that were identified as pyramidal tract neurons (PTNs) and recorded in the ventral premotor cortex (area F5) and primary motor cortex (M1) of three macaque monkeys. We analysed the neurons' discharge while the monkeys performed active grasp of either food or an object, and also while they observed an experimenter carrying out a similar range of grasps. A considerable proportion of tested PTNs showed clear mirror-like properties (52% F5 and 58% M1). Some PTNs exhibited 'classical' mirror neuron properties, increasing activity for both execution and observation, while others decreased their discharge during observation ('suppression mirror-neurons'). These experiments not only demonstrate the existence of PTNs as mirror neurons in M1, but also reveal some interesting differences between M1 and F5 mirror PTNs. Although observation-related changes in the discharge of PTNs must reach the spinal cord and will include some direct projections to motoneurons supplying grasping muscles, there was no EMG activity in these muscles during action observation. We suggest that the mirror neuron system is involved in the withholding of unwanted movement during action observation. Mirror neurons are differentially recruited in the behaviour that switches rapidly between making your own movements and observing those of others.

  1. Culturing rat hippocampal neurons.

    Science.gov (United States)

    Audesirk, G; Audesirk, T; Ferguson, C

    2001-01-01

    Cultured neurons are widely used to investigate the mechanisms of neurotoxicity. Embryonic rat hippocampal neurons may be grown as described under a wide variety of conditions to suit differing experimental procedures, including electrophysiology, morphological analysis of neurite development, and various biochemical and molecular analyses.

  2. Imaging calcium in neurons.

    Science.gov (United States)

    Grienberger, Christine; Konnerth, Arthur

    2012-03-08

    Calcium ions generate versatile intracellular signals that control key functions in all types of neurons. Imaging calcium in neurons is particularly important because calcium signals exert their highly specific functions in well-defined cellular subcompartments. In this Primer, we briefly review the general mechanisms of neuronal calcium signaling. We then introduce the calcium imaging devices, including confocal and two-photon microscopy as well as miniaturized devices that are used in freely moving animals. We provide an overview of the classical chemical fluorescent calcium indicators and of the protein-based genetically encoded calcium indicators. Using application examples, we introduce new developments in the field, such as calcium imaging in awake, behaving animals and the use of calcium imaging for mapping single spine sensory inputs in cortical neurons in vivo. We conclude by providing an outlook on the prospects of calcium imaging for the analysis of neuronal signaling and plasticity in various animal models.

  3. NEURON and Python

    Directory of Open Access Journals (Sweden)

    Michael Hines

    2009-01-01

    The NEURON simulation program now allows Python to be used, alone or in combination with NEURON's traditional Hoc interpreter. Adding Python to NEURON has the immediate benefit of making available a very extensive suite of analysis tools written for engineering and science. It also catalyzes NEURON software development by offering users a modern programming tool that is recognized for its flexibility and power to create and maintain complex programs. At the same time, nothing is lost because all existing models written in Hoc, including GUI tools, continue to work without change and are also available within the Python context. An example of the benefits of Python availability is the use of the XML module in implementing NEURON's Import3D and CellBuild tools to read MorphML and NeuroML model specifications.

  4. NEURON and Python.

    Science.gov (United States)

    Hines, Michael L; Davison, Andrew P; Muller, Eilif

    2009-01-01

    The NEURON simulation program now allows Python to be used, alone or in combination with NEURON's traditional Hoc interpreter. Adding Python to NEURON has the immediate benefit of making available a very extensive suite of analysis tools written for engineering and science. It also catalyzes NEURON software development by offering users a modern programming tool that is recognized for its flexibility and power to create and maintain complex programs. At the same time, nothing is lost because all existing models written in Hoc, including graphical user interface tools, continue to work without change and are also available within the Python context. An example of the benefits of Python availability is the use of the xml module in implementing NEURON's Import3D and CellBuild tools to read MorphML and NeuroML model specifications.
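
    As an illustration of the kind of benefit described, Python's standard-library xml machinery can read a morphology description in a handful of lines. The fragment below is a toy MorphML-like document whose tag and attribute names are illustrative assumptions rather than the exact NeuroML schema, and `load_segments` is a hypothetical helper, not part of NEURON's Import3D:

```python
import xml.etree.ElementTree as ET

# A toy MorphML-like fragment; element and attribute names are
# illustrative assumptions, not the exact NeuroML/MorphML schema.
MORPHOLOGY_XML = """
<morphology>
  <segment id="0" name="soma">
    <distal x="0.0" y="0.0" z="0.0" diameter="20.0"/>
  </segment>
  <segment id="1" name="dend" parent="0">
    <distal x="50.0" y="0.0" z="0.0" diameter="2.0"/>
  </segment>
</morphology>
"""


def load_segments(xml_text):
    """Parse segments into plain dicts that a morphology-import tool
    could hand to a model builder."""
    root = ET.fromstring(xml_text)
    segments = []
    for seg in root.iter("segment"):
        distal = seg.find("distal")
        segments.append({
            "id": int(seg.get("id")),
            "name": seg.get("name"),
            "parent": seg.get("parent"),  # None for the root segment
            "xyz": tuple(float(distal.get(k)) for k in ("x", "y", "z")),
            "diam": float(distal.get("diameter")),
        })
    return segments
```

    The point is not the specific schema but that such parsing comes for free once Python is available alongside Hoc.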

  5. Vibrotactile coding capacities of spinocervical tract neurons in the cat.

    Science.gov (United States)

    Sahai, V; Mahns, D A; Perkins, N M; Robinson, L; Rowe, M J

    2006-03-01

    The response characteristics and tactile coding capacities of individual dorsal horn neurons, in particular those of the spinocervical tract (SCT), have been examined in the anesthetized cat. Twenty-one of 38 neurons studied were confirmed SCT neurons based on antidromic activation procedures. All had tactile receptive fields on the hairy skin of the hindlimb. Most (29/38) could also be activated transynaptically by electrical stimulation of the cervical dorsal columns, suggesting that a common set of tactile primary afferent fibers may provide the input for both the dorsal column-lemniscal pathway and for parallel ascending pathways such as the SCT. All but 3 of the 38 neurons studied displayed a pure dynamic sensitivity to controlled tactile stimuli but were unable to sustain their responsiveness throughout 1-s trains of vibration at frequencies exceeding 5-10 Hz. Stimulus-response relations revealed a very limited capacity of individual SCT neurons to signal, in a graded way, the intensity of a vibrotactile stimulus. Furthermore, because of their inability to respond cycle by cycle at vibration frequencies >5-10 Hz, these neurons were unable to provide any useful signal of vibration frequency beyond the very narrow bandwidth of approximately 5-10 Hz. Similar limitations were observed in the responsiveness of these neurons to repetitive forms of antidromic and transynaptic input generated by electrical stimulation of the spinal cord. In summary, the limitations on the vibrotactile bandwidth of SCT neurons, and on the precision and fidelity of their temporal signaling, suggest that SCT neurons serve as little more than coarse event detectors in tactile sensibility, in contrast to DCN neurons, whose bandwidth of vibrotactile responsiveness may extend beyond 400 Hz and is therefore approximately 40-50 times broader than that of SCT neurons.

  6. Targeting neuronal populations of the striatum

    Directory of Open Access Journals (Sweden)

    Pierre F Durieux

    2011-07-01

    The striatum is critically involved in motor and motivational functions. The dorsal striatum (caudate-putamen) is primarily implicated in motor control and the learning of habits and skills, whereas the ventral striatum, the nucleus accumbens (NAc), is essential for motivation and drug reinforcement. The GABA medium-sized spiny neurons (MSNs), about 95% of striatal neurons, which are targets of the cerebral cortex and the midbrain dopaminergic neurons, form two pathways. The dopamine D1 receptor-positive (D1R) striatonigral MSNs project to the medial globus pallidus and substantia nigra pars reticulata (direct pathway) and co-express D1R and substance P, whereas dopamine D2 receptor-positive (D2R) striatopallidal MSNs project to the lateral globus pallidus (indirect pathway) and co-express D2R, adenosine A2A receptor (A2AR), and enkephalin (Enk). The specific role of the two efferent pathways in motor and motivational control remained poorly understood until recently. Indeed, D1R striatonigral and D2R striatopallidal neurons are intermingled and morphologically indistinguishable and hence cannot be functionally dissociated with techniques such as chemical lesions or surgery. In view of the still-debated respective functions of projection D2R striatopallidal and D1R striatonigral neurons and of striatal interneurons, both in motor control and learning and in more cognitive processes such as motivation, the present review sums up the development of new models and techniques (BAC transgenesis, optogenetics, viral transgenesis) allowing the selective targeting of these striatal neuronal populations in the adult animal brain to understand their specific roles.

  7. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  8. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  9. Live Migration Of Parallel Applications

    OpenAIRE

    Romero, Raul Fabian

    2010-01-01

    Romero, Raul F. M.S., Purdue University, August, 2010. Live Migration of Parallel Applications. Major Professor: Thomas J. Hacker. It has been observed in engineering and scientific data centers that the absence of a clear separation between software and hardware can severely affect parallel applications. Applications that run across several nodes tend to be greatly affected because a single computational failure in one of the nodes often leads the entire application to produce ...

  10. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports the development of a new approach towards analytical liquid-liquid-liquid membrane extraction, termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  11. The Parallel Curriculum: A Design To Develop High Potential and Challenge High-Ability Learners.

    Science.gov (United States)

    Tomlinson, Carol Ann; Kaplan, Sandra N.; Renzulli, Joseph S.; Purcell, Jeanne; Leppien, Jann; Burns, Deborah

    This book presents a model of curriculum development for gifted students and offers four parallel approaches that focus on ascending intellectual demand as students develop expertise in learning. The parallel curriculum's four approaches include: (1) the core or basic curriculum; (2) the curriculum of connections, which expands on the core…

  12. Writing parallel programs that work

    CERN Document Server

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  13. An FPGA-based silicon neuronal network with selectable excitability silicon neurons

    Directory of Open Access Journals (Sweden)

    Jing Li

    2012-12-01

    This paper presents a digital silicon neuronal network which simulates the nervous system in living creatures and can execute intelligent tasks such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow the network to show rich dynamic behaviors and are computationally efficient for hardware implementation. We adopt a mixed pipeline and parallel structure and shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network, with 256 fully connected neurons, is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to accomplish data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases with the similarity between the input and the stored pattern. Synchronization of neurons is observed when a stored pattern is successfully retrieved.
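
    The associative-memory behavior described, retrieval of a stored pattern from a partial or noisy input, with success more likely the more similar the input is to the stored pattern, can be illustrated with a classical Hopfield-style network. This is a software analogue of the mechanism, not the spiking DSSN implementation itself:

```python
def train_hopfield(patterns):
    """Hebbian weights for binary (+1/-1) patterns, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w


def recall(w, state, steps=10):
    """Synchronous updates until a fixed point (or step limit); the network
    settles onto the stored pattern nearest the input."""
    n = len(state)
    for _ in range(steps):
        new = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
        if new == state:
            break
        state = new
    return state
```

    Corrupting one bit of a stored 8-bit pattern and running `recall` restores the original, mirroring the retrieval behavior the silicon network exhibits.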

  14. Democratic population decisions result in robust policy-gradient learning: a parametric study with GPU simulations.

    Directory of Open Access Journals (Sweden)

    Paul Richmond

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task and moreover architectural limitations raise the question of whether investing effort in this direction may be worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x up to 42x is provided versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search of learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated.

  15. Democratic population decisions result in robust policy-gradient learning: a parametric study with GPU simulations.

    Science.gov (United States)

    Richmond, Paul; Buesing, Lars; Giugliano, Michele; Vasilaki, Eleni

    2011-05-04

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task and moreover architectural limitations raise the question of whether investing effort in this direction may be worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x up to 42x is provided versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search of learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated.
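
    The "democratic" population vector readout described above can be sketched directly: each neuron votes for its preferred direction, weighted by its firing rate, and the decoded decision is the angle of the summed vector. A minimal stdlib-only illustration; the cosine tuning curve and eight-neuron layout are assumptions for the example, not the paper's network:

```python
import math


def population_vector(preferred_angles, rates):
    """'Democratic' readout: every neuron contributes a vector along its
    preferred direction, scaled by its firing rate; the decision is the
    angle of the vector sum."""
    x = sum(r * math.cos(a) for a, r in zip(preferred_angles, rates))
    y = sum(r * math.sin(a) for a, r in zip(preferred_angles, rates))
    return math.atan2(y, x)


# Example: 8 neurons with evenly spaced preferred directions; a rectified
# cosine tuning curve centred on 90 degrees should decode back to ~90 degrees.
angles = [i * 2.0 * math.pi / 8 for i in range(8)]
target = math.pi / 2
rates = [max(0.0, math.cos(a - target)) for a in angles]
decoded = population_vector(angles, rates)
```

    Because no neuron's vote dominates, the readout degrades gracefully when individual rates are noisy, which is one intuition for the robustness the study reports.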

  16. SIMPEL: Circuit model for photonic spike processing laser neurons

    CERN Document Server

    Shastri, Bhavin J; Tait, Alexander N; Wu, Ben; Prucnal, Paul R

    2014-01-01

    We propose an equivalent circuit model for photonic spike processing laser neurons with an embedded saturable absorber: a simulation model for photonic excitable lasers (SIMPEL). We show that by mapping the laser neuron rate equations into a circuit model, SPICE analysis can be used as an efficient and accurate engine for numerical calculations, capable of generalization to a variety of different laser neuron types found in the literature. The development of this model parallels the Hodgkin-Huxley model of neuron biophysics, a circuit framework which brought efficiency, modularity, and generalizability to the study of neural dynamics. We employ the model to study various signal-processing effects such as excitability with excitatory and inhibitory pulses, binary all-or-nothing response, and bistable dynamics.

  17. Enhancing depression mechanisms in midbrain dopamine neurons achieves homeostatic resilience.

    Science.gov (United States)

    Friedman, Allyson K; Walsh, Jessica J; Juarez, Barbara; Ku, Stacy M; Chaudhury, Dipesh; Wang, Jing; Li, Xianting; Dietz, David M; Pan, Nina; Vialou, Vincent F; Neve, Rachael L; Yue, Zhenyu; Han, Ming-Hu

    2014-04-18

    Typical therapies try to reverse pathogenic mechanisms. Here, we describe treatment effects achieved by enhancing depression-causing mechanisms in ventral tegmental area (VTA) dopamine (DA) neurons. In a social defeat stress model of depression, depressed (susceptible) mice display hyperactivity of VTA DA neurons, caused by an up-regulated hyperpolarization-activated current (I(h)). Mice resilient to social defeat stress, however, exhibit stable normal firing of these neurons. Unexpectedly, resilient mice had an even larger I(h), which was observed in parallel with increased potassium (K(+)) channel currents. Experimentally further enhancing I(h), or optogenetically increasing the hyperactivity of VTA DA neurons in susceptible mice, completely reversed depression-related behaviors, an antidepressant effect achieved through resilience-like, projection-specific homeostatic plasticity. These results indicate a potential therapeutic path of promoting natural resilience for depression treatment.

  18. Reliable neuronal systems: the importance of heterogeneity.

    Directory of Open Access Journals (Sweden)

    Johannes Lengler

    Full Text Available For every engineer it goes without saying: in order to build a reliable system we need components that consistently behave precisely as they should. It is also well known that neurons, the building blocks of brains, do not satisfy this constraint. Even neurons of the same type come with huge variances in their properties and these properties also vary over time. Synapses, the connections between neurons, are highly unreliable in forwarding signals. In this paper we argue that both these fact add variance to neuronal processes, and that this variance is not a handicap of neural systems, but that instead predictable and reliable functional behavior of neural systems depends crucially on this variability. In particular, we show that higher variance allows a recurrently connected neural population to react more sensitively to incoming signals, and processes them faster and more energy efficient. This, for example, challenges the general assumption that the intrinsic variability of neurons in the brain is a defect that has to be overcome by synaptic plasticity in the process of learning.

  19. Signals and Circuits in the Purkinje Neuron

    Directory of Open Access Journals (Sweden)

    Ze'ev R Abrams

    2011-09-01

    Purkinje neurons in the cerebellum have over 100,000 inputs organized in an orthogonal geometry, and a single output channel. As the sole output of the cerebellar cortex layer, their complex firing pattern has been associated with motor control and learning. As such, they have been extensively modeled and measured using tools ranging from electrophysiology and neuroanatomy to dynamic systems and artificial intelligence methods. However, there is an alternative approach to analyzing and describing the neuronal output of these cells using concepts from electrical engineering, particularly signal processing and digital/analog circuits. By viewing the Purkinje neuron as an unknown circuit to be reverse-engineered, we can use the tools that provide the foundations of today's integrated circuits and communication systems to analyze the Purkinje system at the circuit level. We use Fourier transforms to analyze and isolate the inherent frequency modes in the Purkinje neuron and define three unique frequency ranges associated with the cells' output. Comparing the Purkinje neuron to a signal generator that can be externally modulated adds an entire level of complexity to the functional role of these neurons, both in terms of data analysis and information processing, relying on Fourier analysis methods in place of statistical ones. We also re-describe some of the recent literature in the field using the nomenclature of signal processing. Furthermore, by comparing the experimental data of the past decade with basic electronic circuitry, we can resolve an outstanding controversy in the field by recognizing that the Purkinje neuron can act as a multivibrator circuit.
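
    Isolating frequency modes in a firing-rate signal, as the Fourier-based analysis above does, can be sketched with a direct discrete Fourier transform. This is a generic illustration, not the authors' analysis pipeline; the sampling rate and test signal are invented for the example:

```python
import cmath
import math


def dominant_frequency(signal, sample_rate_hz):
    """Locate the strongest non-DC frequency mode of a real signal via a
    direct discrete Fourier transform. Adequate for short records; an FFT
    would be used for real data."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * sample_rate_hz / n  # bin index -> frequency in Hz


# Example: a 5 Hz modulation sampled at 100 Hz for 2 seconds.
samples = [math.sin(2 * math.pi * 5 * t / 100.0) for t in range(200)]
```

    Repeating this over band-limited portions of the spectrum is one way to separate the distinct frequency ranges the analysis identifies.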

  20. RNA-seq analysis of Drosophila clock and non-clock neurons reveals neuron-specific cycling and novel candidate neuropeptides.

    Science.gov (United States)

    Abruzzi, Katharine C; Zadina, Abigail; Luo, Weifei; Wiyanto, Evelyn; Rahman, Reazur; Guo, Fang; Shafer, Orie; Rosbash, Michael

    2017-02-01

    Locomotor activity rhythms are controlled by a network of ~150 circadian neurons within the adult Drosophila brain. They are subdivided based on their anatomical locations and properties. We profiled transcripts "around the clock" from three key groups of circadian neurons with different functions. We also profiled a non-circadian outgroup, dopaminergic (TH) neurons. TH neurons have cycling transcripts, though fewer than clock neurons, as well as low expression and poor cycling of clock gene transcripts. This suggests that TH neurons do not have a canonical circadian clock and that their gene expression cycling is driven by brain systemic cues. The three circadian groups are surprisingly diverse in their cycling transcripts and overall gene expression patterns, which include known and putative novel neuropeptides. Even the overall phase distributions of cycling transcripts are distinct, indicating that different regulatory principles govern transcript oscillations. This surprising cell-type diversity parallels the functional heterogeneity of the different neurons.