WorldWideScience

Sample records for neurons parallels learning

  1. Neuronal representations of stimulus associations develop in the temporal lobe during learning.

    Science.gov (United States)

    Messinger, A; Squire, L R; Zola, S M; Albright, T D

    2001-10-09

    Visual stimuli that are frequently seen together become associated in long-term memory, such that the sight of one stimulus readily brings to mind the thought or image of the other. It has been hypothesized that acquisition of such long-term associative memories proceeds via the strengthening of connections between neurons representing the associated stimuli, such that a neuron initially responding only to one stimulus of an associated pair eventually comes to respond to both. Consistent with this hypothesis, studies have demonstrated that individual neurons in the primate inferior temporal cortex tend to exhibit similar responses to pairs of visual stimuli that have become behaviorally associated. In the present study, we investigated the role of these areas in the formation of conditional visual associations by monitoring the responses of individual neurons during the learning of new stimulus pairs. We found that many neurons in both area TE and perirhinal cortex came to elicit more similar neuronal responses to paired stimuli as learning proceeded. Moreover, these neuronal response changes were learning-dependent and proceeded with an average time course that paralleled learning. This experience-dependent plasticity of sensory representations in the cerebral cortex may underlie the learning of associations between objects.

  2. Neuronal avalanches and learning

    Energy Technology Data Exchange (ETDEWEB)

    Arcangelis, Lucilla de, E-mail: dearcangelis@na.infn.it [Department of Information Engineering and CNISM, Second University of Naples, 81031 Aversa (Italy)

    2011-05-01

    Networks of living neurons are among the most fascinating systems in biology. Although the physical and chemical mechanisms underlying the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons remains an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity of the network, namely its capacity to adapt and evolve depending on the level of activity. This plastic ability is now believed to underlie learning and memory in real brains. Spontaneous neuronal activity has recently been shown to share features with other complex systems. Experimental data have shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by power-law distributions of size and duration, features found in other problems in the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model of the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths through a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as a function of the strength of plastic adaptation: any rule can be learned provided that the plastic adaptation is sufficiently slow.
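
    The avalanche statistics described in this record can be illustrated with a minimal sandpile-style toy model (this is a sketch, not the paper's specific neuronal model; the 1D geometry, threshold, and drive scheme are illustrative assumptions). Each drive adds one unit of "charge" to a random site; any site reaching threshold fires, passing one unit to each neighbour, and the avalanche size is the number of firings triggered by one drive:

```python
import random

def avalanche_sizes(n=30, n_drives=1000, threshold=2, seed=0):
    """Drive a 1D chain of toy 'neurons'; units falling off the open
    boundaries provide the dissipation that keeps avalanches finite."""
    rng = random.Random(seed)
    load = [0] * n
    sizes = []
    for _ in range(n_drives):
        load[rng.randrange(n)] += 1          # external drive
        size = 0
        unstable = [i for i in range(n) if load[i] >= threshold]
        while unstable:
            i = unstable.pop()
            if load[i] < threshold:
                continue
            load[i] -= 2                     # site fires
            size += 1
            for j in (i - 1, i + 1):         # pass charge to neighbours
                if 0 <= j < n:
                    load[j] += 1
                    if load[j] >= threshold:
                        unstable.append(j)
        sizes.append(size)
    return sizes
```

    Collecting `sizes` over many drives gives the broad, heavy-tailed distribution that, in the full model, follows a power law.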

  3. Neuronal avalanches and learning

    International Nuclear Information System (INIS)

    Arcangelis, Lucilla de

    2011-01-01

    Networks of living neurons are among the most fascinating systems in biology. Although the physical and chemical mechanisms underlying the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons remains an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity of the network, namely its capacity to adapt and evolve depending on the level of activity. This plastic ability is now believed to underlie learning and memory in real brains. Spontaneous neuronal activity has recently been shown to share features with other complex systems. Experimental data have shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by power-law distributions of size and duration, features found in other problems in the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model of the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths through a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as a function of the strength of plastic adaptation: any rule can be learned provided that the plastic adaptation is sufficiently slow.

  4. Parallel Stochastic discrete event simulation of calcium dynamics in neuron.

    Science.gov (United States)

    Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W

    2017-09-26

    The intracellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g., spines) are so small, and calcium concentrations so low, that a single extra molecule diffusing in by chance can make a nontrivial percentage difference in concentration. These rare events can affect the dynamics in discrete ways that cannot be captured by a deterministic simulation. Stochastic models of such a system therefore provide a more detailed understanding than existing deterministic models because they capture behavior at the molecular level. Our research focuses on the development of a high-performance parallel discrete event simulation environment, Neuron Time Warp (NTW), intended for the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer and a calcium wave model. The calcium buffer model is employed to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.
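
    The molecule-level stochastic treatment of such reactions can be sketched with an exact Gillespie-style simulation of a single calcium-buffer binding reaction Ca + B <-> CaB (a minimal sketch, not NTW's actual engine; the rate constants and molecule counts are illustrative assumptions):

```python
import random

def gillespie_buffer(n_ca, n_b, n_cab, k_on, k_off, t_end, seed=0):
    """Exact stochastic simulation of Ca + B <-> CaB: draw an
    exponential waiting time from the total propensity, then pick
    which reaction fired in proportion to its propensity."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        a_bind = k_on * n_ca * n_b        # propensity of Ca + B -> CaB
        a_unbind = k_off * n_cab          # propensity of CaB -> Ca + B
        a_total = a_bind + a_unbind
        if a_total == 0.0:
            break
        t += rng.expovariate(a_total)     # time to the next event
        if rng.random() * a_total < a_bind:
            n_ca -= 1; n_b -= 1; n_cab += 1
        else:
            n_ca += 1; n_b += 1; n_cab -= 1
    return n_ca, n_b, n_cab
```

    With low copy numbers, repeated runs of such a simulation fluctuate visibly around the deterministic mass-action trajectory, which is exactly the regime the abstract describes.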

  5. Parallel, but Dissociable, Processing in Discrete Corticostriatal Inputs Encodes Skill Learning.

    Science.gov (United States)

    Kupferschmidt, David A; Juczewski, Konrad; Cui, Guohong; Johnson, Kari A; Lovinger, David M

    2017-10-11

    Changes in cortical and striatal function underlie the transition from novel actions to refined motor skills. How discrete, anatomically defined corticostriatal projections function in vivo to encode skill learning remains unclear. Using novel fiber photometry approaches to assess real-time activity of associative inputs from medial prefrontal cortex to dorsomedial striatum and sensorimotor inputs from motor cortex to dorsolateral striatum, we show that associative and sensorimotor inputs co-engage early in action learning and disengage in a dissociable manner as actions are refined. Disengagement of associative, but not sensorimotor, inputs predicts individual differences in subsequent skill learning. Divergent somatic and presynaptic engagement in both projections during early action learning suggests potential learning-related in vivo modulation of presynaptic corticostriatal function. These findings reveal parallel processing within associative and sensorimotor circuits that challenges and refines existing views of corticostriatal function and expose neuronal projection- and compartment-specific activity dynamics that encode and predict action learning. Published by Elsevier Inc.

  6. A new supervised learning algorithm for spiking neurons.

    Science.gov (United States)

    Xu, Yan; Zeng, Xiaoqin; Zhong, Shuiming

    2013-06-01

    The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only running time is considered, supervised learning for a spiking neuron amounts to distinguishing, by adjusting synaptic weights, the times of desired output spikes from all other times during the neuron's run, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding: it first transforms the supervised learning task into a classification problem and then solves that problem using the perceptron learning rule. The experimental results show that the proposed method achieves higher learning accuracy and efficiency than existing learning methods, making it better suited to complex and real-time problems.
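
    The classification view can be sketched as follows: every time step is an example, steps in the desired spike train are the positive class, all other steps the negative class, and the perceptron rule adjusts the synaptic weights (a toy discrete-time sketch with an illustrative boxcar kernel, not the letter's exact spike-response model):

```python
def psp(dt):
    """Toy boxcar postsynaptic-potential kernel (an illustrative
    assumption; a real model would use a continuous kernel)."""
    return 1.0 if 0 < dt <= 2 else 0.0

def potential(w, inputs, t):
    """Membrane potential: weighted sum of PSPs from all input spikes."""
    return sum(wi * sum(psp(t - s) for s in spikes)
               for wi, spikes in zip(w, inputs))

def train(inputs, desired, n_steps, epochs=50, lr=0.1, theta=1.0):
    """Perceptron rule: on a missed desired spike, strengthen active
    synapses; on a spurious spike, weaken them."""
    w = [0.5] * len(inputs)
    for _ in range(epochs):
        for t in range(n_steps):
            fired = potential(w, inputs, t) >= theta
            should = t in desired
            if fired != should:
                sign = 1.0 if should else -1.0
                for i, spikes in enumerate(inputs):
                    w[i] += sign * lr * sum(psp(t - s) for s in spikes)
    return w
```

    For example, with three synapses spiking at times 0, 4, and 8 and a desired output at steps 5 and 6, training strengthens only the second synapse until the neuron fires exactly at the desired steps.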

  7. Parallel expression of synaptophysin and evoked neurotransmitter release during development of cultured neurons

    DEFF Research Database (Denmark)

    Ehrhart-Bornstein, M; Treiman, M; Hansen, Gert Helge

    1991-01-01

    Primary cultures of GABAergic cerebral cortex neurons and glutamatergic cerebellar granule cells were used to study the expression of synaptophysin, a synaptic vesicle marker protein, along with the ability of each cell type to release neurotransmitter upon stimulation. The synaptophysin expression...... by quantitative immunoblotting and light microscope immunocytochemistry, respectively. In both cell types, a close parallelism was found between the temporal pattern of development in synaptophysin expression and neurotransmitter release. This temporal pattern differed between the two types of neurons....... The cerebral cortex neurons showed a biphasic time course of increase in synaptophysin content, paralleled by a biphasic pattern of development in their ability to release [3H]GABA in response to depolarization by glutamate or elevated K+ concentrations. In contrast, a monophasic, approximately linear increase...

  8. Parallel optical control of spatiotemporal neuronal spike activity using high-frequency digital light processing technology

    Directory of Open Access Journals (Sweden)

    Jason Jerome

    2011-08-01

    Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited by the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses Digital Light Processing (DLP) technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 µm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber, providing access to a large area (2.76 × 2.07 mm²) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.

  9. Neuronal Rac1 Is Required for Learning-Evoked Neurogenesis

    Science.gov (United States)

    Anderson, Matthew P.; Freewoman, Julia; Cord, Branden; Babu, Harish; Brakebusch, Cord

    2013-01-01

    Hippocampus-dependent learning and memory relies on synaptic plasticity as well as network adaptations provided by the addition of adult-born neurons. We have previously shown that activity-induced intracellular signaling through the Rho family small GTPase Rac1 is necessary in forebrain projection neurons for normal synaptic plasticity in vivo, and here we show that selective loss of neuronal Rac1 also impairs the learning-evoked increase in neurogenesis in the adult mouse hippocampus. Earlier work has indicated that experience elevates the abundance of adult-born neurons in the hippocampus primarily by enhancing the survival of neurons produced just before the learning event. Loss of Rac1 in mature projection neurons did reduce learning-evoked neurogenesis but, contrary to our expectations, these effects were not mediated by altering the survival of young neurons in the hippocampus. Instead, loss of neuronal Rac1 activation selectively impaired a learning-evoked increase in the proliferation and accumulation of neural precursors generated during the learning event itself. This indicates that experience-induced alterations in neurogenesis can be mechanistically resolved into two effects: (1) the well documented but Rac1-independent signaling cascade that enhances the survival of young postmitotic neurons; and (2) a previously unrecognized Rac1-dependent signaling cascade that stimulates the proliferative production and retention of new neurons generated during learning itself. PMID:23884931

  10. Learning of time series through neuron-to-neuron instruction

    Energy Technology Data Exchange (ETDEWEB)

    Miyazaki, Y [Department of Physics, Kyoto University, Kyoto 606-8502, (Japan); Kinzel, W [Institut fuer Theoretische Physik, Universitaet Wurzburg, 97074 Wurzburg (Germany); Shinomoto, S [Department of Physics, Kyoto University, Kyoto (Japan)

    2003-02-07

    A model neuron with delay-line feedback connections can learn a time series generated by another model neuron. It has been known that some student neurons that have completed such learning under the instruction of a teacher's quasi-periodic sequence mimic the teacher's time series over a long interval, even after instruction has ceased. We found that in addition to such faithful students, there are unfaithful students whose time series eventually diverge exponentially from that of the teacher. In order to understand the circumstances that allow for such a variety of students, the orbit dimension was estimated numerically. The quasi-periodic orbits in question were found to be confined in spaces with dimensions significantly smaller than that of the full phase space.
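
    The teacher-student setup can be sketched with a scalar delay-line neuron trained by gradient descent on one-step prediction error (a minimal sketch; the teacher weights, tanh nonlinearity, and learning rate below are illustrative assumptions, not the paper's exact model):

```python
import math

def generate(w_teacher, history, n):
    """Teacher neuron: the next value is tanh of a weighted sum of the
    last len(w_teacher) outputs (delay-line feedback)."""
    seq = list(history)
    for _ in range(n):
        x = seq[-len(w_teacher):][::-1]       # most recent value first
        seq.append(math.tanh(sum(wi * xi for wi, xi in zip(w_teacher, x))))
    return seq

def mse(w, seq, d):
    """Mean squared one-step prediction error of weights w on seq."""
    err = 0.0
    for t in range(d, len(seq) - 1):
        x = seq[t - d + 1:t + 1][::-1]
        y = math.tanh(sum(wi * xi for wi, xi in zip(w, x)))
        err += (y - seq[t + 1]) ** 2
    return err / (len(seq) - 1 - d)

def train_student(seq, d, epochs=500, lr=0.1):
    """Student neuron learns by gradient descent on the squared
    one-step error along the teacher's recorded time series."""
    w = [0.0] * d
    for _ in range(epochs):
        for t in range(d, len(seq) - 1):
            x = seq[t - d + 1:t + 1][::-1]
            y = math.tanh(sum(wi * xi for wi, xi in zip(w, x)))
            g = (y - seq[t + 1]) * (1 - y * y)   # chain rule through tanh
            for i in range(d):
                w[i] -= lr * g * x[i]
    return w
```

    A "faithful" student is one whose trained weights keep its free-running orbit close to the teacher's; the faithful/unfaithful distinction in the abstract concerns what happens when the trained student then runs on its own output instead of the teacher's.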

  11. Learning of time series through neuron-to-neuron instruction

    International Nuclear Information System (INIS)

    Miyazaki, Y; Kinzel, W; Shinomoto, S

    2003-01-01

    A model neuron with delay-line feedback connections can learn a time series generated by another model neuron. It has been known that some student neurons that have completed such learning under the instruction of a teacher's quasi-periodic sequence mimic the teacher's time series over a long interval, even after instruction has ceased. We found that in addition to such faithful students, there are unfaithful students whose time series eventually diverge exponentially from that of the teacher. In order to understand the circumstances that allow for such a variety of students, the orbit dimension was estimated numerically. The quasi-periodic orbits in question were found to be confined in spaces with dimensions significantly smaller than that of the full phase space.

  12. Learning Recruits Neurons Representing Previously Established Associations in the Corvid Endbrain.

    Science.gov (United States)

    Veit, Lena; Pidpruzhnykova, Galyna; Nieder, Andreas

    2017-10-01

    Crows quickly learn arbitrary associations. As a neuronal correlate of this behavior, single neurons in the corvid endbrain area nidopallium caudolaterale (NCL) change their response properties during association learning. In crows performing a delayed association task that required them to map both familiar and novel sample pictures to the same two choice pictures, NCL neurons established a common, prospective code for associations. Here, we report that neuronal tuning changes during learning were not distributed equally in the recorded population of NCL neurons. Instead, such learning-related changes relied almost exclusively on neurons which were already encoding familiar associations. Only in such neurons did behavioral improvements during learning of novel associations coincide with increasing selectivity over the learning process. The size and direction of selectivity for familiar and newly learned associations were highly correlated. These increases in selectivity for novel associations occurred only late in the delay period. Moreover, NCL neurons discriminated correct from erroneous trial outcome based on feedback signals at the end of the trial, particularly in newly learned associations. Our results indicate that task-relevant changes during association learning are not distributed within the population of corvid NCL neurons but rather are restricted to a specific group of association-selective neurons. Such association neurons in the multimodal cognitive integration area NCL likely play an important role during highly flexible behavior in corvids.

  13. Neuronal integration of dynamic sources: Bayesian learning and Bayesian inference.

    Science.gov (United States)

    Siegelmann, Hava T; Holzman, Lars E

    2010-09-01

    One of the brain's most basic functions is integrating sensory data from diverse sources. This ability causes us to question whether the neural system is computationally capable of intelligently integrating data, not only when sources have known, fixed relative dependencies but also when it must determine such relative weightings based on dynamic conditions, and then use these learned weightings to accurately infer information about the world. We suggest that the brain is, in fact, fully capable of computing this parallel task in a single network and describe a neurally inspired circuit with this property. Our implementation suggests that evidence learning may require a more complex organization of the network than was previously assumed, in which neurons have different specialties whose emergence brings the desired adaptivity seen in human online inference.

  14. A causal link between prediction errors, dopamine neurons and learning.

    Science.gov (United States)

    Steinberg, Elizabeth E; Keiflin, Ronald; Boivin, Josiah R; Witten, Ilana B; Deisseroth, Karl; Janak, Patricia H

    2013-07-01

    Situations in which rewards are unexpectedly obtained or withheld represent opportunities for new learning. Often, this learning includes identifying cues that predict reward availability. Unexpected rewards strongly activate midbrain dopamine neurons. This phasic signal is proposed to support learning about antecedent cues by signaling discrepancies between actual and expected outcomes, termed a reward prediction error. However, it is unknown whether dopamine neuron prediction error signaling and cue-reward learning are causally linked. To test this hypothesis, we manipulated dopamine neuron activity in rats in two behavioral procedures, associative blocking and extinction, that illustrate the essential function of prediction errors in learning. We observed that optogenetic activation of dopamine neurons concurrent with reward delivery, mimicking a prediction error, was sufficient to cause long-lasting increases in cue-elicited reward-seeking behavior. Our findings establish a causal role for temporally precise dopamine neuron signaling in cue-reward learning, bridging a critical gap between experimental evidence and influential theoretical frameworks.
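
    The logic of the blocking experiment and the optogenetic "unblocking" result can be sketched with a Rescorla-Wagner learner, where an added `boost` term stands in for dopamine stimulation at reward delivery (a sketch of the standard prediction-error model, not the paper's analysis; all numbers are illustrative):

```python
def rescorla_wagner(trials, w, lr=0.2, boost=0.0):
    """Update cue weights with the reward prediction error
    delta = r - V(present cues); `boost` mimics optogenetic dopamine
    activation added to the error signal at reward delivery."""
    w = list(w)
    for cues, r in trials:
        v = sum(w[c] for c in cues)   # summed prediction of present cues
        delta = (r - v) + boost       # prediction error (+ stimulation)
        for c in cues:
            w[c] += lr * delta
    return w

# Stage 1: cue A (index 0) alone predicts reward.
stage1 = [([0], 1.0)] * 50
# Stage 2: compound A+B; A already predicts the reward, so B is blocked.
stage2 = [([0, 1], 1.0)] * 50
```

    Because cue A already predicts the reward in stage 2, the prediction error is near zero and cue B learns almost nothing (blocking); adding a positive `boost` during stage 2 restores a nonzero error signal and lets B acquire value, mirroring the unblocking effect of dopamine neuron activation.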

  15. The specificity of learned parallelism in dual-memory retrieval.

    Science.gov (United States)

    Strobach, Tilo; Schubert, Torsten; Pashler, Harold; Rickard, Timothy

    2014-05-01

    Retrieval of two responses from one visually presented cue occurs sequentially at the outset of dual-retrieval practice. Exclusively for subjects who adopt a mode of grouping (i.e., synchronizing) their response execution, however, reaction times after dual-retrieval practice indicate a shift to learned retrieval parallelism (e.g., Nino & Rickard, in Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 373-388, 2003). In the present study, we investigated how this learned parallelism is achieved and why it appears to occur only for subjects who group their responses. Two main accounts were considered: a task-level versus a cue-level account. The task-level account assumes that learned retrieval parallelism occurs at the level of the task as a whole and is not limited to practiced cues. Grouping response execution may thus promote a general shift to parallel retrieval following practice. The cue-level account states that learned retrieval parallelism is specific to practiced cues. This type of parallelism may result from cue-specific response chunking that occurs uniquely as a consequence of grouped response execution. The results of two experiments favored the second account and were best interpreted in terms of a structural bottleneck model.

  16. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    Science.gov (United States)

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
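
    The sampling interpretation, in which each spike represents one sample of a hidden state, parallels a bootstrap particle filter. A minimal sketch for a discrete hidden Markov model follows (the two-state model in the test is an illustrative assumption; the paper's network implements this with spiking Poisson neurons rather than explicit particles):

```python
import random

def particle_filter(obs, trans, emit, n_states, n_particles=500, seed=1):
    """Bootstrap particle filter: each particle is one sampled hidden
    state, analogous to one spike representing a posterior sample."""
    rng = random.Random(seed)
    particles = [rng.randrange(n_states) for _ in range(n_particles)]
    posteriors = []
    for y in obs:
        # propagate each particle through the transition model
        particles = [rng.choices(range(n_states), weights=trans[s])[0]
                     for s in particles]
        # weight by the observation likelihood, then resample
        weights = [emit[s][y] for s in particles]
        particles = rng.choices(particles, weights=weights, k=n_particles)
        # the empirical particle distribution approximates the posterior
        posteriors.append([particles.count(s) / n_particles
                           for s in range(n_states)])
    return posteriors
```

    As in the network model, variability across particles is not a nuisance: it is exactly what represents the posterior distribution over hidden states.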

  17. Postnatal Gene Therapy Improves Spatial Learning Despite the Presence of Neuronal Ectopia in a Model of Neuronal Migration Disorder

    Directory of Open Access Journals (Sweden)

    Huaiyu Hu

    2016-11-01

    Patients with type II lissencephaly, a neuronal migration disorder with ectopic neurons, suffer from severe mental retardation, including learning deficits. There is no effective therapy to prevent or correct the formation of neuronal ectopia, which is presumed to cause the cognitive deficits. We hypothesized that learning deficits are not solely caused by neuronal ectopia and that postnatal gene therapy could improve learning without correcting the neuronal ectopia formed during fetal development. To test this hypothesis, we evaluated spatial learning with the Barnes maze in cerebral cortex-specific knockout mice for protein O-mannosyltransferase 2 (POMT2, an enzyme required for O-mannosyl glycosylation) and compared them to knockout mice whose postnatal brains were injected with an adeno-associated viral vector (AAV) encoding POMT2. The data showed that the knockout mice exhibited reduced glycosylation in the cerebral cortex, reduced dendritic spine density on CA1 neurons, and increased latency to the target hole in the Barnes maze, indicating learning deficits. Postnatal gene therapy restored functional glycosylation, rescued dendritic spine defects, and improved performance on the Barnes maze by the knockout mice even though neuronal ectopia was not corrected. These results indicate that postnatal gene therapy improves spatial learning despite the presence of neuronal ectopia.

  18. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

    This report reflects my work on the parallelization of TMVA machine learning algorithms integrated into the ROOT Data Analysis Framework, carried out during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the resulting changes in execution time as a function of the number of workers.

  19. The chronotron: a neuron that learns to fire temporally precise spike patterns.

    Directory of Open Access Journals (Sweden)

    Răzvan V Florian

    In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons): one that provides high memory capacity (E-learning) and one that has higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. Chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli, or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.

  20. The function of mirror neurons in the learning process

    Directory of Open Access Journals (Sweden)

    Mara Daniel

    2017-01-01

    In recent years the neurosciences have developed considerably, and many important theories have been elaborated through scientific research in the field. The main goal of neuroscience is to understand how groups of neurons interact to create behavior. Neuroscientists study the action of molecules, genes and cells, and explore the complex interactions involved in perception, movement, thought, emotion and learning. The fundamental building block of the nervous system is the nerve cell, the neuron. Neurons exchange information by sending electrical and chemical signals through connections called synapses. Discovered by a group of Italian researchers at the University of Parma, mirror neurons are a special class of nerve cells that play an important role in the direct, automatic and unconscious knowledge of the environment. These cortical neurons are activated not only when an action is performed, but also when we see the same action performed by someone else; they represent the neural mechanism by which the actions, intentions and emotions of others can be understood automatically. In childhood, mirror neurons are extremely important. Thanks to them we learn a great deal in the early years: to smile, to ask for help and, in fact, all the behaviors and norms of the family and the group. People learn from what they see and sense in others. Mirror neurons are important for understanding the actions and intentions of other people and for learning new skills through mirroring. They are involved in planning and controlling actions, abstract thinking and memory. If a child observes an action, mirror neurons are activated and form new neural pathways as if the child were performing that action himself. Efficient activity of mirror neurons leads to good development in all areas, to higher emotional intelligence and to the ability to empathize with others.

  1. An online supervised learning method based on gradient descent for spiking neurons.

    Science.gov (United States)

    Xu, Yan; Yang, Jing; Zhong, Shuiming

    2017-09-01

    The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by precise firing times of spikes. Gradient-descent-based (GDB) learning methods are widely used and verified in current research. Although the existing GDB multi-spike learning (or spike sequence learning) methods perform well, they work in an offline manner and still have some limitations. This paper proposes an online GDB spike sequence learning method for spiking neurons that is based on the online adjustment mechanism of real biological neuron synapses. The method constructs an error function and calculates the adjustment of synaptic weights as soon as the neuron emits a spike during its running process. We analyze and synthesize desired and actual output spikes to select appropriate input spikes for the calculation of the weight adjustment. The experimental results show that our method clearly improves learning performance compared with the offline manner and has an advantage in learning accuracy over other learning methods. This stronger learning ability gives the method a larger pattern storage capacity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Mirror Neurons, Embodied Cognitive Agents and Imitation Learning

    OpenAIRE

    Wiedermann, Jiří

    2003-01-01

    Mirror neurons are a relatively recent discovery; it has been conjectured that these neurons play an important role in imitation learning and other cognitive phenomena. We will study a possible place and role of mirror neurons in the neural architecture of embodied cognitive agents. We will formulate and investigate the hypothesis that mirror neurons serve as a mechanism which coordinates the multimodal (i.e., motor, perceptional and proprioceptive) information and completes it so that the ag...

  3. Maximization of learning speed in the motor cortex due to neuronal redundancy.

    Directory of Open Access Journals (Sweden)

    Ken Takiyama

    2012-01-01

    Many redundancies play functional roles in motor control and motor learning. For example, kinematic and muscle redundancies contribute to stabilizing posture and impedance control, respectively. Another redundancy is in the number of neurons themselves: there are overwhelmingly more neurons than muscles, and many combinations of neural activation can generate identical muscle activity. The functional roles of this neuronal redundancy remain unknown. Analysis of a redundant neural network model makes it possible to investigate these functional roles while varying the number of model neurons and holding the number of output units constant. Our analysis reveals that learning speed reaches its maximum value if and only if the model includes sufficient neuronal redundancy. This analytical result does not depend on whether the distribution of preferred directions is uniform or skewed bimodal, both of which have been reported in neurophysiological studies. Neuronal redundancy maximizes learning speed even if the neural network model includes recurrent connections, a nonlinear activation function, or nonlinear muscle units. Furthermore, our results do not rely on the shape of the generalization function. The results of this study suggest that one of the functional roles of neuronal redundancy is to maximize learning speed.
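
    The effect can be sketched with a fixed population of random tanh "neurons" feeding a single learned output: after the same training budget, a redundant population reaches lower error than a minimal one (a toy sketch, not the paper's analytical model; the population sizes, target function, and learning rate are illustrative assumptions):

```python
import math, random

def train_readout(n_neurons, epochs=200, lr=0.5, seed=0):
    """Each neuron has fixed random tuning; only the linear readout to
    one output unit is learned, so n_neurons > 1 output = redundancy.
    Returns the mean squared error after the fixed training budget."""
    rng = random.Random(seed)
    params = [(rng.uniform(-2, 2), rng.uniform(-1, 1))
              for _ in range(n_neurons)]
    xs = [i / 10.0 - 1.0 for i in range(21)]
    ys = [math.sin(math.pi * x) for x in xs]     # target motor command
    w = [0.0] * n_neurons
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = [math.tanh(a * x + b) for a, b in params]
            err = sum(wi * p for wi, p in zip(w, phi)) - y
            for i in range(n_neurons):           # gradient step on readout
                w[i] -= (lr / n_neurons) * err * phi[i]
    total = 0.0
    for x, y in zip(xs, ys):
        phi = [math.tanh(a * x + b) for a, b in params]
        total += (sum(wi * p for wi, p in zip(w, phi)) - y) ** 2
    return total / len(xs)
```

    Comparing `train_readout(40)` with `train_readout(2)` illustrates the qualitative point: with the same number of outputs and the same training budget, the redundant population learns the mapping faster.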

  4. Supervised learning with decision margins in pools of spiking neurons.

    Science.gov (United States)

    Le Mouel, Charlotte; Harris, Kenneth D; Yger, Pierre

    2014-10-01

    Learning to categorise sensory inputs by generalising from a few examples whose category is precisely known is a crucial step for the brain to produce appropriate behavioural responses. At the neuronal level, this may be performed by adaptation of synaptic weights under the influence of a training signal, in order to group spiking patterns impinging on the neuron. Here we describe a framework that allows spiking neurons to perform such "supervised learning", using principles similar to the Support Vector Machine, a well-established and robust classifier. Using a hinge-loss error function, we show that requesting a margin similar to that of the SVM improves performance on linearly non-separable problems. Moreover, we show that using pools of neurons to discriminate categories can also increase the performance by sharing the load among neurons.
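A minimal sketch of the idea (not the authors' spiking implementation; spike patterns are stood in for by mean-centred spike-count vectors, an assumption for illustration): a hinge loss applies the training signal only when the decision falls inside the requested margin, and a pool of such units shares the classification load.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in inputs: mean-centred presynaptic spike-count vectors (assumption).
counts = rng.poisson(3.0, size=(200, 10)).astype(float)
X = counts - counts.mean(axis=0)
w_true = rng.normal(size=10)
y = np.where(X @ w_true > 0, 1.0, -1.0)        # +/-1 category labels

def train_hinge(X, y, margin=1.0, lr=0.01, epochs=50, lam=1e-3):
    """Hinge-loss training: a weight update (the "training signal") is
    applied only when the decision falls inside the requested margin."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            if t * (w @ x) < margin:           # inside the margin
                w += lr * (t * x - lam * w)
            else:
                w -= lr * lam * w              # weight decay only
    return w

# A pool of units trained on bootstrap resamples shares the load.
pool = [train_hinge(X[idx], y[idx])
        for idx in (rng.integers(0, len(X), len(X)) for _ in range(5))]
accuracy = np.mean(np.sign(X @ np.mean(pool, axis=0)) == y)
print("pool training accuracy:", accuracy)
```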

  5. Programmed to Learn? The Ontogeny of Mirror Neurons

    Science.gov (United States)

    Del Giudice, Marco; Manera, Valeria; Keysers, Christian

    2009-01-01

    Mirror neurons are increasingly recognized as a crucial substrate for many developmental processes, including imitation and social learning. Although there has been considerable progress in describing their function and localization in the primate and adult human brain, we still know little about their ontogeny. The idea that mirror neurons result…

  6. Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights.

    Science.gov (United States)

    Samadi, Arash; Lillicrap, Timothy P; Tweed, Douglas B

    2017-03-01

    Recent work in computer science has shown the power of deep learning driven by the backpropagation algorithm in networks of artificial neurons. But real neurons in the brain are different from most of these artificial ones in at least three crucial ways: they emit spikes rather than graded outputs, their inputs and outputs are related dynamically rather than by piecewise-smooth functions, and they have no known way to coordinate arrays of synapses in separate forward and feedback pathways so that they change simultaneously and identically, as they do in backpropagation. Given these differences, it is unlikely that current deep learning algorithms can operate in the brain, but we show that these problems can be solved by two simple devices: learning rules can approximate dynamic input-output relations with piecewise-smooth functions, and a variation on the feedback alignment algorithm can train deep networks without having to coordinate forward and feedback synapses. Our results also show that deep spiking networks learn much better if each neuron computes an intracellular teaching signal that reflects that cell's nonlinearity. With this mechanism, networks of spiking neurons show useful learning in synapses at least nine layers upstream from the output cells and perform well compared to other spiking networks in the literature on the MNIST digit recognition task.
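The feedback-alignment variant mentioned here can be sketched in a few lines (a toy rate-based regression network, not the authors' spiking model; sizes and learning rate are illustrative): the hidden-layer error is routed through a fixed random matrix `B` instead of the transpose of the forward weights, so forward and feedback synapses never need to be coordinated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression task for a two-layer network (illustrative data).
X = rng.normal(size=(100, 5))
Y = np.tanh(X @ rng.normal(size=(5, 2)))

W1 = 0.1 * rng.normal(size=(5, 20))
W2 = 0.1 * rng.normal(size=(20, 2))
B = 0.1 * rng.normal(size=(2, 20))     # fixed random feedback weights

def loss():
    return np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)

before = loss()
lr = 0.05
for _ in range(500):
    H = np.tanh(X @ W1)                # forward pass
    E = H @ W2 - Y                     # output error
    W2 -= lr * H.T @ E / len(X)
    # Feedback alignment: propagate the error through fixed B, not W2.T.
    W1 -= lr * X.T @ ((E @ B) * (1.0 - H ** 2)) / len(X)

after = loss()
print(f"loss: {before:.3f} -> {after:.3f}")
```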

  7. Spatial learning depends on both the addition and removal of new hippocampal neurons.

    Directory of Open Access Journals (Sweden)

    David Dupret

    2007-08-01

    Full Text Available The role of adult hippocampal neurogenesis in spatial learning remains a matter of debate. Here, we show that spatial learning modifies neurogenesis by inducing a cascade of events that resembles the selective stabilization process characterizing development. Learning promotes survival of relatively mature neurons, apoptosis of more immature cells, and finally, proliferation of neural precursors. These are three interrelated events mediating learning. Thus, blocking apoptosis impairs memory and inhibits learning-induced cell survival and cell proliferation. In conclusion, during learning, similar to the selective stabilization process, neuronal networks are sculpted by a tightly regulated selection and suppression of different populations of newly born neurons.

  8. Symbol manipulation and rule learning in spiking neuronal networks.

    Science.gov (United States)

    Fernando, Chrisantha

    2011-04-21

    It has been claimed that the productivity, systematicity and compositionality of human language and thought necessitate the existence of a physical symbol system (PSS) in the brain. Recent discoveries about temporal coding suggest a novel type of neuronal implementation of a physical symbol system. Furthermore, learning classifier systems provide a plausible algorithmic basis by which symbol re-write rules could be trained to undertake behaviors exhibiting systematicity and compositionality, using a kind of natural selection of re-write rules in the brain. We show how the core operation of a learning classifier system, namely, the replication with variation of symbol re-write rules, can be implemented using spike-time dependent plasticity based supervised learning. As a whole, the aim of this paper is to integrate an algorithmic and an implementation level description of a neuronal symbol system capable of sustaining systematic and compositional behaviors. Previously proposed neuronal implementations of symbolic representations are compared with this new proposal. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Immature doublecortin-positive hippocampal neurons are important for learning but not for remembering.

    Science.gov (United States)

    Vukovic, Jana; Borlikova, Gilyana G; Ruitenberg, Marc J; Robinson, Gregory J; Sullivan, Robert K P; Walker, Tara L; Bartlett, Perry F

    2013-04-10

    It is now widely accepted that hippocampal neurogenesis underpins critical cognitive functions, such as learning and memory. To assess the behavioral importance of adult-born neurons, we developed a novel knock-in mouse model that allowed us to specifically and reversibly ablate hippocampal neurons at an immature stage. In these mice, the diphtheria toxin receptor (DTR) is expressed under control of the doublecortin (DCX) promoter, which allows for specific ablation of immature DCX-expressing neurons after administration of diphtheria toxin while leaving the neural precursor pool intact. Using a spatially challenging behavioral test (a modified version of the active place avoidance test), we present direct evidence that immature DCX-expressing neurons are required for successful acquisition of spatial learning, as well as reversal learning, but are not necessary for the retrieval of stored long-term memories. Importantly, the observed learning deficits were rescued as newly generated immature neurons repopulated the granule cell layer upon termination of the toxin treatment. Repeat (or cyclic) depletion of immature neurons reinstated behavioral deficits if the mice were challenged with a novel task. Together, these findings highlight the potential of stimulating neurogenesis as a means to enhance learning.

  10. The function of mirror neurons in the learning process

    OpenAIRE

    Mara Daniel

    2017-01-01

    In recent years the neurosciences have developed considerably, and many important theories have emerged from scientific research in the field. The main goal of neuroscience is to understand how groups of neurons interact to create behaviour. Neuroscientists study the action of molecules, genes and cells, and explore the complex interactions involved in movement, perception, thought, emotion and learning. The fundamental building block of the nervous system is the nerve cell, the neuron. Neurons exchange information...

  11. Hebbian Learning is about contingency, not contiguity, and explains the emergence of predictive mirror neurons.

    Science.gov (United States)

    Keysers, Christian; Perrett, David I; Gazzola, Valeria

    2014-04-01

    Hebbian Learning should not be reduced to contiguity, as it detects contingency and causality. Hebbian Learning accounts of mirror neurons make predictions that differ from associative learning: Through Hebbian Learning, mirror neurons become dynamic networks that calculate predictions and prediction errors and relate to ideomotor theories. The social force of imitation is important for mirror neuron emergence and suggests canalization.

  12. Precise auditory-vocal mirroring in neurons for learned vocal communication.

    Science.gov (United States)

    Prather, J F; Peters, S; Nowicki, S; Mooney, R

    2008-01-17

    Brain mechanisms for communication must establish a correspondence between sensory and motor codes used to represent the signal. One idea is that this correspondence is established at the level of single neurons that are active when the individual performs a particular gesture or observes a similar gesture performed by another individual. Although neurons that display a precise auditory-vocal correspondence could facilitate vocal communication, they have yet to be identified. Here we report that a certain class of neurons in the swamp sparrow forebrain displays a precise auditory-vocal correspondence. We show that these neurons respond in a temporally precise fashion to auditory presentation of certain note sequences in this songbird's repertoire and to similar note sequences in other birds' songs. These neurons display nearly identical patterns of activity when the bird sings the same sequence, and disrupting auditory feedback does not alter this singing-related activity, indicating it is motor in nature. Furthermore, these neurons innervate striatal structures important for song learning, raising the possibility that singing-related activity in these cells is compared to auditory feedback to guide vocal learning.

  13. Hebbian learning and predictive mirror neurons for actions, sensations and emotions.

    Science.gov (United States)

    Keysers, Christian; Gazzola, Valeria

    2014-01-01

    Spike-timing-dependent plasticity is considered the neurophysiological basis of Hebbian learning and has been shown to be sensitive to both contingency and contiguity between pre- and postsynaptic activity. Here, we will examine how applying this Hebbian learning rule to a system of interconnected neurons in the presence of direct or indirect re-afference (e.g. seeing/hearing one's own actions) predicts the emergence of mirror neurons with predictive properties. In this framework, we analyse how mirror neurons become a dynamic system that performs active inferences about the actions of others and allows joint actions despite sensorimotor delays. We explore how this system performs a projection of the self onto others, with egocentric biases to contribute to mind-reading. Finally, we argue that Hebbian learning predicts mirror-like neurons for sensations and emotions and review evidence for the presence of such vicarious activations outside the motor system.
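The spike-timing-dependent plasticity rule the authors build on has a standard textbook form, sketched below (parameter values are illustrative): a synapse is strengthened when the presynaptic spike precedes, and thus predicts, the postsynaptic one, and is weakened for the reverse order.

```python
import numpy as np

def stdp(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms):
    pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# A synapse whose presynaptic activity reliably *precedes* re-afference
# (e.g. seeing or hearing one's own action) is strengthened, while the
# reverse ordering is weakened -- the predictive, contingent pairing wins.
pre_leads = stdp(np.array([5.0, 10.0, 15.0]))
post_leads = stdp(np.array([-5.0, -10.0, -15.0]))
print("pre leads :", pre_leads)
print("post leads:", post_leads)
```

This asymmetry is what makes the resulting mirror neurons predictive rather than merely associative: the connections that anticipate the re-afferent signal are the ones that survive.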

  14. Scaling up machine learning: parallel and distributed approaches

    National Research Council Canada - National Science Library

    Bekkerman, Ron; Bilenko, Mikhail; Langford, John

    2012-01-01

    ... presented in the book cover a range of parallelization platforms from FPGAs and GPUs to multi-core systems and commodity clusters; concurrent programming frameworks that include CUDA, MPI, MapReduce, and DryadLINQ; and various learning settings: supervised, unsupervised, semi-supervised, and online learning. Extensive coverage of parallelizat...

  15. Associative and sensorimotor learning for parenting involves mirror neurons under the influence of oxytocin.

    Science.gov (United States)

    Ho, S Shaun; Macdonald, Adam; Swain, James E

    2014-04-01

    Mirror neuron-based associative learning may be understood according to associative learning theories, in addition to sensorimotor learning theories. This is important for a comprehensive understanding of the role of mirror neurons and related hormone modulators, such as oxytocin, in complex social interactions such as among parent-infant dyads and in examples of mirror neuron function that involve abnormal motor systems such as depression.

  16. Parallel strategy for optimal learning in perceptrons

    International Nuclear Information System (INIS)

    Neirotti, J P

    2010-01-01

    We developed a parallel strategy for optimally learning specific realizable rules by perceptrons in an online learning scenario. Our result is a generalization of the Caticha-Kinouchi (CK) algorithm developed for learning a perceptron with a synaptic vector drawn from a uniform distribution over the N-dimensional sphere, the so-called typical case. Our method outperforms the CK algorithm in almost all possible situations, failing only in a denumerable set of cases. The algorithm is optimal in the sense that it saturates Bayesian bounds when it succeeds.
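For context, the plain (non-optimal) online perceptron baseline that algorithms like CK improve upon can be sketched as follows; the overlap between the student vector and the teacher's realizable rule grows as examples are presented one at a time.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 100
teacher = rng.normal(size=N)
teacher /= np.linalg.norm(teacher)    # the realizable rule to be learned
w = rng.normal(size=N)                # student synaptic vector

# Baseline online perceptron rule: update only on classification mistakes.
overlaps = []
for _ in range(5000):
    x = rng.normal(size=N)            # one fresh example per step (online)
    label = np.sign(teacher @ x)
    if np.sign(w @ x) != label:
        w += label * x
    overlaps.append(teacher @ w / np.linalg.norm(w))

print(f"teacher-student overlap: {overlaps[0]:.2f} -> {overlaps[-1]:.2f}")
```

An optimal strategy in the abstract's sense replaces the all-or-nothing mistake update with a modulation function chosen to saturate the Bayesian bound on generalization.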

  17. Hebbian Learning is about contingency, not contiguity, and explains the emergence of predictive mirror neurons

    OpenAIRE

    Keysers, C.; Perrett, D.I.; Gazzola, V.

    2014-01-01

    Hebbian Learning should not be reduced to contiguity, as it detects contingency and causality. Hebbian Learning accounts of mirror neurons make predictions that differ from associative learning: Through Hebbian Learning, mirror neurons become dynamic networks that calculate predictions and prediction errors and relate to ideomotor theories. The social force of imitation is important for mirror neuron emergence and suggests canalization.

  18. Changes in prefrontal neuronal activity after learning to perform a spatial working memory task.

    Science.gov (United States)

    Qi, Xue-Lian; Meyer, Travis; Stanford, Terrence R; Constantinidis, Christos

    2011-12-01

    The prefrontal cortex is considered essential for learning to perform cognitive tasks, though little is known about how the representation of stimulus properties is altered by learning. To address this issue, we recorded neuronal activity in monkeys before and after training on a task that required visual working memory. After the subjects learned to perform the task, we observed activation of more prefrontal neurons and increased activity during working memory maintenance. The working memory-related increase in firing rate was due mostly to regular-spiking putative pyramidal neurons. Unexpectedly, the selectivity of neurons for stimulus properties and their ability to discriminate between stimuli decreased: information about stimulus properties was apparently present in neural firing prior to training, and neuronal selectivity degraded after training on the task. The effect was robust and could not be accounted for by differences in sampling sites, selection of neurons, level of performance, or merely the elapse of time. The results indicate that, in contrast to the effects of perceptual learning, mastery of a cognitive task degrades the apparent stimulus selectivity as neurons represent more abstract information related to the task. This effect is countered by the recruitment of more neurons after training.

  19. Hebbian Learning is about contingency, not contiguity, and explains the emergence of predictive mirror neurons

    NARCIS (Netherlands)

    Keysers, C.; Perrett, David I; Gazzola, Valeria

    Hebbian Learning should not be reduced to contiguity, as it detects contingency and causality. Hebbian Learning accounts of mirror neurons make predictions that differ from associative learning: Through Hebbian Learning, mirror neurons become dynamic networks that calculate predictions and prediction errors and relate to ideomotor theories. The social force of imitation is important for mirror neuron emergence and suggests canalization.

  20. Parallel Volunteer Learning during Youth Programs

    Science.gov (United States)

    Lesmeister, Marilyn K.; Green, Jeremy; Derby, Amy; Bothum, Candi

    2012-01-01

    Lack of time is a hindrance for volunteers to participate in educational opportunities, yet volunteer success in an organization is tied to the orientation and education they receive. Meeting diverse educational needs of volunteers can be a challenge for program managers. Scheduling a Volunteer Learning Track for chaperones that is parallel to a…

  1. Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.

    Science.gov (United States)

    Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting

    2018-02-12

    Recently released large-scale neuron morphological data has greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for the neuron morphological data, where the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with the hand-crafted features for more accurate representation. Because exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on the techniques of augmented reality (AR), which can help users explore neuron morphologies in depth in an interactive and immersive manner.
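The binary-coding step can be sketched with a random-hyperplane (LSH-style) coder standing in for the paper's learned coding method; the feature matrix below is random stand-in data, not real morphological features.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for learned deep features of 1,000 neuron morphologies.
features = rng.normal(size=(1000, 64))

# Compress feature vectors into short binary codes via random hyperplanes
# (the paper learns its coding; this is an LSH-style stand-in).
P = rng.normal(size=(64, 16))
codes = (features @ P > 0).astype(np.uint8)         # 16-bit codes

def retrieve(query_features, k=5):
    q = (query_features @ P > 0).astype(np.uint8)
    hamming = np.count_nonzero(codes != q, axis=1)  # distance to every code
    return np.argsort(hamming, kind="stable")[:k]

print("query 42 retrieves:", retrieve(features[42]))
```

An item's own code sits at Hamming distance 0, so querying with a stored item returns it among the nearest codes; only the cheap bit-wise comparison ever touches the whole database.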

  2. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, RBM is the deep learning algorithm used in this project, whose time performance is improved through an efficient parallel implementation with the OpenACC tool and the best possible optimizations, harnessing the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. RBM is a useful neural-network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of RBM are available using different models such as OpenMP and CUDA, but this project is the first attempt to apply the OpenACC model to RBM.
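The Contrastive Divergence training that the project parallelizes can be sketched serially in a few lines (CD-1 on a toy binary RBM; sizes, data and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid, lr = 6, 4, 0.1
W = 0.1 * rng.normal(size=(n_vis, n_hid))
a = np.zeros(n_vis)                    # visible biases
b = np.zeros(n_hid)                    # hidden biases

# Toy training data: two repeating binary patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

def recon_error():
    h = sigmoid(data @ W + b)
    return np.mean((data - sigmoid(h @ W.T + a)) ** 2)

before = recon_error()
for _ in range(100):
    for v0 in data:
        ph0 = sigmoid(v0 @ W + b)                  # positive phase
        h0 = (rng.random(n_hid) < ph0).astype(float)
        v1 = sigmoid(h0 @ W.T + a)                 # one Gibbs step back
        ph1 = sigmoid(v1 @ W + b)                  # negative phase
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        a += lr * (v0 - v1)
        b += lr * (ph0 - ph1)

after = recon_error()
print(f"reconstruction error: {before:.3f} -> {after:.3f}")
```

The positive and negative phases are dense matrix arithmetic, which is exactly the part that OpenACC directives (or OpenMP/CUDA) would offload to the GPU.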

  3. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.; Khan, Ayaz H.

    2017-01-01

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, RBM is the deep learning algorithm used in this project, whose time performance is improved through an efficient parallel implementation with the OpenACC tool and the best possible optimizations, harnessing the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. RBM is a useful neural-network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of RBM are available using different models such as OpenMP and CUDA, but this project is the first attempt to apply the OpenACC model to RBM.

  4. The Languages of Neurons: An Analysis of Coding Mechanisms by Which Neurons Communicate, Learn and Store Information

    Directory of Open Access Journals (Sweden)

    Morris H. Baslow

    2009-11-01

    Full Text Available In this paper evidence is provided that individual neurons possess language, and that the basic unit for communication consists of two neurons and their entire field of interacting dendritic and synaptic connections. While information processing in the brain is highly complex, each neuron uses a simple mechanism for transmitting information. This is in the form of temporal electrophysiological action potentials or spikes (S) operating on a millisecond timescale that, along with pauses (P) between spikes, constitute a two-letter “alphabet” that generates meaningful frequency-encoded signals or neuronal S/P “words” in a primary language. However, when a word from an afferent neuron enters the dendritic-synaptic-dendritic field between two neurons, it is translated into a new frequency-encoded word with the same meaning, but in a different spike-pause language, that is delivered to and understood by the efferent neuron. It is suggested that this unidirectional inter-neuronal language-based word translation step is of utmost importance to brain function in that it allows for variations in meaning to occur. Thus, structural or biochemical changes in dendrites or synapses can produce novel words in the second language that have changed meanings, allowing for a specific signaling experience, either external or internal, to modify the meaning of an original word (learning), and store the learned information of that experience (memory) in the form of an altered dendritic-synaptic-dendritic field.
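The two-letter S/P alphabet can be made concrete with a toy encoding (purely illustrative, not from the paper): if the meaning of a "word" is its spike frequency, two differently arranged words can carry the same meaning, which is what the translation step between two neuronal languages preserves.

```python
def frequency(word, bin_ms=1.0):
    """Spikes per second of a word in the two-letter S/P alphabet,
    where each letter occupies one time bin of bin_ms milliseconds."""
    return word.count("S") / (len(word) * bin_ms / 1000.0)

# Two different spike-pause words with the same frequency-encoded meaning,
# as produced by the translation step between two neuronal "languages".
afferent = "SPSPSPSPSP"      # 5 spikes in 10 ms
translated = "SSPPSPPSSP"    # same 5 spikes, differently placed

print("afferent  :", frequency(afferent), "Hz")
print("translated:", frequency(translated), "Hz")
```

A structural change in the dendritic-synaptic field would correspond here to a translation that alters the spike count, producing a word with a changed meaning, which is the paper's picture of learning.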

  5. Mirror Neurons from Associative Learning

    OpenAIRE

    Catmur, Caroline; Press, Clare; Heyes, Cecilia

    2016-01-01

    Mirror neurons fire both when executing actions and observing others perform similar actions. Their sensorimotor matching properties have generally been considered a genetic adaptation for social cognition; however, in the present chapter we argue that the evidence in favor of this account is not compelling. Instead we present evidence supporting an alternative account: that mirror neurons’ matching properties arise from associative learning during individual development. Notably, this proces...

  6. CAMKII activation is not required for maintenance of learning-induced enhancement of neuronal excitability.

    Directory of Open Access Journals (Sweden)

    Ori Liraz

    Full Text Available Pyramidal neurons in the piriform cortex from olfactory-discrimination trained rats show enhanced intrinsic neuronal excitability that lasts for several days after learning. Such enhanced intrinsic excitability is mediated by long-term reduction in the post-burst after-hyperpolarization (AHP) which is generated by repetitive spike firing. AHP reduction is due to decreased conductance of a calcium-dependent potassium current, the sI(AHP). We have previously shown that learning-induced AHP reduction is maintained by persistent protein kinase C (PKC) and extracellular regulated kinase (ERK) activation. However, the molecular machinery underlying this long-lasting modulation of intrinsic excitability is yet to be fully described. Here we examine whether CaMKII, which is known to be crucial in learning, memory and synaptic plasticity processes, is instrumental for the maintenance of learning-induced AHP reduction. KN93, which selectively blocks CaMKII autophosphorylation at Thr286, reduced the AHP in neurons from trained and control rats to the same extent. Consequently, the differences in AHP amplitude and neuronal adaptation between neurons from trained rats and controls remained. Accordingly, the level of activated CaMKII was similar in piriform cortex samples taken from trained and control rats. Our data show that although CaMKII modulates the amplitude of the AHP of pyramidal neurons in the piriform cortex, its activation is not required for maintaining learning-induced enhancement of neuronal excitability.

  7. Arc mRNA induction in striatal efferent neurons associated with response learning.

    Science.gov (United States)

    Daberkow, D P; Riedy, M D; Kesner, R P; Keefe, K A

    2007-07-01

    The dorsal striatum is involved in motor-response learning, but the extent to which distinct populations of striatal efferent neurons are differentially involved in such learning is unknown. Activity-regulated, cytoskeleton-associated (Arc) protein is an effector immediate-early gene implicated in synaptic plasticity. We examined arc mRNA expression in striatopallidal vs. striatonigral efferent neurons in dorsomedial and dorsolateral striatum of rats engaged in reversal learning on a T-maze motor-response task. Male Sprague-Dawley rats learned to turn right or left for 3 days. Half of the rats then underwent reversal training. The remaining rats were yoked to rats undergoing reversal training, such that they ran the same number of trials but ran them as continued-acquisition trials. Brains were removed and processed using double-label fluorescent in situ hybridization for arc and preproenkephalin (PPE) mRNA. In the reversal, but not the continued-acquisition, group there was a significant relation between the overall arc mRNA signal in dorsomedial striatum and the number of trials run, with rats reaching criterion in fewer trials having higher levels of arc mRNA expression. A similar relation was seen between the numbers of PPE(+) and PPE(-) neurons in dorsomedial striatum with cytoplasmic arc mRNA expression. Interestingly, in behaviourally activated animals significantly more PPE(-) neurons had cytoplasmic arc mRNA expression. These data suggest that Arc in both striatonigral and striatopallidal efferent neurons is involved in striatal synaptic plasticity mediating motor-response learning in the T-maze and that there is differential processing of arc mRNA in distinct subpopulations of striatal efferent neurons.

  8. Scaling up machine learning: parallel and distributed approaches

    National Research Council Canada - National Science Library

    Bekkerman, Ron; Bilenko, Mikhail; Langford, John

    2012-01-01

    .... Demand for parallelizing learning algorithms is highly task-specific: in some settings it is driven by the enormous dataset sizes, in others by model complexity or by real-time performance requirements...

  9. Synaptic potentiation onto habenula neurons in learned helplessness model of depression

    Science.gov (United States)

    Li, Bo; Piriz, Joaquin; Mirrione, Martine; Chung, ChiHye; Proulx, Christophe D.; Schulz, Daniela; Henn, Fritz; Malinow, Roberto

    2010-01-01

    The cellular basis of depressive disorders is poorly understood. Recent studies in monkeys indicate that neurons in the lateral habenula (LHb), a nucleus that mediates communication between forebrain and midbrain structures, can increase their activity when an animal fails to receive an expected positive reward or receives a stimulus that predicts aversive conditions (i.e. disappointment or anticipation of a negative outcome). LHb neurons project to and modulate dopamine-rich regions such as the ventral tegmental area (VTA) that control reward-seeking behavior and participate in depressive disorders. Here we show in two learned helplessness models of depression that excitatory synapses onto LHb neurons projecting to the VTA are potentiated. Synaptic potentiation correlates with an animal’s helplessness behavior and is due to an enhanced presynaptic release probability. Depleting transmitter release by repeated electrical stimulation of LHb afferents, using a protocol that can be effective on depressed patients, dramatically suppresses synaptic drive onto VTA-projecting LHb neurons in brain slices and can significantly reduce learned helplessness behavior in rats. Our results indicate that increased presynaptic action onto LHb neurons contributes to the rodent learned helplessness model of depression. PMID:21350486

  10. [Changes of the neuronal membrane excitability as cellular mechanisms of learning and memory].

    Science.gov (United States)

    Gaĭnutdinov, Kh L; Andrianov, V V; Gaĭnutdinova, T Kh

    2011-01-01

    This review presents literature data and the results of our own studies on the dynamics of the electrical characteristics of neurons, whose changes are involved both in the elaboration of learning and in the retention of long-term memory. The literature data and our results lead to the conclusion that long-term retention of behavioural reactions during learning is accompanied not only by changes in the efficiency of synaptic transmission but also by increased excitability of the command neurons of the defensive reflex. This means that learning involves long-term, metabolism-dependent changes in the membrane characteristics of certain elements of the neuronal network. These phenomena may be regarded as cellular (electrophysiological) correlates of long-term plastic modifications of behaviour. Analysis of the available results demonstrates the important role of the membrane characteristics of neurons (their excitability) and of the parameters of synaptic transmission not only in the initial stage of learning but also in long-term modifications of behaviour (long-term memory).

  11. Hebbian learning and predictive mirror neurons for actions, sensations and emotions

    OpenAIRE

    Keysers, C.; Gazzola, Valeria

    2014-01-01

    Spike-timing-dependent plasticity is considered the neurophysiological basis of Hebbian learning and has been shown to be sensitive to both contingency and contiguity between pre- and postsynaptic activity. Here, we will examine how applying this Hebbian learning rule to a system of interconnected neurons in the presence of direct or indirect re-afference (e.g. seeing/hearing one's own actions) predicts the emergence of mirror neurons with predictive properties. In this framework, we analyse ...

  12. Synaptic potentiation onto habenula neurons in the learned helplessness model of depression.

    Science.gov (United States)

    Li, Bo; Piriz, Joaquin; Mirrione, Martine; Chung, ChiHye; Proulx, Christophe D; Schulz, Daniela; Henn, Fritz; Malinow, Roberto

    2011-02-24

    The cellular basis of depressive disorders is poorly understood. Recent studies in monkeys indicate that neurons in the lateral habenula (LHb), a nucleus that mediates communication between forebrain and midbrain structures, can increase their activity when an animal fails to receive an expected positive reward or receives a stimulus that predicts aversive conditions (that is, disappointment or anticipation of a negative outcome). LHb neurons project to, and modulate, dopamine-rich regions, such as the ventral tegmental area (VTA), that control reward-seeking behaviour and participate in depressive disorders. Here we show that in two learned helplessness models of depression, excitatory synapses onto LHb neurons projecting to the VTA are potentiated. Synaptic potentiation correlates with an animal's helplessness behaviour and is due to an enhanced presynaptic release probability. Depleting transmitter release by repeated electrical stimulation of LHb afferents, using a protocol that can be effective for patients who are depressed, markedly suppresses synaptic drive onto VTA-projecting LHb neurons in brain slices and can significantly reduce learned helplessness behaviour in rats. Our results indicate that increased presynaptic action onto LHb neurons contributes to the rodent learned helplessness model of depression.

  13. Synaptic potentiation onto habenula neurons in the learned helplessness model of depression

    International Nuclear Information System (INIS)

Li, B.; Piriz, J.; Mirrione, M.; Chung, C.H.; Proulx, C.D.; Schulz, D.; Henn, F.; Malinow, R.

    2011-01-01

    The cellular basis of depressive disorders is poorly understood. Recent studies in monkeys indicate that neurons in the lateral habenula (LHb), a nucleus that mediates communication between forebrain and midbrain structures, can increase their activity when an animal fails to receive an expected positive reward or receives a stimulus that predicts aversive conditions (that is, disappointment or anticipation of a negative outcome). LHb neurons project to, and modulate, dopamine-rich regions, such as the ventral tegmental area (VTA), that control reward-seeking behaviour and participate in depressive disorders. Here we show that in two learned helplessness models of depression, excitatory synapses onto LHb neurons projecting to the VTA are potentiated. Synaptic potentiation correlates with an animal's helplessness behaviour and is due to an enhanced presynaptic release probability. Depleting transmitter release by repeated electrical stimulation of LHb afferents, using a protocol that can be effective for patients who are depressed, markedly suppresses synaptic drive onto VTA-projecting LHb neurons in brain slices and can significantly reduce learned helplessness behaviour in rats. Our results indicate that increased presynaptic action onto LHb neurons contributes to the rodent learned helplessness model of depression.

  14. A hierarchical model for structure learning based on the physiological characteristics of neurons

    Institute of Scientific and Technical Information of China (English)

    WEI Hui

    2007-01-01

Almost all applications of Artificial Neural Networks (ANNs) depend mainly on their memory ability. The characteristics of typical ANN models are fixed connections with evolved weights, globalized representations, and globalized optimizations, all based on a mathematical approach. This makes those models deficient in robustness, learning efficiency, capacity, anti-jamming between training sets, and correlativity of samples. In this paper, we attempt to address these problems by adopting the characteristics of biological neurons in morphology and signal processing. A hierarchical neural network was designed and realized to implement structure learning and representations based on connected structures. The basic characteristics of this model are localized and random connections, field limitations of neuron fan-in and fan-out, dynamic behavior of neurons, and samples represented through different sub-circuits of neurons specialized into different response patterns. At the end of this paper, some important aspects of error correction, capacity, learning efficiency, and soundness of structural representation are analyzed theoretically. This paper demonstrates the feasibility and advantages of structure learning and representation. This model can serve as a fundamental element of cognitive systems such as perception and associative memory. Keywords: structure learning, representation, associative memory, computational neuroscience

  15. Context Fear Learning Specifically Activates Distinct Populations of Neurons in Amygdala and Hypothalamus

    Science.gov (United States)

    Trogrlic, Lidia; Wilson, Yvette M.; Newman, Andrew G.; Murphy, Mark

    2011-01-01

The identity and distribution of neurons that are involved in any learning or memory event are not known. In previous studies, we identified a discrete population of neurons in the lateral amygdala that show learning-specific activation of a c-fos-regulated transgene following context fear conditioning. Here, we have extended these studies to…

  16. Learning Enhances Intrinsic Excitability in a Subset of Lateral Amygdala Neurons

    Science.gov (United States)

    Sehgal, Megha; Ehlers, Vanessa L.; Moyer, James R., Jr.

    2014-01-01

    Learning-induced modulation of neuronal intrinsic excitability is a metaplasticity mechanism that can impact the acquisition of new memories. Although the amygdala is important for emotional learning and other behaviors, including fear and anxiety, whether learning alters intrinsic excitability within the amygdala has received very little…

  17. Associative and sensorimotor learning for parenting involves mirror neurons under the influence of oxytocin

    OpenAIRE

    Ho, S. Shaun; MacDonald, Adam; Swain, James E.

    2014-01-01

    Mirror neuron–based associative learning may be understood according to associative learning theories, in addition to sensorimotor learning theories. This is important for a comprehensive understanding of the role of mirror neurons and related hormone modulators, such as oxytocin, in complex social interactions such as among parent–infant dyads and in examples of mirror neuron function that involve abnormal motor systems such as depression.

  18. Tissue Plasminogen Activator Induction in Purkinje Neurons After Cerebellar Motor Learning

    Science.gov (United States)

    Seeds, Nicholas W.; Williams, Brian L.; Bickford, Paula C.

    1995-12-01

    The cerebellar cortex is implicated in the learning of complex motor skills. This learning may require synaptic remodeling of Purkinje cell inputs. An extracellular serine protease, tissue plasminogen activator (tPA), is involved in remodeling various nonneural tissues and is associated with developing and regenerating neurons. In situ hybridization showed that expression of tPA messenger RNA was increased in the Purkinje neurons of rats within an hour of their being trained for a complex motor task. Antibody to tPA also showed the induction of tPA protein associated with cerebellar Purkinje cells. Thus, the induction of tPA during motor learning may play a role in activity-dependent synaptic plasticity.

  19. Programmed to learn? The ontogeny of mirror neurons

    NARCIS (Netherlands)

    Del Giudice, Marco; Manera, Valeria; Keysers, Christian

Mirror neurons are increasingly recognized as a crucial substrate for many developmental processes, including imitation and social learning. Although there has been considerable progress in describing their function and localization in the primate and adult human brain, we still know little about…

  20. Parallel and patterned optogenetic manipulation of neurons in the brain slice using a DMD-based projector.

    Science.gov (United States)

    Sakai, Seiichiro; Ueno, Kenichi; Ishizuka, Toru; Yawo, Hiromu

    2013-01-01

Optical manipulation technologies have greatly advanced the understanding of neuronal networks and their dysfunctions. To achieve patterned and parallel optical switching, we developed a microscopic illumination system using a commercial DMD-based projector and a software program. The spatiotemporal patterning of the system was evaluated using acute slices of the hippocampus. The neural activity was optically manipulated, positively by the combination of channelrhodopsin-2 (ChR2) and blue light, and negatively by the combination of archaerhodopsin-T (ArchT) and green light. It is suggested that our projector-managing optical system (PMOS) would effectively facilitate the optogenetic analyses of neurons and their circuits. Copyright © 2012 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  1. Complex population response of dorsal putamen neurons predicts the ability to learn.

    Science.gov (United States)

    Laquitaine, Steeve; Piron, Camille; Abellanas, David; Loewenstein, Yonatan; Boraud, Thomas

    2013-01-01

    Day-to-day variability in performance is a common experience. We investigated its neural correlate by studying learning behavior of monkeys in a two-alternative forced choice task, the two-armed bandit task. We found substantial session-to-session variability in the monkeys' learning behavior. Recording the activity of single dorsal putamen neurons we uncovered a dual function of this structure. It has been previously shown that a population of neurons in the DLP exhibits firing activity sensitive to the reward value of chosen actions. Here, we identify putative medium spiny neurons in the dorsal putamen that are cue-selective and whose activity builds up with learning. Remarkably we show that session-to-session changes in the size of this population and in the intensity with which this population encodes cue-selectivity is correlated with session-to-session changes in the ability to learn the task. Moreover, at the population level, dorsal putamen activity in the very beginning of the session is correlated with the performance at the end of the session, thus predicting whether the monkey will have a "good" or "bad" learning day. These results provide important insights on the neural basis of inter-temporal performance variability.

  2. Relationship between mathematical abstraction in learning parallel coordinates concept and performance in learning analytic geometry of pre-service mathematics teachers: an investigation

    Science.gov (United States)

    Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.

    2018-05-01

As a non-conventional mathematics concept, Parallel Coordinates has the potential to be learned by pre-service mathematics teachers, giving them experience in constructing richer schemes and performing abstraction processes. Unfortunately, research on this issue is still limited. This study addresses the research question: to what extent can the abstraction process of pre-service mathematics teachers in learning the concept of Parallel Coordinates indicate their performance in learning Analytic Geometry? This is a case study that is part of a larger study examining the mathematical abstraction of pre-service mathematics teachers in learning a non-conventional mathematics concept. Descriptive statistics are used to analyze the scores from three different tests: Cartesian Coordinates, Parallel Coordinates, and Analytic Geometry. The participants in this study were 45 pre-service mathematics teachers. The results show a linear association between the scores on Cartesian Coordinates and Parallel Coordinates. It was also found that higher levels of the abstraction process in learning Parallel Coordinates are linearly associated with higher student achievement in Analytic Geometry. These results show that the concept of Parallel Coordinates plays a significant role for pre-service mathematics teachers in learning Analytic Geometry.
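
    The linear association between test scores reported here is the kind of relationship typically quantified with a Pearson correlation coefficient. A minimal sketch (the scores below are invented for illustration; the study's actual data are not reproduced):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical test scores for five students.
cartesian = [60, 72, 75, 81, 90]
parallel_coords = [55, 70, 78, 80, 92]
r = pearson_r(cartesian, parallel_coords)
```

    A value of `r` near 1 indicates a strong positive linear association of the kind the abstract describes.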

  3. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    Science.gov (United States)

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.
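
    The compute-time scaling results described here are usually summarized as speedup and parallel efficiency relative to the single-processor baseline for a fixed problem size (strong scaling). A minimal sketch of those standard definitions (the timing numbers below are hypothetical, not the paper's measurements):

```python
def strong_scaling(timings):
    """Speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p for a
    fixed-size problem. `timings` maps processor count -> wall-clock
    seconds and must include the single-processor baseline."""
    t1 = timings[1]
    return {p: {"speedup": t1 / tp, "efficiency": t1 / (p * tp)}
            for p, tp in timings.items()}

# Hypothetical wall-clock times for a fixed-size neuronal network.
stats = strong_scaling({1: 1000.0, 16: 70.0, 256: 6.0})
```

    Efficiency below 1.0 at high processor counts reflects communication and load-imbalance overhead, the usual limit on how far such simulations scale.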

  4. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  5. Gamma neurons mediate dopaminergic input during aversive olfactory memory formation in Drosophila.

    Science.gov (United States)

    Qin, Hongtao; Cressy, Michael; Li, Wanhe; Coravos, Jonathan S; Izzi, Stephanie A; Dubnau, Joshua

    2012-04-10

    Mushroom body (MB)-dependent olfactory learning in Drosophila provides a powerful model to investigate memory mechanisms. MBs integrate olfactory conditioned stimulus (CS) inputs with neuromodulatory reinforcement (unconditioned stimuli, US), which for aversive learning is thought to rely on dopaminergic (DA) signaling to DopR, a D1-like dopamine receptor expressed in MBs. A wealth of evidence suggests the conclusion that parallel and independent signaling occurs downstream of DopR within two MB neuron cell types, with each supporting half of memory performance. For instance, expression of the Rutabaga (Rut) adenylyl cyclase in γ neurons is sufficient to restore normal learning to rut mutants, whereas expression of Neurofibromatosis 1 (NF1) in α/β neurons is sufficient to rescue NF1 mutants. DopR mutations are the only case where memory performance is fully eliminated, consistent with the hypothesis that DopR receives the US inputs for both γ and α/β lobe traces. We demonstrate, however, that DopR expression in γ neurons is sufficient to fully support short- and long-term memory. We argue that DA-mediated CS-US association is formed in γ neurons followed by communication between γ and α/β neurons to drive consolidation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. γ neurons mediate dopaminergic input during aversive olfactory memory formation in Drosophila

    Science.gov (United States)

    Qin, H.; Cressy, M.; Li, W.; Coravos, J.; Izzi, S.; Dubnau, J.

    2012-01-01

SUMMARY Mushroom body (MB) dependent olfactory learning in Drosophila provides a powerful model to investigate memory mechanisms. MBs integrate olfactory conditioned stimuli (CS) inputs with neuromodulatory reinforcement (unconditioned stimuli, US) [1, 2], which for aversive learning is thought to rely on dopaminergic (DA) signaling [3–6] to DopR, a D1-like dopamine receptor expressed in MB [7, 8]. A wealth of evidence suggests the conclusion that parallel and independent signaling occurs downstream of DopR within two MB neuron cell types, with each supporting half of memory performance. For instance, expression of the rutabaga adenylyl cyclase (rut) in γ neurons is sufficient to restore normal learning to rut mutants [9] whereas expression of Neurofibromatosis 1 (NF1) in α/β neurons is sufficient to rescue NF1 mutants [10, 11]. DopR mutations are the only case where memory performance is fully eliminated [7], consistent with the hypothesis that DopR receives the US inputs for both γ and α/β lobe traces. We demonstrate, however, that DopR expression in γ neurons is sufficient to fully support short (STM) and long-term memory (LTM). We argue that DA-mediated CS-US association is formed in γ neurons followed by communication between γ and α/β neurons to drive consolidation. PMID:22425153

  7. Examining Neuronal Connectivity and Its Role in Learning and Memory

    Science.gov (United States)

    Gala, Rohan

    Learning and long-term memory formation are accompanied with changes in the patterns and weights of synaptic connections in the underlying neuronal network. However, the fundamental rules that drive connectivity changes, and the precise structure-function relationships within neuronal networks remain elusive. Technological improvements over the last few decades have enabled the observation of large but specific subsets of neurons and their connections in unprecedented detail. Devising robust and automated computational methods is critical to distill information from ever-increasing volumes of raw experimental data. Moreover, statistical models and theoretical frameworks are required to interpret the data and assemble evidence into understanding of brain function. In this thesis, I first describe computational methods to reconstruct connectivity based on light microscopy imaging experiments. Next, I use these methods to quantify structural changes in connectivity based on in vivo time-lapse imaging experiments. Finally, I present a theoretical model of associative learning that can explain many stereotypical features of experimentally observed connectivity.

  8. Aversive learning shapes neuronal orientation tuning in human visual cortex.

    Science.gov (United States)

    McTeague, Lisa M; Gruss, L Forest; Keil, Andreas

    2015-07-28

    The responses of sensory cortical neurons are shaped by experience. As a result perceptual biases evolve, selectively facilitating the detection and identification of sensory events that are relevant for adaptive behaviour. Here we examine the involvement of human visual cortex in the formation of learned perceptual biases. We use classical aversive conditioning to associate one out of a series of oriented gratings with a noxious sound stimulus. After as few as two grating-sound pairings, visual cortical responses to the sound-paired grating show selective amplification. Furthermore, as learning progresses, responses to the orientations with greatest similarity to the sound-paired grating are increasingly suppressed, suggesting inhibitory interactions between orientation-selective neuronal populations. Changes in cortical connectivity between occipital and fronto-temporal regions mirror the changes in visuo-cortical response amplitudes. These findings suggest that short-term behaviourally driven retuning of human visual cortical neurons involves distal top-down projections as well as local inhibitory interactions.

  9. 'Re-zoning' proximal development in a parallel e-learning course ...

    African Journals Online (AJOL)

'Re-zoning' proximal development in a parallel e-learning course. Vol 22, No 4 (2002). ... This twinning course was introduced to expand learning opportunities in what we ... face-to-face curriculum with less scheduled teaching time than previously.

  10. All-memristive neuromorphic computing with level-tuned neurons

    Science.gov (United States)

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-01

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from vast amount of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture along with the homogenous neuro-synaptic dynamics implemented with nanoscale phase-change memristors represent a significant step towards the development of ultrahigh-density neuromorphic co-processors.
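
    The task of detecting temporal correlations in parallel input streams mentioned above can be illustrated with a toy integrate-and-fire neuron whose synapses are strengthened when active at an output spike: synapses driven by a shared (correlated) source then win out over independent noise inputs. This is a loose sketch of the task only, not the memristive implementation, and all parameter values are invented:

```python
import random

def detect_correlated_inputs(n_inputs=10, n_correlated=4, steps=2000, seed=1):
    """Toy correlation detector: a leaky integrate-and-fire neuron with
    simple plastic synapses. The first `n_correlated` inputs fire together
    (driven by a common event); the rest fire as independent noise."""
    rng = random.Random(seed)
    w = [0.5] * n_inputs
    v, v_th, leak = 0.0, 2.0, 0.5
    for _ in range(steps):
        common = rng.random() < 0.2  # shared event drives correlated inputs
        x = [1 if (i < n_correlated and common) or rng.random() < 0.05 else 0
             for i in range(n_inputs)]
        v = leak * v + sum(wi * xi for wi, xi in zip(w, x))
        if v >= v_th:  # output spike: potentiate active synapses, depress rest
            w = [min(1.0, wi + 0.02) if xi else max(0.0, wi - 0.01)
                 for wi, xi in zip(w, x)]
            v = 0.0
    return w

weights = detect_correlated_inputs()
```

    After training, the synapses belonging to the correlated group carry much larger weights than the noise synapses, which is the sense in which the neuron has "detected" the correlation.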

  11. All-memristive neuromorphic computing with level-tuned neurons.

    Science.gov (United States)

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-02

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from vast amount of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture along with the homogenous neuro-synaptic dynamics implemented with nanoscale phase-change memristors represent a significant step towards the development of ultrahigh-density neuromorphic co-processors.

  12. A parallel ILP algorithm that incorporates incremental batch learning

    OpenAIRE

Nuno Fonseca; Rui Camacho; Fernando Silva

    2003-01-01

In this paper we tackle the problems of efficiency and scalability faced by Inductive Logic Programming (ILP) systems. We propose the use of parallelism to improve efficiency and the use of incremental batch learning to address the scalability problem. We describe a novel parallel algorithm that incorporates into ILP the method of incremental batch learning. The theoretical complexity of the algorithm indicates that a linear speedup can be achieved.

  13. Expressions of multiple neuronal dynamics during sensorimotor learning in the motor cortex of behaving monkeys.

    Directory of Open Access Journals (Sweden)

    Yael Mandelblat-Cerf

Full Text Available Previous studies support the notion that sensorimotor learning involves multiple processes. We investigated the neuronal basis of these processes by recording single-unit activity in motor cortex of non-human primates (Macaca fascicularis), during adaptation to force-field perturbations. Perturbed trials (reaching to one direction) were practiced along with unperturbed trials (to other directions). The number of perturbed trials relative to the unperturbed ones was either low or high, in two separate practice schedules. Unsurprisingly, practice under high-rate resulted in faster learning with more pronounced generalization, as compared to the low-rate practice. However, generalization and retention of behavioral and neuronal effects following practice in high-rate were less stable; namely, the faster learning was forgotten faster. We examined two subgroups of cells and showed that, during learning, the changes in firing-rate in one subgroup depended on the number of practiced trials, but not on time. In contrast, changes in the second subgroup depended on time and practice; the changes in firing-rate, following the same number of perturbed trials, were larger under high-rate than low-rate learning. After learning, the neuronal changes gradually decayed. In the first subgroup, the decay pace did not depend on the practice rate, whereas in the second subgroup, the decay pace was greater following high-rate practice. This group shows neuronal representation that mirrors the behavioral performance, evolving faster but also decaying faster at learning under high-rate, as compared to low-rate. The results suggest that the stability of a new learned skill and its neuronal representation are affected by the acquisition schedule.

  14. Module Six: Parallel Circuits; Basic Electricity and Electronics Individualized Learning System.

    Science.gov (United States)

    Bureau of Naval Personnel, Washington, DC.

    In this module the student will learn the rules that govern the characteristics of parallel circuits; the relationships between voltage, current, resistance and power; and the results of common troubles in parallel circuits. The module is divided into four lessons: rules of voltage and current, rules for resistance and power, variational analysis,…

  15. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    Science.gov (United States)

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.
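
    The race dynamic described here, in which the fastest neuron to spike wins and hides its learnt pattern from its neighbors, can be caricatured with a delay-matching toy: each neuron shifts incoming spikes by its own dendritic delays and fires once enough of them coincide. This is a simplified illustration only, not SKAN's actual rule-set; all names and numbers below are invented:

```python
def race_to_spike(arrival_times, delays, threshold=3):
    """Toy race: each neuron shifts input spike times by its learned
    dendritic delays; it fires when `threshold` shifted spikes have
    arrived at the soma. The neuron whose delays best match the input
    pattern reaches threshold first and wins.
    `arrival_times[i]` is the spike time on input channel i."""
    winner, best_time = None, float("inf")
    for name, d in delays.items():
        # times at which each delayed input reaches the soma
        soma_times = sorted(t + di for t, di in zip(arrival_times, d))
        fire_time = soma_times[threshold - 1]
        if fire_time < best_time:
            winner, best_time = name, fire_time
    return winner, best_time

# Two neurons with different delay kernels; the pattern matches neuron "A".
pattern = [3, 2, 1]                # input spike times
delays = {"A": [0, 1, 2],          # aligns all three spikes at t = 3
          "B": [2, 2, 2]}          # leaves the spikes spread out
winner, t = race_to_spike(pattern, delays)
```

    In SKAN the delays themselves adapt online from simple additive updates; here they are fixed to keep the race mechanism visible.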

  16. Code-specific learning rules improve action selection by populations of spiking neurons.

    Science.gov (United States)

    Friedrich, Johannes; Urbanczik, Robert; Senn, Walter

    2014-08-01

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike count and spike latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning both for the discrete classification and the continuous regression tasks. The suggested learning rules also speed up with increasing population size as opposed to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation as opposed to the classical weight- or node-perturbation as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning as compared to exploration in the neuron or weight space.
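
    A toy version of spike/no-spike population decision making with a reward-gated update, much simpler than the code-specific rules studied here: each neuron votes on a binary stimulus, the majority vote is the action, and a global reward modulates every neuron's update. Because the task is two-alternative, a negative reward identifies the other action as correct, which reduces the update to a perceptron-style step. All names and parameters are invented:

```python
import random

def train_population(n_neurons=15, trials=300, lr=0.2, seed=3):
    """Train a population of binary (spike/no-spike) neurons so that the
    majority vote reproduces the stimulus. Reward is +1 for a correct
    population action and -1 otherwise."""
    rng = random.Random(seed)
    w = [rng.gauss(0, 0.1) for _ in range(n_neurons)]  # stimulus weight
    b = [rng.gauss(0, 0.1) for _ in range(n_neurons)]  # bias

    def act(s):
        votes = [1 if w[i] * s + b[i] > 0 else 0 for i in range(n_neurons)]
        return 1 if 2 * sum(votes) > n_neurons else 0

    for _ in range(trials):
        s = rng.randint(0, 1)
        action = act(s)
        reward = 1 if action == s else -1
        # Two alternatives: a punished action reveals the rewarded one.
        target = action if reward > 0 else 1 - action
        for i in range(n_neurons):  # nudge each neuron toward the target
            vote = 1 if w[i] * s + b[i] > 0 else 0
            w[i] += lr * (target - vote) * s
            b[i] += lr * (target - vote)
    return act

policy = train_population()
```

    With more neurons the majority vote averages out individual errors earlier in training, which is one intuition for why population-level rules can learn faster than a single neuron.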

  17. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    Science.gov (United States)

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.
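
    The data-parallel pattern described, splitting training data across workers that each compute partial results which are then aggregated, can be sketched with a map step computing per-shard gradients and a reduce step summing them. A minimal single-process sketch of the pattern (a one-weight linear model stands in for the neural network; all names are invented):

```python
from functools import reduce

def map_gradient(shard, w):
    """Map step: one worker's gradient of squared error for the linear
    model y = w*x on its own data shard, plus the shard size."""
    g = sum(2 * (w * x - y) * x for x, y in shard)
    return (g, len(shard))

def reduce_gradients(a, b):
    """Reduce step: sum partial gradients and example counts."""
    return (a[0] + b[0], a[1] + b[1])

def parallel_gd_step(shards, w, lr):
    partials = [map_gradient(s, w) for s in shards]  # would run on workers
    g_total, n_total = reduce(reduce_gradients, partials)
    return w - lr * g_total / n_total

# Data y = 3x split across three "workers"; gradient descent recovers w = 3.
data = [(x, 3.0 * x) for x in range(1, 10)]
shards = [data[0::3], data[1::3], data[2::3]]
w = 0.0
for _ in range(200):
    w = parallel_gd_step(shards, w, lr=0.02)
```

    Because the gradient of the average loss is the average of per-shard gradients, the parallel step is mathematically identical to the serial one; the speedup comes purely from distributing the map step.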

  18. Focal adhesion kinase regulates neuronal growth, synaptic plasticity and hippocampus-dependent spatial learning and memory.

    Science.gov (United States)

    Monje, Francisco J; Kim, Eun-Jung; Pollak, Daniela D; Cabatic, Maureen; Li, Lin; Baston, Arthur; Lubec, Gert

    2012-01-01

    The focal adhesion kinase (FAK) is a non-receptor tyrosine kinase abundantly expressed in the mammalian brain and highly enriched in neuronal growth cones. Inhibitory and facilitatory activities of FAK on neuronal growth have been reported and its role in neuritic outgrowth remains controversial. Unlike other tyrosine kinases, such as the neurotrophin receptors regulating neuronal growth and plasticity, the relevance of FAK for learning and memory in vivo has not been clearly defined yet. A comprehensive study aimed at determining the role of FAK in neuronal growth, neurotransmitter release and synaptic plasticity in hippocampal neurons and in hippocampus-dependent learning and memory was therefore undertaken using the mouse model. Gain- and loss-of-function experiments indicated that FAK is a critical regulator of hippocampal cell morphology. FAK mediated neurotrophin-induced neuritic outgrowth and FAK inhibition affected both miniature excitatory postsynaptic potentials and activity-dependent hippocampal long-term potentiation prompting us to explore the possible role of FAK in spatial learning and memory in vivo. Our data indicate that FAK has a growth-promoting effect, is importantly involved in the regulation of the synaptic function and mediates in vivo hippocampus-dependent spatial learning and memory. Copyright © 2011 S. Karger AG, Basel.

  19. Bidirectional Modulation of Intrinsic Excitability in Rat Prelimbic Cortex Neuronal Ensembles and Non-Ensembles after Operant Learning.

    Science.gov (United States)

    Whitaker, Leslie R; Warren, Brandon L; Venniro, Marco; Harte, Tyler C; McPherson, Kylie B; Beidel, Jennifer; Bossert, Jennifer M; Shaham, Yavin; Bonci, Antonello; Hope, Bruce T

    2017-09-06

    Learned associations between environmental stimuli and rewards drive goal-directed learning and motivated behavior. These memories are thought to be encoded by alterations within specific patterns of sparsely distributed neurons called neuronal ensembles that are activated selectively by reward-predictive stimuli. Here, we use the Fos promoter to identify strongly activated neuronal ensembles in rat prelimbic cortex (PLC) and assess altered intrinsic excitability after 10 d of operant food self-administration training (1 h/d). First, we used the Daun02 inactivation procedure in male FosLacZ-transgenic rats to ablate selectively Fos-expressing PLC neurons that were active during operant food self-administration. Selective ablation of these neurons decreased food seeking. We then used male FosGFP-transgenic rats to assess selective alterations of intrinsic excitability in Fos-expressing neuronal ensembles (FosGFP+) that were activated during food self-administration and compared these with alterations in less activated non-ensemble neurons (FosGFP-). Using whole-cell recordings of layer V pyramidal neurons in an ex vivo brain slice preparation, we found that operant self-administration increased excitability of FosGFP+ neurons and decreased excitability of FosGFP- neurons. Increased excitability of FosGFP+ neurons was driven by increased steady-state input resistance. Decreased excitability of FosGFP- neurons was driven by increased contribution of small-conductance calcium-activated potassium (SK) channels. Injections of the specific SK channel antagonist apamin into PLC increased Fos expression but had no effect on food seeking. Overall, operant learning increased intrinsic excitability of PLC Fos-expressing neuronal ensembles that play a role in food seeking but decreased intrinsic excitability of Fos- non-ensembles. SIGNIFICANCE STATEMENT Prefrontal cortex activity plays a critical role in operant learning, but the underlying cellular mechanisms are

  20. On Scalable Deep Learning and Parallelizing Gradient Descent

    CERN Document Server

    AUTHOR|(CDS)2129036; Möckel, Rico; Baranowski, Zbigniew; Canali, Luca

    Speeding up gradient-based methods has been a subject of interest over the past years, with many practical applications, especially with respect to Deep Learning. Despite the fact that many optimizations have been done on a hardware level, the convergence rate of very large models remains problematic. Therefore, data-parallel methods next to mini-batch parallelism have been suggested to further decrease the training time of parameterized models using gradient-based methods. Nevertheless, asynchronous optimization was considered too unstable for practical purposes due to a lack of understanding of the underlying mechanisms. Recently, a theoretical contribution has been made which defines asynchronous optimization in terms of (implicit) momentum due to the presence of a queuing model of gradients based on past parameterizations. This thesis mainly builds upon this work to construct a better understanding of why asynchronous optimization shows proportionally more divergent behavior when the number of parallel worker...
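
    The queue-of-stale-gradients view described above can be illustrated with a toy simulation (an illustrative sketch, not code from the thesis; the quadratic objective and all names are assumptions): each applied update uses a gradient computed on parameters read several steps earlier, which acts like implicit momentum and, for large delays, destabilizes training.

```python
import numpy as np

def grad(w):
    # gradient of the toy quadratic objective f(w) = 0.5 * ||w||^2
    return w

def async_sgd(w0, lr=0.1, steps=50, staleness=4):
    """Simulate asynchronous SGD: every applied gradient was computed on the
    parameters from `staleness` steps ago (a queue of stale parameter reads)."""
    w = np.array(w0, dtype=float)
    history = [w.copy()]
    for _ in range(steps):
        stale_w = history[max(0, len(history) - 1 - staleness)]
        w = w - lr * grad(stale_w)
        history.append(w.copy())
    return w
```

    With staleness 0 this reduces to plain gradient descent; increasing the delay makes the iterates overshoot and oscillate, matching the momentum interpretation.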

  1. Roles for Drosophila Mushroom Body Neurons in Olfactory Learning and Memory

    Science.gov (United States)

    Zong, Lin; Tanaka, Nobuaki K.; Ito, Kei; Davis, Ronald L.; Akalal, David-Benjamin G.; Wilson, Curtis F.

    2006-01-01

    Olfactory learning assays in Drosophila have revealed that distinct brain structures known as mushroom bodies (MBs) are critical for the associative learning and memory of olfactory stimuli. However, the precise roles of the different neurons comprising the MBs are still under debate. The confusion surrounding the roles of the different neurons…

  2. PHYLOGENETIC ANALYSIS OF LEARNING-RELATED NEUROMODULATION IN MOLLUSCAN MECHANOSENSORY NEURONS.

    Science.gov (United States)

    Wright, William G; Kirschman, David; Rozen, Danny; Maynard, Barbara

    1996-12-01

    In spite of significant advances in our understanding of mechanisms of learning and memory in a variety of organisms, little is known about how such mechanisms evolve. Even mechanisms of simple forms of learning, such as habituation and sensitization, have not been studied phylogenetically. Here we begin an evolutionary analysis of learning-related neuromodulation in species related to the well-studied opisthobranch gastropod, Aplysia californica. In Aplysia, increased spike duration and excitability in mechanosensory neurons contribute to several forms of learning-related changes to defensive withdrawal reflexes. The modulatory transmitter serotonin (5-hydroxytryptamine, or 5-HT) is thought to play a critical role in producing these firing property changes. In the present study, we tested mechanosensory homologs of the tail-withdrawal reflex in species related to Aplysia for 5-HT-mediated increases in spike duration and excitability. Criteria used to identify homologous tail-sensory neurons included position, relative size, resting electrical properties, expression of a sensory neuron-specific protein, neuroanatomy, and receptive field. The four ingroup species studied (Aplysia californica, Dolabella auricularia, Bursatella leachii, and Dolabrifera dolabrifera) belong to two clades (two species each) within the family Aplysiidae. In the first clade (Aplysia/Dolabella), we found that the tail-sensory neurons of A. californica and tail-sensory homologs of a closely related species, D. auricularia, responded to bath-applied serotonin in essentially similar fashion: significant increases in spike duration as well as excitability. In the other clade (Dolabrifera/Bursatella), more distantly related to Aplysia, one species (B. leachii) showed spike broadening and increased excitability. However, the other species (D. dolabrifera) showed neither spike broadening nor increased excitability. The firing properties of tail-sensory homologs of D. dolabrifera were insensitive

  3. Neuronal Rac1 is required for learning-evoked neurogenesis

    DEFF Research Database (Denmark)

    Haditsch, Ursula; Anderson, Matthew P; Freewoman, Julia

    2013-01-01

    Hippocampus-dependent learning and memory relies on synaptic plasticity as well as network adaptations provided by the addition of adult-born neurons. We have previously shown that activity-induced intracellular signaling through the Rho family small GTPase Rac1 is necessary in forebrain projection...

  4. Parallel Alterations of Functional Connectivity during Execution and Imagination after Motor Imagery Learning

    Science.gov (United States)

    Zhang, Rushao; Hui, Mingqi; Long, Zhiying; Zhao, Xiaojie; Yao, Li

    2012-01-01

    Background Neural substrates underlying motor learning have been widely investigated with neuroimaging technologies. Investigations have illustrated the critical regions of motor learning and further revealed parallel alterations of functional activation during imagination and execution after learning. However, little is known about the functional connectivity associated with motor learning, especially motor imagery learning, although benefits from functional connectivity analysis attract more attention to the related explorations. We explored whether motor imagery (MI) and motor execution (ME) shared parallel alterations of functional connectivity after MI learning. Methodology/Principal Findings Graph theory analysis, which is widely used in functional connectivity exploration, was performed on the functional magnetic resonance imaging (fMRI) data of MI and ME tasks before and after 14 days of consecutive MI learning. The control group had no learning. Two measures, connectivity degree and interregional connectivity, were calculated and further assessed at a statistical level. Two interesting results were obtained: (1) The connectivity degree of the right posterior parietal lobe decreased in both MI and ME tasks after MI learning in the experimental group; (2) The parallel alterations of interregional connectivity related to the right posterior parietal lobe occurred in the supplementary motor area for both tasks. Conclusions/Significance These computational results may provide the following insights: (1) The establishment of motor schema through MI learning may induce the significant decrease of connectivity degree in the posterior parietal lobe; (2) The decreased interregional connectivity between the supplementary motor area and the right posterior parietal lobe in post-test implicates the dissociation between motor learning and task performing. These findings and explanations further revealed the neural substrates underpinning MI learning and supported that

  5. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2015-01-01

    Full Text Available Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of the classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated on an experimental MapReduce computer cluster in terms of classification accuracy and computational efficiency.
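
    The data-parallel scheme such a MapReduce parallelization implies can be sketched as follows (illustrative only; the function names and the toy linear model are assumptions, not the paper's implementation): map tasks compute local gradients on data shards, and a reduce task averages them, weighted by shard size, into a single descent step.

```python
import numpy as np

def map_gradient(w, shard):
    """Map task: local gradient of the mean squared error on one data shard."""
    X, y = shard
    residual = X @ w - y
    return X.T @ residual / len(y), len(y)

def reduce_gradients(partials):
    """Reduce task: average of per-shard gradients, weighted by shard size."""
    total = sum(n for _, n in partials)
    return sum(g * n for g, n in partials) / total

def parallel_sgd_step(w, shards, lr=0.1):
    # On a cluster, the map calls below would run in parallel on worker nodes.
    partials = [map_gradient(w, s) for s in shards]
    return w - lr * reduce_gradients(partials)
```

    Because the average is weighted by shard size, one parallel step is mathematically identical to a full-batch step on the undivided data set.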

  6. Neuronal mechanisms of motor learning and motor memory consolidation in healthy old adults.

    Science.gov (United States)

    Berghuis, K M M; Veldman, M P; Solnik, S; Koch, G; Zijdewind, I; Hortobágyi, T

    2015-06-01

    It is controversial whether or not old adults are capable of learning new motor skills and of consolidating the performance gains into motor memory in the offline period. The underlying neuronal mechanisms are equally unclear. We determined the magnitude of motor learning and motor memory consolidation in healthy old adults and examined if specific metrics of neuronal excitability measured by magnetic brain stimulation mediate the practice and retention effects. Eleven healthy old adults practiced a wrist extension-flexion visuomotor skill for 20 min (MP, 71.3 years), while a second group only watched the templates without movements (attentional control, AC, n = 11, 70.5 years). There was 40% motor learning in MP but none in AC (significant interaction), indicating that healthy old adults can learn a new motor skill and consolidate the learned skill into motor memory, processes that are most likely mediated by disinhibitory mechanisms. These results are relevant for the increasing number of old adults who need to learn and relearn movements during motor rehabilitation.

  7. Span: spike pattern association neuron for learning spatio-temporal spike patterns.

    Science.gov (United States)

    Mohemmed, Ammar; Schliebs, Stefan; Matsuda, Satoshi; Kasabov, Nikola

    2012-08-01

    Spiking Neural Networks (SNN) were shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN - a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion allowing the processing of spatio-temporal information encoded in the precise timing of spikes. The idea of the proposed algorithm is to transform spike trains during the learning phase into analog signals so that common mathematical operations can be performed on them. Using this conversion, it is possible to apply the well-known Widrow-Hoff rule directly to the transformed spike trains in order to adjust the synaptic weights and to achieve a desired input/output spike behavior of the neuron. In the presented experimental analysis, the proposed learning algorithm is evaluated regarding its learning capabilities, its memory capacity, its robustness to noisy stimuli and its classification performance. Differences and similarities of SPAN regarding two related algorithms, ReSuMe and Chronotron, are discussed.
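
    The transformation described in this abstract can be sketched in a few lines (an illustrative reconstruction, not the authors' code; the kernel shape, time constants and function names are assumptions): spike trains are convolved with an alpha kernel into analog traces, and the Widrow-Hoff (delta) rule is applied to the traces to adjust the synaptic weights.

```python
import numpy as np

def alpha_kernel(t, tau=5.0):
    """Alpha-shaped kernel that turns discrete spikes into analog traces."""
    return (t / tau) * np.exp(1 - t / tau) * (t >= 0)

def spikes_to_trace(spike_times, t_grid, tau=5.0):
    """Convolve a spike train (a list of spike times) with the alpha kernel."""
    trace = np.zeros_like(t_grid)
    for ts in spike_times:
        trace += alpha_kernel(t_grid - ts, tau)
    return trace

def span_update(weights, input_spikes, desired_spikes, actual_spikes,
                t_grid, lr=0.01, tau=5.0):
    """SPAN-style weight update: Widrow-Hoff rule on the convolved trains,
    dw_i = lr * integral x_i(t) * (y_desired(t) - y_actual(t)) dt."""
    err = (spikes_to_trace(desired_spikes, t_grid, tau)
           - spikes_to_trace(actual_spikes, t_grid, tau))
    dt = t_grid[1] - t_grid[0]
    for i, spikes in enumerate(input_spikes):
        x = spikes_to_trace(spikes, t_grid, tau)
        weights[i] += lr * np.sum(x * err) * dt
    return weights
```

    An input synapse whose trace overlaps a desired (but missing) output spike is potentiated; one that overlaps a spurious output spike is depressed.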

  8. A smart-pixel holographic competitive learning network

    Science.gov (United States)

    Slagle, Timothy Michael

    Neural networks are adaptive classifiers which modify their decision boundaries based on feedback from externally- or internally-generated error signals. Optics is an attractive technology for neural network implementation because it offers the possibility of parallel, nearly instantaneous computation of the weighted neuron inputs by the propagation of light through the optical system. Using current optical device technology, system performance levels of 3 × 10^11 connection updates per second can be achieved. This thesis presents an architecture for an optical competitive learning network which offers advantages over previous optical implementations, including smart-pixel-based optical neurons, phase-conjugate self-alignment of a single neuron plane, and high-density, parallel-access weight storage, interconnection, and learning in a volume hologram. The competitive learning algorithm with modifications for optical implementation is described, and algorithm simulations are performed for an example problem. The optical competitive learning architecture is then introduced. The optical system is simulated using the "beamprop" algorithm at the level of light propagating through the system components, and results showing competitive learning operation in agreement with the algorithm simulations are presented. The optical competitive learning requires a non-linear, non-local "winner-take-all" (WTA) neuron function. Custom-designed smart-pixel WTA neuron arrays were fabricated using CMOS VLSI/liquid crystal technology. Results of laboratory tests of the WTA arrays' switching characteristics, time response, and uniformity are then presented. The system uses a phase-conjugate mirror to write the self-aligning interconnection weight holograms, and energy gain is required from the reflection to minimize erasure of the existing weights. An experimental system for characterizing the PCM response is described. Useful gains of 20 were obtained with a polarization
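
    Independent of the optical hardware, the underlying competitive-learning update with a winner-take-all neuron can be sketched in a few lines (an illustrative sketch in Python, not from the thesis; names and the learning rate are assumptions): the neuron whose weight vector lies closest to the input wins and moves toward that input, while all other weight vectors stay fixed.

```python
import numpy as np

def competitive_step(weights, x, lr=0.1):
    """One winner-take-all competitive-learning update: the closest
    weight vector (row of `weights`) moves a fraction `lr` toward x."""
    distances = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(distances))
    weights[winner] += lr * (x - weights[winner])  # only the winner learns
    return winner, weights
```

    Repeating this step over a stream of inputs makes the weight vectors converge to cluster centers of the input distribution.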

  9. Learning-related brain hemispheric dominance in sleeping songbirds.

    Science.gov (United States)

    Moorman, Sanne; Gobes, Sharon M H; van de Kamp, Ferdinand C; Zandbergen, Matthijs A; Bolhuis, Johan J

    2015-03-12

    There are striking behavioural and neural parallels between the acquisition of speech in humans and song learning in songbirds. In humans, language-related brain activation is mostly lateralised to the left hemisphere. During language acquisition in humans, brain hemispheric lateralisation develops as language proficiency increases. Sleep is important for the formation of long-term memory, in humans as well as in other animals, including songbirds. Here, we measured neuronal activation (as the expression pattern of the immediate early gene ZENK) during sleep in juvenile zebra finch males that were still learning their songs from a tutor. We found that during sleep, there was learning-dependent lateralisation of spontaneous neuronal activation in the caudomedial nidopallium (NCM), a secondary auditory brain region that is involved in tutor song memory, while there was right hemisphere dominance of neuronal activation in HVC (used as a proper name), a premotor nucleus that is involved in song production and sensorimotor learning. Specifically, in the NCM, birds that imitated their tutors well were left dominant, while poor imitators were right dominant, similar to language-proficiency related lateralisation in humans. Given the avian-human parallels, lateralised neural activation during sleep may also be important for speech and language acquisition in human infants.

  10. Learning-related brain hemispheric dominance in sleeping songbirds

    Science.gov (United States)

    Moorman, Sanne; Gobes, Sharon M. H.; van de Kamp, Ferdinand C.; Zandbergen, Matthijs A.; Bolhuis, Johan J.

    2015-01-01

    There are striking behavioural and neural parallels between the acquisition of speech in humans and song learning in songbirds. In humans, language-related brain activation is mostly lateralised to the left hemisphere. During language acquisition in humans, brain hemispheric lateralisation develops as language proficiency increases. Sleep is important for the formation of long-term memory, in humans as well as in other animals, including songbirds. Here, we measured neuronal activation (as the expression pattern of the immediate early gene ZENK) during sleep in juvenile zebra finch males that were still learning their songs from a tutor. We found that during sleep, there was learning-dependent lateralisation of spontaneous neuronal activation in the caudomedial nidopallium (NCM), a secondary auditory brain region that is involved in tutor song memory, while there was right hemisphere dominance of neuronal activation in HVC (used as a proper name), a premotor nucleus that is involved in song production and sensorimotor learning. Specifically, in the NCM, birds that imitated their tutors well were left dominant, while poor imitators were right dominant, similar to language-proficiency related lateralisation in humans. Given the avian-human parallels, lateralised neural activation during sleep may also be important for speech and language acquisition in human infants. PMID:25761654

  11. Neurons with two sites of synaptic integration learn invariant representations.

    Science.gov (United States)

    Körding, K P; König, P

    2001-12-01

    Neurons in mammalian cerebral cortex combine specific responses with respect to some stimulus features with invariant responses to other stimulus features. For example, in primary visual cortex, complex cells code for orientation of a contour but ignore its position to a certain degree. In higher areas, such as the inferotemporal cortex, translation-invariant, rotation-invariant, and even viewpoint-invariant responses can be observed. Such properties are of obvious interest to artificial systems performing tasks like pattern recognition. It remains to be resolved how such response properties develop in biological systems. Here we present an unsupervised learning rule that addresses this problem. It is based on a neuron model with two sites of synaptic integration, allowing qualitatively different effects of input to basal and apical dendritic trees, respectively. Without supervision, the system learns to extract invariance properties using temporal or spatial continuity of stimuli. Furthermore, top-down information can be smoothly integrated in the same framework. Thus, this model lends a physiological implementation to approaches of unsupervised learning of invariant-response properties.
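
    A much-simplified flavour of unsupervised invariance learning from temporal continuity is the classic trace rule, sketched below in the spirit of Földiák's trace learning rather than the two-site neuron model itself (all names and constants are assumptions): a low-pass-filtered activity trace gates a Hebbian update with an Oja-like decay, so inputs that follow each other in time come to drive similar responses.

```python
import numpy as np

def trace_rule_step(w, x, y_trace, lr=0.05, eta=0.8):
    """One trace-rule update: the neuron's activity trace (a temporal
    low-pass of its response) gates Hebbian growth, with a decay term
    that keeps the weight vector bounded."""
    y = float(w @ x)                         # instantaneous response
    y_trace = eta * y_trace + (1 - eta) * y  # temporal trace of activity
    w += lr * y_trace * (x - y_trace * w)    # trace-gated Hebb + decay
    return w, y_trace
```

    Presenting temporally adjacent views of the same object strengthens one shared response, which is the essence of learning invariance from temporal continuity.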

  12. Mirror Neurons, Embodied Cognitive Agents and Imitation Learning

    Czech Academy of Sciences Publication Activity Database

    Wiedermann, Jiří

    2003-01-01

    Vol. 22, No. 6 (2003), pp. 545-559 ISSN 1335-9150 R&D Projects: GA ČR GA201/02/1456 Institutional research plan: CEZ:AV0Z1030915 Keywords: complete agents * mirror neurons * embodied cognition * imitation learning * sensorimotor control Subject RIV: BA - General Mathematics Impact factor: 0.254, year: 2003 http://www.cai.sk/ojs/index.php/cai/article/view/468

  13. VTA GABA neurons modulate specific learning behaviours through the control of dopamine and cholinergic systems

    Directory of Open Access Journals (Sweden)

    Meaghan C Creed

    2014-01-01

    Full Text Available The mesolimbic reward system is primarily comprised of the ventral tegmental area (VTA) and the nucleus accumbens (NAc), as well as their afferent and efferent connections. This circuitry is essential for learning about stimuli associated with motivationally-relevant outcomes. Moreover, addictive drugs affect and remodel this system, which may underlie their addictive properties. In addition to dopamine (DA) neurons, the VTA also contains approximately 30% γ-aminobutyric acid (GABA) neurons. The task of signalling both rewarding and aversive events from the VTA to the NAc has mostly been ascribed to DA neurons, and the role of GABA neurons has been largely neglected until recently. GABA neurons provide local inhibition of DA neurons and also long-range inhibition of projection regions, including the NAc. Here we review studies using a combination of in vivo and ex vivo electrophysiology, pharmacogenetic and optogenetic manipulations that have characterized the functional neuroanatomy of inhibitory circuits in the mesolimbic system, and describe how GABA neurons of the VTA regulate reward- and aversion-related learning. We also discuss pharmacogenetic manipulation of this system with benzodiazepines (BDZs), a class of addictive drugs which act directly on GABA-A receptors located on GABA neurons of the VTA. The results gathered with each of these approaches suggest that VTA GABA neurons bi-directionally modulate the activity of local DA neurons, underlying reward or aversion at the behavioural level. Conversely, long-range GABA projections from the VTA to the NAc selectively target cholinergic interneurons (CINs) to pause their firing and temporarily reduce cholinergic tone in the NAc, which modulates associative learning. Further characterization of inhibitory circuit function within and beyond the VTA is needed in order to fully understand the function of the mesolimbic system under normal and pathological conditions.

  14. Neuron recycling for learning the alphabetic principles.

    Science.gov (United States)

    Scliar-Cabral, Leonor

    2014-01-01

    The main purpose of this paper is to discuss an approach to the phonic method of learning-teaching early literacy development, namely that the visual neurons must be recycled to recognize the small differences among pertinent letter features. In addition to the challenge of segmenting the speech chain and the syllable for learning the alphabetic principles, neuroscience has demonstrated another major challenge: neurons in mammals are programmed to process visual signals symmetrically. In order to develop early literacy, visual neurons must be recycled to overcome this initial programming together with phonological awareness, expanding it with the ability to delimit words, including clitics, as well as assigning stress to words. To achieve this goal, Scliar's Early Literacy Development System was proposed and tested. Sixteen subjects (10 girls and 6 boys) comprised the experimental group (mean age 6.02 years), and 16 subjects (7 girls and 9 boys) formed the control group (mean age 6.10 years). The research instruments were a psychosociolinguistic questionnaire to reveal the subjects' profile and a post-test battery of tests. At the beginning of the experiment, the experimental group was submitted to an intervention program based on Scliar's Early Literacy Development System. One of the tests is discussed in this paper, the grapheme-phoneme test: subjects had to read aloud a pseudoword with 4 graphemes, signaled by the experimenter and designed to assess the subject's ability to convert a grapheme into its correspondent phoneme. The average value for the test group was 25.0 correct answers (SD = 11.4); the control group had an average of 14.3 correct answers (SD = 10.6). The difference was significant. The experimental results validate Scliar's Early Literacy Development System and indicate the need to redesign early literacy development methods. © 2014 S. Karger AG, Basel.

  15. Neuron-glia metabolic coupling and plasticity.

    Science.gov (United States)

    Magistretti, Pierre J

    2006-06-01

    The coupling between synaptic activity and glucose utilization (neurometabolic coupling) is a central physiological principle of brain function that has provided the basis for 2-deoxyglucose-based functional imaging with positron emission tomography (PET). Astrocytes play a central role in neurometabolic coupling, and the basic mechanism involves glutamate-stimulated aerobic glycolysis; the sodium-coupled reuptake of glutamate by astrocytes and the ensuing activation of the Na-K-ATPase triggers glucose uptake and processing via glycolysis, resulting in the release of lactate from astrocytes. Lactate can then contribute to the activity-dependent fuelling of the neuronal energy demands associated with synaptic transmission. An operational model, the 'astrocyte-neuron lactate shuttle', is supported experimentally by a large body of evidence, which provides a molecular and cellular basis for interpreting data obtained from functional brain imaging studies. In addition, this neuron-glia metabolic coupling undergoes plastic adaptations in parallel with adaptive mechanisms that characterize synaptic plasticity. Thus, distinct subregions of the hippocampus are metabolically active at different time points during spatial learning tasks, suggesting that a type of metabolic plasticity, involving by definition neuron-glia coupling, occurs during learning. In addition, marked variations in the expression of genes involved in glial glycogen metabolism are observed during the sleep-wake cycle, with in particular a marked induction of expression of the gene encoding for protein targeting to glycogen (PTG) following sleep deprivation. These data suggest that glial metabolic plasticity is likely to be concomitant with synaptic plasticity.

  16. Real-time computing platform for spiking neurons (RT-spike).

    Science.gov (United States)

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
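
    A minimal time-driven software counterpart of such a model can be sketched as follows (illustrative only; the hardware design is not reproduced, and all parameters and names are assumptions): each input spike injects charge gradually through an exponentially decaying synaptic current into a leaky membrane, which fires and resets on crossing threshold.

```python
import math

def simulate_srm(spike_inputs, dt=0.1, tau_syn=5.0, tau_m=10.0,
                 threshold=1.0, steps=1000):
    """Time-driven simulation of a simple spike-response-style neuron.
    `spike_inputs` maps time step -> summed synaptic weight arriving then."""
    i_syn, v = 0.0, 0.0
    out_spikes = []
    for t in range(steps):
        i_syn += spike_inputs.get(t, 0.0)   # incoming weighted spikes
        i_syn *= math.exp(-dt / tau_syn)    # synaptic current decays
        v += dt * (-v / tau_m + i_syn)      # leaky integration: gradual charge
        if v >= threshold:
            out_spikes.append(t * dt)       # record output spike time (ms)
            v = 0.0                         # reset membrane after firing
    return out_spikes
```

    Because every neuron's state advances by the same fixed time step, many such update loops can run in parallel, which is the property the hardware design above exploits.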

  17. Autism and the mirror neuron system: Insights from learning and teaching

    OpenAIRE

    Vivanti, G; Rogers, SJ

    2014-01-01

    Individuals with autism have difficulties in social learning domains which typically involve mirror neuron system (MNS) activation. However, the precise role of the MNS in the development of autism and its relevance to treatment remain unclear. In this paper, we argue that three distinct aspects of social learning are critical for advancing knowledge in this area: (i) the mechanisms that allow for the implicit mapping of and learning from others' behaviour, (ii) the motivation to attend to an...

  18. Parallelization of the ROOT Machine Learning Methods

    CERN Document Server

    Vakilipourtakalou, Pourya

    2016-01-01

    Today computation is an inseparable part of scientific research, especially in Particle Physics, where classification problems arise such as the discrimination of Signals from Backgrounds originating from the collisions of particles. On the other hand, Monte Carlo simulations can be used in order to generate a known data set of Signals and Backgrounds based on theoretical physics. The aim of Machine Learning is to train some algorithms on a known data set and then apply these trained algorithms to unknown data sets. However, the most common framework for data analysis in Particle Physics is ROOT. In order to use Machine Learning methods, a Toolkit for Multivariate Data Analysis (TMVA) has been added to ROOT. The major consideration in this report is the parallelization of some TMVA methods, especially Cross-Validation and BDT.
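
    Cross-validation is a natural target for parallelization because the folds are independent of one another. A minimal sketch of that idea (illustrative only, not TMVA code; the function names and the thread-pool choice are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def kfold_indices(n, k):
    """Split n sample indices into k contiguous (train, test) folds."""
    idx = np.arange(n)
    return [(np.concatenate([idx[:i * n // k], idx[(i + 1) * n // k:]]),
             idx[i * n // k:(i + 1) * n // k])
            for i in range(k)]

def cross_validate(train_and_score, n, k=5, workers=4):
    """Train and score every fold concurrently; folds share no state."""
    folds = kfold_indices(n, k)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(lambda fold: train_and_score(*fold), folds))
    return float(np.mean(scores))
```

    `train_and_score` stands in for whatever fitting routine is being validated; since each call sees only its own fold, the speed-up is bounded mainly by the number of workers.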

  19. Activity strengths of cortical glutamatergic and GABAergic neurons are correlated with transgenerational inheritance of learning ability.

    Science.gov (United States)

    Liu, Yulong; Ge, Rongjing; Zhao, Xin; Guo, Rui; Huang, Li; Zhao, Shidi; Guan, Sudong; Lu, Wei; Cui, Shan; Wang, Shirlene; Wang, Jin-Hui

    2017-12-22

    The capabilities of learning and memory in parents are presumably transmitted to their offspring, with genetic codes and epigenetic regulations thought to be the molecular bases. As neural plasticity occurs during memory formation as a cellular mechanism, we aimed to examine how activity strengths of cortical glutamatergic and GABAergic neurons correlate with the transgenerational inheritance of learning ability. In a mouse model of associative learning, paired whisker and odor stimulations led to odorant-induced whisker motion, whose onset appeared fast (high learning efficiency, HLE) or slow (low learning efficiency, LLE). HLE male and female mice, HLE female and LLE male mice, as well as HLE male and LLE female mice were cross-mated to produce their first filial generation (F1) of offspring. The onset of odorant-induced whisker motion showed a high-to-low sequence of efficiency across the three groups of F1 mice from HLE male and female mice, HLE female and LLE male mice, and HLE male and LLE female mice. Activities related to glutamatergic neurons in barrel cortices showed a high-to-low sequence of strength across these same three groups of F1 mice, whereas activities related to GABAergic neurons in barrel cortices showed a low-to-high sequence of strength across them. Neuronal activity strength was linearly correlated with learning efficiency among the three groups. Thus, the coordinated activities of glutamatergic and GABAergic neurons may constitute the cellular basis for the transgenerational inheritance of learning ability.

  20. Learning to read ‘properly’ by moving between parallel literacy classes

    OpenAIRE

    Robertson, Leena Helavaara

    2006-01-01

    This paper explores what kinds of advantages and strengths the process of learning to read simultaneously in different languages and scripts might bring about. It is based on a socio-cultural view of learning and literacy and examines early literacy in three parallel literacy classes in Watford, England. It analyses the learning experiences of five bilingual children who are of second or third generation Pakistani background. At the start of the study the children are five years old and they ...

  1. Learning causes reorganization of neuronal firing patterns to represent related experiences within a hippocampal schema.

    Science.gov (United States)

    McKenzie, Sam; Robinson, Nick T M; Herrera, Lauren; Churchill, Jordana C; Eichenbaum, Howard

    2013-06-19

    According to schema theory as proposed by Piaget and Bartlett, learning involves the assimilation of new memories into networks of preexisting knowledge, as well as alteration of the original networks to accommodate the new information. Recent evidence has shown that rats form a schema of goal locations and that the hippocampus plays an essential role in adding new memories to the spatial schema. Here we examined the nature of hippocampal contributions to schema updating by monitoring firing patterns of multiple CA1 neurons as rats learned new goal locations in an environment in which there already were multiple goals. Before new learning, many neurons that fired on arrival at one goal location also fired at other goals, whereas ensemble activity patterns also distinguished different goal events, thus constituting a neural representation that linked distinct goals within a spatial schema. During new learning, some neurons began to fire as animals approached the new goals. These were primarily the same neurons that fired at original goals, the activity patterns at new goals were similar to those associated with the original goals, and new learning also produced changes in the preexisting goal-related firing patterns. After learning, activity patterns associated with the new and original goals gradually diverged, such that initial generalization was followed by a prolonged period in which new memories became distinguished within the ensemble representation. These findings support the view that consolidation involves assimilation of new memories into preexisting neural networks that accommodate relationships among new and existing memories.

  2. DL-ReSuMe: A Delay Learning-Based Remote Supervised Method for Spiking Neurons.

    Science.gov (United States)

    Taherkhani, Aboozar; Belatreche, Ammar; Li, Yuhua; Maguire, Liam P

    2015-12-01

    Recent research has shown the potential of spiking neural networks (SNNs) to model complex information processing in the brain. There is biological evidence that the precise timing of spikes is used for information coding. However, the exact learning mechanism by which a neuron is trained to fire at precise times remains an open problem. The majority of existing learning methods for SNNs are based on weight adjustment. However, there is also biological evidence that synaptic delays are not constant. In this paper, a learning method for spiking neurons, called the delay learning remote supervised method (DL-ReSuMe), is proposed, which merges a delay-shift approach with ReSuMe-based weight adjustment to enhance learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results show that the proposed DL-ReSuMe approach improves learning accuracy and learning speed compared with ReSuMe.
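
    The abstract does not give the update equations, but its two ingredients (a ReSuMe-style weight rule driven by desired versus actual output spikes, and a delay shift that moves the delayed input spike toward the desired firing time) can be sketched roughly as follows. All parameter values and function names are illustrative assumptions, not taken from the paper:

```python
import math

def resume_weight_update(w, t_in, t_desired, t_actual, a=0.01, A=0.5, tau=5.0):
    """One ReSuMe-style weight update for a single synapse (illustrative).
    Potentiation is driven by the desired spike, depression by the actual one."""
    def window(dt):
        # Exponential learning window over the input-to-output spike interval.
        return math.exp(-dt / tau) if dt >= 0 else 0.0
    dw = 0.0
    if t_desired is not None:
        dw += a + A * window(t_desired - t_in)
    if t_actual is not None:
        dw -= a + A * window(t_actual - t_in)
    return w + dw

def delay_shift(delay, t_in, t_desired, eta=0.2, max_delay=10.0):
    """Hypothetical delay-learning step: nudge the synaptic delay so the
    delayed input spike lands nearer the desired output spike time."""
    target = t_desired - t_in          # delay that would align the spikes
    delay += eta * (target - delay)    # move a fraction of the way there
    return min(max(delay, 0.0), max_delay)
```

A synapse whose input spike precedes the desired output closely (but precedes the actual, late output by more) is strengthened, while the delay drifts toward the desired alignment.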

  3. Dictionary Learning Based on Nonnegative Matrix Factorization Using Parallel Coordinate Descent

    Directory of Open Access Journals (Sweden)

    Zunyi Tang

    2013-01-01

    Sparse representation of signals via an overcomplete dictionary has recently received much attention, as it has produced promising results in various applications. Since some applications, for example multispectral data analysis, require nonnegativity of both the signals and the dictionary, conventional dictionary learning methods with a simple nonnegativity constraint may become inapplicable. In this paper, we propose a novel method for learning a nonnegative, overcomplete dictionary for such a case. This is accomplished by posing the sparse representation of nonnegative signals as a problem of nonnegative matrix factorization (NMF) with a sparsity constraint. By employing the coordinate descent strategy for optimization and extending it to the multivariable case for parallel processing, we develop a so-called parallel coordinate descent dictionary learning (PCDDL) algorithm, which iteratively alternates between two optimization problems: learning the dictionary and estimating the coefficients that construct the signals. Numerical experiments demonstrate that the proposed algorithm performs better than the conventional nonnegative K-SVD (NN-KSVD) algorithm and several other algorithms used for comparison. Moreover, its computational cost is remarkably lower than that of the compared algorithms.
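
    PCDDL itself is not reproduced here, but the underlying idea, alternating nonnegative coordinate-descent updates of the dictionary and of the sparse coefficients, can be sketched with a column-wise (HALS-style) update. The penalty weight `lam` and all other details below are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def sparse_nmf_cd(X, n_atoms, lam=0.01, n_iter=100, seed=0):
    """Nonnegative dictionary learning X ~= D @ S with an L1 penalty on the
    coefficients S, by alternating column-wise coordinate descent."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    D = rng.random((m, n_atoms))
    S = rng.random((n_atoms, n))
    eps = 1e-10
    for _ in range(n_iter):
        # Update each dictionary atom (column of D) in turn.
        XS, SS = X @ S.T, S @ S.T
        for k in range(n_atoms):
            D[:, k] = np.maximum(0.0, D[:, k] + (XS[:, k] - D @ SS[:, k]) / (SS[k, k] + eps))
            D[:, k] /= max(np.linalg.norm(D[:, k]), eps)  # keep atoms unit-norm
        # Update each coefficient row of S, shrunk by the sparsity weight.
        DX, DD = D.T @ X, D.T @ D
        for k in range(n_atoms):
            S[k, :] = np.maximum(0.0, S[k, :] + (DX[k, :] - DD[k, :] @ S - lam) / (DD[k, k] + eps))
    return D, S
```

Each inner step solves a one-block nonnegative least-squares problem exactly, which is what makes the per-coordinate updates cheap and, as the abstract notes, easy to run in parallel across atoms.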

  4. SPAN: spike pattern association neuron for learning spatio-temporal sequences

    OpenAIRE

    Mohemmed, A; Schliebs, S; Matsuda, S; Kasabov, N

    2012-01-01

    Spiking Neural Networks (SNN) were shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN — a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion allowing the processing of spatio-temporal information encoded in the prec...

  5. Contextual Learning Requires Functional Diversity at Excitatory and Inhibitory Synapses onto CA1 Pyramidal Neurons

    Directory of Open Access Journals (Sweden)

    Dai Mitsushima

    2015-01-01

    Although the hippocampus processes temporal and spatial information in specific contexts, the encoding rule that creates memory is completely unknown. To examine the mechanism, we trained rats on an inhibitory avoidance (IA) task, a hippocampus-dependent, rapid one-trial contextual learning paradigm. By combining herpes-virus-mediated in vivo gene delivery with in vitro patch-clamp recordings, I reported that contextual learning drives GluR1-containing AMPA receptors into CA3-CA1 synapses. This molecular event is required for contextual memory, since bilateral expression of a delivery blocker in CA1 successfully blocked IA learning. Moreover, I found a logarithmic correlation between the number of delivery-blocking cells and learning performance. Considering that one all-or-none device can process 1 bit of data per clock (Norbert Wiener, 1961), the logarithmic correlation may provide evidence that CA1 neurons transmit essential data of contextual information. Further, I recently reported a critical role of acetylcholine as an intrinsic trigger of learning-dependent synaptic plasticity: IA training induced ACh release in CA1 that strengthened not only AMPA receptor-mediated excitatory synapses but also GABAA receptor-mediated inhibitory synapses on each CA1 neuron. More importantly, IA-trained rats showed individually different excitatory and inhibitory synaptic inputs, with wide variation across CA1 neurons. Here I propose a new hypothesis: the diversity of synaptic inputs on CA1 neurons may depict cell-specific outputs processing experienced episodes after training.

  6. Learning to see the difference specifically alters the most informative V4 neurons.

    Science.gov (United States)

    Raiguel, Steven; Vogels, Rufin; Mysore, Santosh G; Orban, Guy A

    2006-06-14

    Perceptual learning is an instance of adult plasticity whereby training in a sensory task (e.g., a visual task) results in neuronal changes leading to an improved ability to perform the task. Yet studies in primary visual cortex have found that changes in neuronal response properties were relatively modest. The present study examines the effects of training in an orientation discrimination task on the response properties of V4 neurons in awake rhesus monkeys. Results indicate that the changes induced in V4 are indeed larger than those in V1. Nonspecific effects of training included a decrease in response variance and an increase in overall orientation selectivity in V4. The orientation-specific changes involved a local steepening of the orientation tuning curve around the trained orientation that selectively improved orientation discriminability at the trained orientation. Moreover, these changes were largely confined to the population of neurons whose orientation tuning was optimal for signaling small differences in orientation at the trained orientation. Finally, the modifications were restricted to the part of the tuning curve close to the trained orientation. Thus, we conclude that it is the most informative V4 neurons, those most directly involved in the discrimination, that are specifically modified by perceptual learning.

  7. Deciphering mirror neurons: rational decision versus associative learning.

    Science.gov (United States)

    Khalil, Elias L

    2014-04-01

    The rational-decision approach is superior to the associative-learning approach of Cook et al. at explaining why mirror neurons fire or do not fire - even when the stimulus is the same. The rational-decision approach is superior because it starts with the analysis of the intention of the organism, that is, with the identification of the specific objective or goal that the organism is trying to maximize.

  8. Neuronal mechanisms of motor learning and motor memory consolidation in healthy old adults

    NARCIS (Netherlands)

    Berghuis, K. M. M.; Veldman, M. P.; Solnik, S.; Koch, G.; Zijdewind, I.; Hortobagyi, T.

    It is controversial whether or not old adults are capable of learning new motor skills and of consolidating the performance gains into motor memory during the offline period. The underlying neuronal mechanisms are equally unclear. We determined the magnitude of motor learning and motor memory consolidation ...

  9. The Mirror Neuron System and Observational Learning: Implications for the Effectiveness of Dynamic Visualizations

    Science.gov (United States)

    van Gog, Tamara; Paas, Fred; Marcus, Nadine; Ayres, Paul; Sweller, John

    2009-01-01

    Learning by observing and imitating others has long been recognized as constituting a powerful learning strategy for humans. Recent findings from neuroscience research, more specifically on the mirror neuron system, begin to provide insight into the neural bases of learning by observation and imitation. These findings are discussed here, along…

  10. Target-Dependent Structural Changes Accompanying Long-Term Synaptic Facilitation in Aplysia Neurons

    Science.gov (United States)

    Glanzman, David L.; Kandel, Eric R.; Schacher, Samuel

    1990-08-01

    The mechanisms underlying structural changes that accompany learning and memory have been difficult to investigate in the intact nervous system. In order to make these changes more accessible for experimental analysis, dissociated cell culture and low-light-level video microscopy were used to examine Aplysia sensory neurons in the presence or absence of their target cells. Repeated applications of serotonin, a facilitating transmitter important in behavioral dishabituation and sensitization, produced growth of the sensory neurons that paralleled the long-term enhancement of synaptic strength. This growth required the presence of the postsynaptic motor neuron. Thus, both the structural changes and the synaptic facilitation of Aplysia sensorimotor synapses accompanying long-term behavioral sensitization can be produced in vitro by applying a single facilitating transmitter repeatedly. These structural changes depend on an interaction of the presynaptic neuron with an appropriate postsynaptic target.

  11. Recording single neurons' action potentials from freely moving pigeons across three stages of learning.

    Science.gov (United States)

    Starosta, Sarah; Stüttgen, Maik C; Güntürkün, Onur

    2014-06-02

    While the subject of learning has attracted immense interest from both behavioral and neural scientists, only relatively few investigators have observed single-neuron activity while animals are acquiring an operantly conditioned response, or when that response is extinguished. But even in these cases, observation periods usually encompass only a single stage of learning, i.e. acquisition or extinction, but not both (exceptions include protocols employing reversal learning; see Bingman et al.(1) for an example). However, acquisition and extinction entail different learning mechanisms and are therefore expected to be accompanied by different types and/or loci of neural plasticity. Accordingly, we developed a behavioral paradigm which institutes three stages of learning in a single behavioral session and which is well suited for the simultaneous recording of single neurons' action potentials. Animals are trained on a single-interval forced choice task which requires mapping each of two possible choice responses to the presentation of different novel visual stimuli (acquisition). After having reached a predefined performance criterion, one of the two choice responses is no longer reinforced (extinction). Following a certain decrement in performance level, correct responses are reinforced again (reacquisition). By using a new set of stimuli in every session, animals can undergo the acquisition-extinction-reacquisition process repeatedly. Because all three stages of learning occur in a single behavioral session, the paradigm is ideal for the simultaneous observation of the spiking output of multiple single neurons. We use pigeons as model systems, but the task can easily be adapted to any other species capable of conditioned discrimination learning.

  12. Adaptive gain modulation in V1 explains contextual modifications during bisection learning.

    Directory of Open Access Journals (Sweden)

    Roland Schäfer

    2009-12-01

    The neuronal processing of visual stimuli in primary visual cortex (V1) can be modified by perceptual training. Training in bisection discrimination, for instance, changes the contextual interactions in V1 elicited by parallel lines. Before training, two parallel lines inhibit their individual V1 responses. After bisection training, inhibition turns into non-symmetric excitation while performing the bisection task. Yet the receptive field of the V1 neurons evaluated with a single line does not change during task performance. We present a model of recurrent processing in V1 in which the neuronal gain can be modulated by a global attentional signal. Perceptual learning mainly consists of strengthening this attentional signal, leading to a more effective gain modulation. The model reproduces both the psychophysical results on bisection learning and the modified contextual interactions observed in V1 during task performance. It makes several predictions, for instance that imagery training should improve performance, or that a slight stimulus wiggling can strongly affect the representation in V1 while performing the task. We conclude that strengthening a top-down induced gain increase can explain perceptual learning, and that this top-down signal can modify lateral interactions within V1 without significantly changing the classical receptive field of V1 neurons.
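
    The paper's model is more detailed, but the core mechanism, a global attentional signal multiplying the gain of recurrently interacting V1 units, can be sketched as follows. The weights, the rectified-linear transfer function and the settling dynamics are chosen purely for illustration:

```python
import numpy as np

def v1_response(stimulus, w_ff, w_lat, gain=1.0, n_iter=200):
    """Toy recurrent V1 circuit: rates settle under feedforward drive plus
    lateral (contextual) input, with a global attentional signal `gain`
    scaling the rectified-linear transfer function."""
    r = np.zeros(len(stimulus), dtype=float)
    for _ in range(n_iter):
        drive = w_ff @ stimulus + w_lat @ r
        r = 0.9 * r + 0.1 * gain * np.maximum(0.0, drive)  # leaky settling
    return r

# Two "line" units with mutual lateral inhibition; strengthening the
# attentional gain changes how the contextual inhibition shapes the
# settled response, without touching the feedforward receptive field.
w_ff = np.eye(2)
w_lat = np.array([[0.0, -0.2], [-0.2, 0.0]])
weak = v1_response(np.ones(2), w_ff, w_lat, gain=1.0)
strong = v1_response(np.ones(2), w_ff, w_lat, gain=2.0)
```

Because the gain multiplies the whole recurrent loop, it rescales the effective lateral interactions at the fixed point while leaving `w_ff` (the classical receptive field) unchanged, which is the qualitative point of the abstract.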

  13. Adaptive Load Balancing of Parallel Applications with Multi-Agent Reinforcement Learning on Heterogeneous Systems

    Directory of Open Access Journals (Sweden)

    Johan Parent

    2004-01-01

    We report on the improvements that can be achieved by applying machine learning techniques, in particular reinforcement learning, to the dynamic load balancing of parallel applications. The applications considered in this paper are coarse-grain, data-intensive applications. Such applications put high pressure on the interconnect of the hardware. Synchronization and load balancing in complex, heterogeneous networks need fast, flexible, adaptive load-balancing algorithms. Viewing a parallel application as a one-state coordination game in the framework of multi-agent reinforcement learning, and using a recently introduced multi-agent exploration technique, we are able to improve upon the classic job-farming approach. The improvements are achieved with limited computation and communication overhead.
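
    The one-state coordination game view can be sketched with plain independent Q-learners; this is a generic illustration under assumed rewards (negative host load), not the paper's algorithm, and the exploration technique it cites is not reproduced here:

```python
import random

def learn_balancing(n_agents=8, n_hosts=4, episodes=2000, eps=0.1, alpha=0.1, seed=0):
    """Independent Q-learners in a one-state coordination game: each agent
    (a job source) repeatedly picks a host and receives the negative load
    of that host as reward, so agents learn to spread the jobs out."""
    rng = random.Random(seed)
    # Small random initial values break ties between hosts.
    Q = [[rng.uniform(-1e-3, 0.0) for _ in range(n_hosts)] for _ in range(n_agents)]
    for _ in range(episodes):
        # Epsilon-greedy action selection, independently per agent.
        acts = [rng.randrange(n_hosts) if rng.random() < eps
                else max(range(n_hosts), key=lambda h: Q[a][h])
                for a in range(n_agents)]
        load = [acts.count(h) for h in range(n_hosts)]
        for a, h in enumerate(acts):
            reward = -load[h]                       # crowded hosts pay less
            Q[a][h] += alpha * (reward - Q[a][h])   # one-state Q-update
    return Q
```

Since there is only one state, each Q-value simply tracks the expected (negative) load an agent sees on a host; piling onto one host drives its value down, pushing the greedy policies apart.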

  14. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.

    Science.gov (United States)

    Burbank, Kendra S

    2015-12-01

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
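
    As a rough illustration of the idea (not the paper's actual equations), a pair-based STDP rule and its mirrored counterpart for the feedback synapse can be written as follows; the parameter values are illustrative:

```python
import math

def stdp(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP with dt = t_post - t_pre (ms): causal order
    (pre before post, dt > 0) potentiates, anti-causal order depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def mirrored_updates(t_pre, t_post):
    """Mirrored STDP: for the feedback synapse (post -> pre) the pre/post
    roles swap (its interval is -dt), and the temporally opposed rule
    reverses the time axis again, so for any spike pair the feedforward
    and feedback weights receive the same change and stay symmetric."""
    dt = t_post - t_pre
    dw_ff = stdp(dt)   # feedforward synapse: standard STDP
    dw_fb = stdp(dt)   # feedback synapse: time-reversed rule on -dt
    return dw_ff, dw_fb
```

This symmetry of feedforward and feedback weight changes is what lets the paired rules behave like the tied encoder/decoder weights of an autoencoder.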

  15. Drosophila Learn Opposing Components of a Compound Food Stimulus

    Science.gov (United States)

    Das, Gaurav; Klappenbach, Martín; Vrontou, Eleftheria; Perisse, Emmanuel; Clark, Christopher M.; Burke, Christopher J.; Waddell, Scott

    2014-01-01

    Summary Dopaminergic neurons provide value signals in mammals and insects [1–3]. During Drosophila olfactory learning, distinct subsets of dopaminergic neurons appear to assign either positive or negative value to odor representations in mushroom body neurons [4–9]. However, it is not known how flies evaluate substances that have mixed valence. Here we show that flies form short-lived aversive olfactory memories when trained with odors and sugars that are contaminated with the common insect repellent DEET. This DEET-aversive learning required the MB-MP1 dopaminergic neurons that are also required for shock learning [7]. Moreover, differential conditioning with DEET versus shock suggests that formation of these distinct aversive olfactory memories relies on a common negatively reinforcing dopaminergic mechanism. Surprisingly, as time passed after training, the behavior of DEET-sugar-trained flies reversed from conditioned odor avoidance into odor approach. In addition, flies that were compromised for reward learning exhibited a more robust and longer-lived aversive-DEET memory. These data demonstrate that flies independently process the DEET and sugar components to form parallel aversive and appetitive olfactory memories, with distinct kinetics, that compete to guide learned behavior. PMID:25042590

  16. A Cross-Correlated Delay Shift Supervised Learning Method for Spiking Neurons with Application to Interictal Spike Detection in Epilepsy.

    Science.gov (United States)

    Guo, Lilin; Wang, Zhenzhong; Cabrerizo, Mercedes; Adjouadi, Malek

    2017-05-01

    This study introduces a novel learning algorithm for spiking neurons, called CCDS, which is able to learn and reproduce arbitrary spike patterns in a supervised fashion, allowing the processing of spatiotemporal information encoded in the precise timing of spikes. Unlike the Remote Supervised Method (ReSuMe), synapse delays and axonal delays in CCDS are variable and are modulated together with weights during learning. The CCDS rule is both biologically plausible and computationally efficient. The properties of this learning rule are investigated extensively through experimental evaluations in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, the effects of its learning parameters, and classification performance. Results presented show that the CCDS learning method achieves learning accuracy and learning speed comparable with ReSuMe, but improves classification accuracy when compared to both the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. The merit of the CCDS rule is further validated on a practical example involving the automated detection of interictal spikes in EEG records of patients with epilepsy. Results again show that, with proper encoding, the CCDS rule achieves good recognition performance.

  17. Developmental time windows for axon growth influence neuronal network topology.

    Science.gov (United States)

    Lim, Sol; Kaiser, Marcus

    2015-04-01

    Early brain connectivity development consists of multiple stages: birth of neurons, their migration and the subsequent growth of axons and dendrites. Each stage occurs within a certain period of time depending on types of neurons and cortical layers. Forming synapses between neurons either by growing axons starting at similar times for all neurons (much-overlapped time windows) or at different time points (less-overlapped) may affect the topological and spatial properties of neuronal networks. Here, we explore the extreme cases of axon formation during early development, either starting at the same time for all neurons (parallel, i.e., maximally overlapped time windows) or occurring for each neuron separately one neuron after another (serial, i.e., no overlaps in time windows). For both cases, the number of potential and established synapses remained comparable. Topological and spatial properties, however, differed: Neurons that started axon growth early on in serial growth achieved higher out-degrees, higher local efficiency and longer axon lengths while neurons demonstrated more homogeneous connectivity patterns for parallel growth. Second, connection probability decreased more rapidly with distance between neurons for parallel growth than for serial growth. Third, bidirectional connections were more numerous for parallel growth. Finally, we tested our predictions with C. elegans data. Together, this indicates that time windows for axon growth influence the topological and spatial properties of neuronal networks opening up the possibility to a posteriori estimate developmental mechanisms based on network properties of a developed network.

  18. A Model to Explain the Emergence of Reward Expectancy neurons using Reinforcement Learning and Neural Network

    OpenAIRE

    Shinya, Ishii; Munetaka, Shidara; Katsunari, Shibata

    2006-01-01

    In an experiment with a multi-trial task to obtain a reward, reward expectancy neurons, which responded only in the non-reward trials that are necessary to advance toward the reward, have been observed in the anterior cingulate cortex of monkeys. In this paper, to explain the emergence of the reward expectancy neuron in terms of reinforcement learning theory, a model that consists of a recurrent neural network trained based on reinforcement learning is proposed. The analysis of the hi...

  19. A real-time hybrid neuron network for highly parallel cognitive systems.

    Science.gov (United States)

    Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene

    2016-08-01

    For a comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under `real-time' constraints. In this paper, we propose an easily customisable, highly pipelined neuron network design, which executes optimally scheduled floating-point operations for the maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adverse effect on the calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real time, yielding a speed-up of 12.4x compared to the state of the art.

  20. Bidirectional coupling between astrocytes and neurons mediates learning and dynamic coordination in the brain: a multiple modeling approach.

    Directory of Open Access Journals (Sweden)

    John J Wade

    In recent years research suggests that astrocyte networks, in addition to nutrient and waste processing functions, regulate both structural and synaptic plasticity. To understand the biological mechanisms that underpin such plasticity requires the development of cell-level models that capture the mutual interaction between astrocytes and neurons. This paper presents a detailed model of bidirectional signaling between astrocytes and neurons (the astrocyte-neuron, or AN, model), which yields new insights into the computational role of astrocyte-neuronal coupling. From a set of modeling studies we demonstrate two significant findings. Firstly, that spatial signaling via astrocytes can relay a "learning signal" to remote synaptic sites. Results show that slow inward currents cause synchronized postsynaptic activity in remote neurons and subsequently allow Spike-Timing-Dependent Plasticity based learning to occur at the associated synapses. Secondly, that bidirectional communication between neurons and astrocytes underpins dynamic coordination between neuron clusters. Although our composite AN model is presently applied to simplified neural structures and limited to coordination between localized neurons, the principle (which embodies structural, functional and dynamic complexity) and the modeling strategy may be extended to coordination among remote neuron clusters.

  1. Roles of dopamine neurons in mediating the prediction error in aversive learning in insects.

    Science.gov (United States)

    Terao, Kanta; Mizunami, Makoto

    2017-10-31

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. The prediction error theory has been proposed to account for the finding of a blocking phenomenon, in which pairing of a stimulus X with an unconditioned stimulus (US) could block subsequent association of a second stimulus Y to the US when the two stimuli were paired in compound with the same US. Evidence for this theory, however, has been imperfect since blocking can also be accounted for by competitive theories. We recently reported blocking in classical conditioning of an odor with water reward in crickets. We also reported an "auto-blocking" phenomenon in appetitive learning, which supported the prediction error theory and rejected alternative theories. The presence of auto-blocking also suggested that octopamine neurons mediate reward prediction error signals. Here we show that blocking and auto-blocking occur in aversive learning to associate an odor with salt water (US) in crickets, and our results suggest that dopamine neurons mediate aversive prediction error signals. We conclude that the prediction error theory is applicable to both appetitive learning and aversive learning in insects.
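
    The blocking logic that the prediction error theory explains can be made concrete with the classic Rescorla-Wagner rule (a standard textbook model, not code from this study): once stimulus X alone fully predicts the US, the XY compound generates almost no prediction error, so Y acquires little associative strength.

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Rescorla-Wagner rule: every stimulus present on a trial is updated
    in proportion to the shared prediction error (lam - summed prediction)."""
    V = {}
    for stimuli in trials:
        error = lam - sum(V.get(s, 0.0) for s in stimuli)
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error
    return V

# Blocking: pretraining X leaves no error to drive learning about Y ...
blocked = rescorla_wagner([("X",)] * 20 + [("X", "Y")] * 20)
# ... whereas training the XY compound from scratch lets Y gain strength.
control = rescorla_wagner([("X", "Y")] * 20)
```

In the blocked condition Y's strength stays near zero, while in the control condition X and Y share the error and each converge toward half the asymptote, which is the pattern the crickets' dopamine-mediated error signal is argued to implement.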

  2. Newborn neurons in the olfactory bulb selected for long-term survival through olfactory learning are prematurely suppressed when the olfactory memory is erased.

    Science.gov (United States)

    Sultan, Sébastien; Rey, Nolwen; Sacquet, Joelle; Mandairon, Nathalie; Didier, Anne

    2011-10-19

    A role for newborn neurons in olfactory memory has been proposed based on learning-dependent modulation of olfactory bulb neurogenesis in adults. We hypothesized that if newborn neurons support memory, then they should be suppressed by memory erasure. Using an ecological approach in mice, we showed that behaviorally breaking a previously learned odor-reward association prematurely suppressed newborn neurons selected to survive during initial learning. Furthermore, intrabulbar infusions of the caspase pan-inhibitor ZVAD (benzyloxycarbonyl-Val-Ala-Asp) during behavioral odor-reward extinction prevented newborn neuron death and erasure of the odor-reward association. Newborn neurons thus contribute to the bulbar network plasticity underlying long-term memory.

  3. Consolidation of an olfactory memory trace in the olfactory bulb is required for learning-induced survival of adult-born neurons and long-term memory.

    Directory of Open Access Journals (Sweden)

    Florence Kermen

    BACKGROUND: It has recently been proposed that adult-born neurons in the olfactory bulb, whose survival is modulated by learning, support long-term olfactory memory. However, the mechanism used to select which adult-born neurons following learning will participate in the long-term retention of olfactory information is unknown. We addressed this question by investigating the effect of bulbar consolidation of olfactory learning on memory and neurogenesis. METHODOLOGY/PRINCIPAL FINDINGS: Initially, we used a behavioral ecological approach in adult mice to assess the impact of consolidation on neurogenesis. Using learning paradigms in which consolidation time was varied, we showed that a spaced (across days), but not a massed (within day), learning paradigm increased survival of adult-born neurons and allowed long-term retention of the task. Subsequently, we used a pharmacological approach to block consolidation in the olfactory bulb, consisting of intrabulbar infusion of the protein synthesis inhibitor anisomycin, and found impaired learning and no increase in neurogenesis, while basic olfactory processing and the basal rate of adult-born neuron survival remained unaffected. Taken together, these data indicate that survival of adult-born neurons during learning depends on consolidation processes taking place in the olfactory bulb. CONCLUSION/SIGNIFICANCE: We can thus propose a model in which consolidation processes in the olfactory bulb determine both survival of adult-born neurons and long-term olfactory memory. The finding that adult-born neuron survival during olfactory learning is governed by consolidation in the olfactory bulb strongly argues in favor of a role for bulbar adult-born neurons in supporting olfactory memory.

  4. Consolidation of an olfactory memory trace in the olfactory bulb is required for learning-induced survival of adult-born neurons and long-term memory.

    Science.gov (United States)

    Kermen, Florence; Sultan, Sébastien; Sacquet, Joëlle; Mandairon, Nathalie; Didier, Anne

    2010-08-13

    It has recently been proposed that adult-born neurons in the olfactory bulb, whose survival is modulated by learning, support long-term olfactory memory. However, the mechanism used to select which adult-born neurons following learning will participate in the long-term retention of olfactory information is unknown. We addressed this question by investigating the effect of bulbar consolidation of olfactory learning on memory and neurogenesis. Initially, we used a behavioral ecological approach using adult mice to assess the impact of consolidation on neurogenesis. Using learning paradigms in which consolidation time was varied, we showed that a spaced (across days), but not a massed (within day), learning paradigm increased survival of adult-born neurons and allowed long-term retention of the task. Subsequently, we used a pharmacological approach to block consolidation in the olfactory bulb, consisting in intrabulbar infusion of the protein synthesis inhibitor anisomycin, and found impaired learning and no increase in neurogenesis, while basic olfactory processing and the basal rate of adult-born neuron survival remained unaffected. Taken together these data indicate that survival of adult-born neurons during learning depends on consolidation processes taking place in the olfactory bulb. We can thus propose a model in which consolidation processes in the olfactory bulb determine both survival of adult-born neurons and long-term olfactory memory. The finding that adult-born neuron survival during olfactory learning is governed by consolidation in the olfactory bulb strongly argues in favor of a role for bulbar adult-born neurons in supporting olfactory memory.

  5. Associative (not Hebbian) learning and the mirror neuron system.

    Science.gov (United States)

    Cooper, Richard P; Cook, Richard; Dickinson, Anthony; Heyes, Cecilia M

    2013-04-12

    The associative sequence learning (ASL) hypothesis suggests that sensorimotor experience plays an inductive role in the development of the mirror neuron system, and that it can play this crucial role because its effects are mediated by learning that is sensitive to both contingency and contiguity. The Hebbian hypothesis proposes that sensorimotor experience plays a facilitative role, and that its effects are mediated by learning that is sensitive only to contiguity. We tested the associative and Hebbian accounts by computational modelling of automatic imitation data indicating that MNS responsivity is reduced more by contingent and signalled than by non-contingent sensorimotor training (Cook et al. [7]). Supporting the associative account, we found that the reduction in automatic imitation could be reproduced by an existing interactive activation model of imitative compatibility when augmented with Rescorla-Wagner learning, but not with Hebbian or quasi-Hebbian learning. The work argues for an associative, but against a Hebbian, account of the effect of sensorimotor training on automatic imitation. We argue, by extension, that associative learning is potentially sufficient for MNS development. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
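
    The contrast the modelling turns on can be illustrated with the textbook forms of the two rules. The following is a minimal sketch (not the authors' interactive activation model; the schedules and parameters are invented for illustration): a Rescorla-Wagner learner, through cue competition with an always-present context cue, is sensitive to contingency, while a purely Hebbian learner only counts co-occurrences.

    ```python
    # Rescorla-Wagner is contingency-sensitive via cue competition with the
    # context; plain Hebbian learning is sensitive only to contiguity.

    def rescorla_wagner(trials, alpha=0.1, lam=1.0):
        """trials: list of (cs_present, us_present). v[0] is the always-present
        context cue, v[1] is the CS; all present cues share one prediction error."""
        v = [0.0, 0.0]
        for cs, us in trials:
            present = [0, 1] if cs else [0]
            delta = alpha * ((lam if us else 0.0) - sum(v[i] for i in present))
            for i in present:
                v[i] += delta
        return v[1]

    def hebbian(trials, alpha=0.1):
        """Contiguity only: strength grows on every CS-US co-occurrence,
        regardless of what happens on CS-absent trials."""
        v = 0.0
        for cs, us in trials:
            if cs and us:
                v += alpha * (1.0 - v)
        return v

    contingent = [(1, 1), (0, 0)] * 100      # US only ever follows the CS
    noncontingent = [(1, 1), (0, 1)] * 100   # US equally likely without the CS
    ```

    Under the non-contingent schedule the context cue absorbs associative strength and blocks the CS in the Rescorla-Wagner learner, whereas the Hebbian learner ends up identical in both schedules.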

  6. [Effect of electroacupuncture intervention on learning-memory ability and injured hippocampal neurons in depression rats].

    Science.gov (United States)

    Bao, Wu-Ye; Jiao, Shuang; Lu, Jun; Tu, Ya; Song, Ying-Zhou; Wu, Qian; A, Ying-Ge

    2014-04-01

    To observe the effect of electroacupuncture (EA) stimulation of "Baihui" (GV 20)-"Yintang" (EX-HN 3) on changes of learning-memory ability and hippocampal neuron structure in chronic stress-stimulation induced depression rats. Forty-eight SD rats were randomly divided into normal, model, EA and medication (Fluoxetine) groups, with 12 rats in each group. The depression model was established by chronic unpredictable mild stress stimulation (swimming in 4 degrees C water, fasting, water deprivation, reversed day and night, etc). Treatment was applied to "Baihui" (GV 20) and "Yintang" (EX-HN 3) for 20 min, once every day for 21 days. For rats of the medication group, Fluoxetine (3.3 mg/kg) was given by gavage (p.o.), once daily for 21 days. The learning-memory ability was detected by Morris water maze tests. The pathological and ultrastructural changes of the hippocampal tissue and neurons were assessed by H.E. staining under light microscopy and by transmission electron microscopy, respectively. Compared to the normal group, the rats' body weight on day 14 and day 21 after modeling was significantly decreased in the model group, and both EA and Fluoxetine treatment improved learning-memory ability. Light microscopy and transmission electron microscopy showed that modeling induced pathological changes such as reduction in hippocampal cell layers and vague, broken cellular membranes, as well as ultrastructural changes of hippocampal neurons including swelling and reduction of mitochondria and mitochondrial crests; these changes were relieved after EA and Fluoxetine treatment. EA intervention can improve the learning-memory ability and relieve the impairment of hippocampal neurons in depression rats, which may be one of the mechanisms underlying its antidepressant effect.

  7. Two Pairs of Mushroom Body Efferent Neurons Are Required for Appetitive Long-Term Memory Retrieval in Drosophila

    Directory of Open Access Journals (Sweden)

    Pierre-Yves Plaçais

    2013-11-01

    Full Text Available One of the challenges facing memory research is to combine network- and cellular-level descriptions of memory encoding. In this context, Drosophila offers the opportunity to decipher, down to single-cell resolution, memory-relevant circuits in connection with the mushroom bodies (MBs, prominent structures for olfactory learning and memory. Although the MB-afferent circuits involved in appetitive learning were recently described, the circuits underlying appetitive memory retrieval remain unknown. We identified two pairs of cholinergic neurons efferent from the MB α vertical lobes, named MB-V3, that are necessary for the retrieval of appetitive long-term memory (LTM. Furthermore, LTM retrieval was correlated to an enhanced response to the rewarded odor in these neurons. Strikingly, though, silencing the MB-V3 neurons did not affect short-term memory (STM retrieval. This finding supports a scheme of parallel appetitive STM and LTM processing.

  8. Bioinformatics algorithm based on a parallel implementation of a machine learning approach using transducers

    International Nuclear Information System (INIS)

    Roche-Lima, Abiel; Thulasiram, Ruppa K

    2012-01-01

    Finite automata in which each transition is augmented with an output label, in addition to the familiar input label, are known as finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics: weighted finite-state transducers have been proposed for pairwise alignments of DNA and protein sequences, as well as for developing kernels for computational biology, and machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, which is carried out using techniques such as pair-database creation, normalization (with maximum-likelihood normalization) and parameter optimization (with Expectation-Maximization - EM). These techniques are intrinsically computationally costly, even more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm to learn conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were run with the parallel and sequential algorithms on WestGrid (specifically, on the Breezy cluster). The results show that our parallel algorithm is scalable: execution times, compared with the sequential version, are reduced considerably as the data size parameter is increased. A further experiment varied the precision parameter; in this case too we obtain smaller execution times with the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the Breezy cluster was varied; in this last experiment, speedup increases considerably as more threads are used, but converges for 16 or more threads.
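
    The map-reduce structure described above (per-shard count collection followed by maximum-likelihood normalization) can be sketched as follows. This is an illustrative simplification, not the authors' WestGrid implementation: real transducer learning computes expected counts with EM, whereas here plain aligned-symbol counts from a pair database stand in for the E-step.

    ```python
    from collections import Counter
    from concurrent.futures import ThreadPoolExecutor

    def partial_counts(pairs):
        """Map step for one shard: an expected-count surrogate, here simple
        counts of aligned (input, output) symbol pairs from the pair database."""
        c = Counter()
        for x, y in pairs:
            for a, b in zip(x, y):
                c[(a, b)] += 1
        return c

    def learn_conditional(pairs, shards=4):
        """Reduce step: merge per-shard counts, then apply maximum-likelihood
        normalization to obtain conditional probabilities P(output | input)."""
        chunk = max(1, len(pairs) // shards)
        parts = [pairs[i:i + chunk] for i in range(0, len(pairs), chunk)]
        merged = Counter()
        with ThreadPoolExecutor(max_workers=shards) as ex:
            for c in ex.map(partial_counts, parts):
                merged.update(c)
        totals = Counter()
        for (a, _), n in merged.items():
            totals[a] += n
        return {(a, b): n / totals[a] for (a, b), n in merged.items()}

    probs = learn_conditional([("ACGT", "ACGA"), ("ACGT", "ACGT")])
    ```

    The shard workers are independent, which is what makes the count-collection phase embarrassingly parallel; only the final merge and normalization are sequential.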

  9. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models

    Directory of Open Access Journals (Sweden)

    Alexander eHanuschkin

    2013-06-01

    Full Text Available Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, as they allow for imitating arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including necessity of sensorimotor gating and selectivity of auditory responses to bird's own song (BOS) stimuli.

  10. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models.

    Science.gov (United States)

    Hanuschkin, A; Ganguli, S; Hahnloser, R H R

    2013-01-01

    Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including necessity of sensorimotor gating and selectivity of auditory responses to bird's own song (BOS) stimuli.
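
    A minimal simulation of the proposed mechanism, under strongly simplifying assumptions (an identity "world" in which motor unit i deterministically causes sensory feature i, and a random motor code): an eligibility-weighted Hebbian update, with row normalization standing in for heterosynaptic competition, is enough to drive the sensory-to-motor weights toward the causal inverse, i.e. mirrored responses. Everything here (network size, rates, the normalization scheme) is an illustrative choice, not the paper's model.

    ```python
    import numpy as np

    # n motor units; the toy "world" maps motor unit i to sensory feature i.
    rng = np.random.default_rng(0)
    n = 4
    W = np.zeros((n, n))        # sensory -> motor weights: the candidate inverse model
    eta, tau = 0.5, 0.5         # learning rate, eligibility-trace decay (assumed)

    elig = np.zeros(n)
    for _ in range(200):
        m = np.zeros(n)
        m[rng.integers(n)] = 1.0                 # random motor exploration
        elig = tau * elig + m                    # eligibility trace of recent motor activity
        s = m.copy()                             # sensory consequence (identity world)
        W += eta * np.outer(s, elig)             # eligibility-weighted Hebbian update
        W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-9)  # heterosynaptic competition

    # After learning, observing sensory pattern j drives motor unit j hardest:
    mirror_pref = W.argmax(axis=1)
    ```

    With a stereotyped (fixed-sequence) motor code instead of the random one, the same rule would instead correlate each sensory pattern with the *next* action's eligibility, yielding the predictive inverse described in the abstract.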

  11. Trim9 Deletion Alters the Morphogenesis of Developing and Adult-Born Hippocampal Neurons and Impairs Spatial Learning and Memory.

    Science.gov (United States)

    Winkle, Cortney C; Olsen, Reid H J; Kim, Hyojin; Moy, Sheryl S; Song, Juan; Gupton, Stephanie L

    2016-05-04

    During hippocampal development, newly born neurons migrate to appropriate destinations, extend axons, and ramify dendritic arbors to establish functional circuitry. These developmental stages are recapitulated in the dentate gyrus of the adult hippocampus, where neurons are continuously generated and subsequently incorporate into existing, local circuitry. Here we demonstrate that the E3 ubiquitin ligase TRIM9 regulates these developmental stages in embryonic and adult-born mouse hippocampal neurons in vitro and in vivo. Embryonic hippocampal and adult-born dentate granule neurons lacking Trim9 exhibit several morphological defects, including excessive dendritic arborization. Although gross anatomy of the hippocampus was not detectably altered by Trim9 deletion, a significant number of Trim9(-/-) adult-born dentate neurons localized inappropriately. These morphological and localization defects of hippocampal neurons in Trim9(-/-) mice were associated with extreme deficits in spatial learning and memory, suggesting that TRIM9-directed neuronal morphogenesis may be involved in hippocampal-dependent behaviors. Appropriate generation and incorporation of adult-born neurons in the dentate gyrus are critical for spatial learning and memory and other hippocampal functions. Here we identify the brain-enriched E3 ubiquitin ligase TRIM9 as a novel regulator of embryonic and adult hippocampal neuron shape acquisition and hippocampal-dependent behaviors. Genetic deletion of Trim9 elevated dendritic arborization of hippocampal neurons in vitro and in vivo. Adult-born dentate granule cells lacking Trim9 similarly exhibited excessive dendritic arborization and mislocalization of cell bodies in vivo. These cellular defects were associated with severe deficits in spatial learning and memory. Copyright © 2016 the authors 0270-6474/16/364940-19$15.00/0.

  12. Reconciling genetic evolution and the associative learning account of mirror neurons through data-acquisition mechanisms.

    Science.gov (United States)

    Lotem, Arnon; Kolodny, Oren

    2014-04-01

    An associative learning account of mirror neurons should not preclude genetic evolution of its underlying mechanisms. On the contrary, an associative learning framework for cognitive development should seek heritable variation in the learning rules and in the data-acquisition mechanisms that construct associative networks, demonstrating how small genetic modifications of associative elements can give rise to the evolution of complex cognition.

  13. Histone Deacetylase (HDAC) Inhibitors - emerging roles in neuronal memory, learning, synaptic plasticity and neural regeneration.

    Science.gov (United States)

    Ganai, Shabir Ahmad; Ramadoss, Mahalakshmi; Mahadevan, Vijayalakshmi

    2016-01-01

    Epigenetic regulation of neuronal signalling through histone acetylation dictates transcription programs that govern neuronal memory, plasticity and learning paradigms. Histone Acetyl Transferases (HATs) and Histone Deacetylases (HDACs) are antagonistic enzymes that regulate gene expression through acetylation and deacetylation of histone proteins around which DNA is wrapped inside a eukaryotic cell nucleus. The epigenetic control of HDACs and the cellular imbalance between HATs and HDACs dictate disease states and have been implicated in muscular dystrophy, loss of memory, neurodegeneration and autistic disorders. Altering gene expression profiles through inhibition of HDACs is now emerging as a powerful technique in therapy. This review presents evolving applications of HDAC inhibitors as potential drugs in neurological research and therapy. Mechanisms that govern their expression profiles in neuronal signalling, plasticity and learning will be covered. Promising and exciting possibilities of HDAC inhibitors in memory formation, fear conditioning, ischemic stroke and neural regeneration have been detailed.

  14. Spiking Neural Networks with Unsupervised Learning Based on STDP Using Resistive Synaptic Devices and Analog CMOS Neuron Circuit.

    Science.gov (United States)

    Kwon, Min-Woo; Baek, Myung-Hyun; Hwang, Sungmin; Kim, Sungjun; Park, Byung-Gook

    2018-09-01

    We designed a CMOS analog integrate-and-fire (I&F) neuron circuit that can drive resistive synaptic devices. The neuron circuit consists of a current mirror for spatial integration, a capacitor for temporal integration, an asymmetric negative and positive pulse generation part, a refractory part, and finally a back-propagation pulse generation part for learning of the synaptic devices. The resistive synaptic devices were fabricated using an HfOx switching layer deposited by atomic layer deposition (ALD). The resistive synaptic devices had gradual set and reset characteristics, and their conductance was adjusted by the spike-timing-dependent plasticity (STDP) learning rule. We carried out circuit simulations of the synaptic devices and the CMOS neuron circuit, and developed unsupervised spiking neural networks (SNNs) for 5 × 5 pattern recognition and classification using the neuron circuit and synaptic devices. The hardware-based SNNs can autonomously and efficiently control the weight updates of the synapses between neurons, without the aid of software calculations.
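
    The pair-based STDP rule used to adjust device conductance can be sketched as follows. The amplitudes, time constant, and conductance bounds are illustrative assumptions, not the fabricated device's measured values.

    ```python
    import math

    A_PLUS, A_MINUS = 0.05, 0.025   # potentiation / depression amplitudes (assumed)
    TAU = 20.0                      # STDP time constant in ms (assumed)

    def stdp_dw(dt):
        """Pair-based STDP window. dt = t_post - t_pre in ms: pre-before-post
        (dt > 0) potentiates, post-before-pre depresses, and the magnitude
        decays exponentially with the spike-timing gap."""
        if dt > 0:
            return A_PLUS * math.exp(-dt / TAU)
        return -A_MINUS * math.exp(dt / TAU)

    def update_conductance(g, dt, g_min=0.0, g_max=1.0):
        """Clip the weight change to the device's conductance range, mimicking
        the gradual, bounded set/reset behaviour of a resistive synapse."""
        return min(g_max, max(g_min, g + stdp_dw(dt)))
    ```

    In the hardware, this rule is realized by the overlap of the asymmetric pre- and post-synaptic pulses across the device rather than by explicit arithmetic.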

  15. TGF-β Signaling in Dopaminergic Neurons Regulates Dendritic Growth, Excitatory-Inhibitory Synaptic Balance, and Reversal Learning

    Directory of Open Access Journals (Sweden)

    Sarah X. Luo

    2016-12-01

    Full Text Available Neural circuits involving midbrain dopaminergic (DA neurons regulate reward and goal-directed behaviors. Although local GABAergic input is known to modulate DA circuits, the mechanism that controls excitatory/inhibitory synaptic balance in DA neurons remains unclear. Here, we show that DA neurons use autocrine transforming growth factor β (TGF-β signaling to promote the growth of axons and dendrites. Surprisingly, removing TGF-β type II receptor in DA neurons also disrupts the balance in TGF-β1 expression in DA neurons and neighboring GABAergic neurons, which increases inhibitory input, reduces excitatory synaptic input, and alters phasic firing patterns in DA neurons. Mice lacking TGF-β signaling in DA neurons are hyperactive and exhibit inflexibility in relinquishing learned behaviors and re-establishing new stimulus-reward associations. These results support a role for TGF-β in regulating the delicate balance of excitatory/inhibitory synaptic input in local microcircuits involving DA and GABAergic neurons and its potential contributions to neuropsychiatric disorders.

  16. Top-down inputs enhance orientation selectivity in neurons of the primary visual cortex during perceptual learning.

    Directory of Open Access Journals (Sweden)

    Samat Moldakarimov

    2014-08-01

    Full Text Available Perceptual learning has been used to probe the mechanisms of cortical plasticity in the adult brain. Feedback projections are ubiquitous in the cortex, but little is known about their role in cortical plasticity. Here we explore the hypothesis that learning visual orientation discrimination involves learning-dependent plasticity of top-down feedback inputs from higher cortical areas, serving a different function from plasticity due to changes in recurrent connections within a cortical area. In a Hodgkin-Huxley-based spiking neural network model of visual cortex, we show that modulation of feedback inputs to V1 from higher cortical areas results in shunting inhibition in V1 neurons, which changes the response properties of V1 neurons. The orientation selectivity of V1 neurons is enhanced without changing orientation preference, preserving the topographic organizations in V1. These results provide new insights to the mechanisms of plasticity in the adult brain, reconciling apparently inconsistent experiments and providing a new hypothesis for a functional role of the feedback connections.
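
    The core effect, divisive (shunting) inhibition sharpening tuning without shifting preference, can be reproduced in a few lines. The tuning curve, threshold, and conductance values below are illustrative assumptions, not parameters of the Hodgkin-Huxley network in the paper.

    ```python
    import numpy as np

    theta = np.linspace(-np.pi, np.pi, 181)
    drive = np.exp(2.0 * (np.cos(theta) - 1.0))   # feedforward drive, preferred orientation 0

    def response(g_shunt, threshold=0.1):
        # Shunting inhibition scales the drive down divisively; with a fixed
        # firing threshold, only the best-matched orientations still cross it.
        return np.maximum(drive / (1.0 + g_shunt) - threshold, 0.0)

    r_before = response(0.0)    # without top-down modulation
    r_after = response(1.5)     # feedback-driven shunting inhibition engaged

    def half_width(r):
        # Tuning width: number of orientations responding above half the peak.
        return int((r > r.max() / 2).sum())
    ```

    The peak stays at the same orientation (preference and hence topography are preserved) while the half-height width shrinks, which is the signature of enhanced selectivity described in the abstract.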

  17. Learning intrinsic excitability in medium spiny neurons [v2; ref status: indexed, http://f1000r.es/30b

    Directory of Open Access Journals (Sweden)

    Gabriele Scheler

    2014-02-01

    Full Text Available We present an unsupervised, local activation-dependent learning rule for intrinsic plasticity (IP) which affects the composition of ion channel conductances for single neurons in a use-dependent way. We use a single-compartment conductance-based model for medium spiny striatal neurons in order to show the effects of parameterization of individual ion channels on the neuronal membrane potential-current relationship (activation function). We show that parameter changes within the physiological ranges are sufficient to create an ensemble of neurons with significantly different activation functions. We emphasize that the effects of intrinsic neuronal modulation on spiking behavior require a distributed mode of synaptic input and can be eliminated by strongly correlated input. We show how modulation and adaptivity in ion channel conductances can be utilized to store patterns without an additional contribution by synaptic plasticity (SP). The adaptation of the spike response may result in either "positive" or "negative" pattern learning. However, read-out of stored information depends on a distributed pattern of synaptic activity to let intrinsic modulation determine spike response. We briefly discuss the implications of this conditional memory on learning and addiction.

  18. Compromised NMDA/Glutamate Receptor Expression in Dopaminergic Neurons Impairs Instrumental Learning, But Not Pavlovian Goal Tracking or Sign Tracking

    Science.gov (United States)

    James, Alex S; Pennington, Zachary T; Tran, Phu; Jentsch, James David

    2015-01-01

    Two theories regarding the role for dopamine neurons in learning include the concepts that their activity serves as a (1) mechanism that confers incentive salience onto rewards and associated cues and/or (2) contingency teaching signal reflecting reward prediction error. While both theories are provocative, the causal role for dopamine cell activity in either mechanism remains controversial. In this study mice that either fully or partially lacked NMDARs in dopamine neurons exclusively, as well as appropriate controls, were evaluated for reward-related learning; this experimental design allowed for a test of the premise that NMDA/glutamate receptor (NMDAR)-mediated mechanisms in dopamine neurons, including NMDA-dependent regulation of phasic discharge activity of these cells, modulate either the instrumental learning processes or the likelihood of pavlovian cues to become highly motivating incentive stimuli that directly attract behavior. Loss of NMDARs in dopamine neurons did not significantly affect baseline dopamine utilization in the striatum, novelty evoked locomotor behavior, or consumption of a freely available, palatable food solution. On the other hand, animals lacking NMDARs in dopamine cells exhibited a selective reduction in reinforced lever responses that emerged over the course of instrumental learning. Loss of receptor expression did not, however, influence the likelihood of an animal acquiring a pavlovian conditional response associated with attribution of incentive salience to reward-paired cues (sign tracking). These data support the view that reductions in NMDAR signaling in dopamine neurons affect instrumental reward-related learning but do not lend support to hypotheses that suggest that the behavioral significance of this signaling includes incentive salience attribution.

  19. The mirror-neuron system.

    Science.gov (United States)

    Rizzolatti, Giacomo; Craighero, Laila

    2004-01-01

    A category of stimuli of great importance for primates, humans in particular, is that formed by actions done by other individuals. If we want to survive, we must understand the actions of others. Furthermore, without action understanding, social organization is impossible. In the case of humans, there is another faculty that depends on the observation of others' actions: imitation learning. Unlike most species, we are able to learn by imitation, and this faculty is at the basis of human culture. In this review we present data on a neurophysiological mechanism--the mirror-neuron mechanism--that appears to play a fundamental role in both action understanding and imitation. We describe first the functional properties of mirror neurons in monkeys. We review next the characteristics of the mirror-neuron system in humans. We stress, in particular, those properties specific to the human mirror-neuron system that might explain the human capacity to learn by imitation. We conclude by discussing the relationship between the mirror-neuron system and language.

  20. The mirror-neuron system and observational learning: Implications for the effectiveness of dynamic visualizations.

    OpenAIRE

    Van Gog, Tamara; Paas, Fred; Marcus, Nadine; Ayres, Paul; Sweller, John

    2009-01-01

    Van Gog, T., Paas, F., Marcus, N., Ayres, P., & Sweller, J. (2009). The mirror-neuron system and observational learning: Implications for the effectiveness of dynamic visualizations. Educational Psychology Review, 21, 21-30.

  1. Reinforcement learning using a continuous time actor-critic framework with spiking neurons.

    Directory of Open Access Journals (Sweden)

    Nicolas Frémaux

    2013-04-01

    Full Text Available Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
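
    For intuition, the actor-critic scheme can be reduced to its discrete-time tabular skeleton on a toy linear track (the paper's model is continuous in time, state, and action, with spiking neurons; everything below is a simplification for illustration). The key point survives the simplification: the same TD error trains both the critic and the actor, playing the role of the shared neuromodulatory signal.

    ```python
    import math
    import random

    random.seed(1)
    N = 6                      # linear track, states 0..5; reward on reaching state 5
    V = [0.0] * N              # critic: predicted future reward per state
    H = [[0.0, 0.0] for _ in range(N)]   # actor preferences: [left, right]
    alpha, gamma = 0.1, 0.9

    def act(s):
        """Softmax action selection over the actor's two preferences."""
        e = [math.exp(h) for h in H[s]]
        return 0 if random.random() < e[0] / (e[0] + e[1]) else 1

    for _ in range(300):
        s = 0
        while s != N - 1:
            a = act(s)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == N - 1 else 0.0
            delta = r + gamma * V[s2] - V[s]   # TD error: the "neuromodulatory" signal
            V[s] += alpha * delta              # critic learns expected future reward
            H[s][a] += alpha * delta           # the same signal trains the actor
            s = s2
    ```

    After training, the learned values increase toward the rewarded end and the actor prefers moving right in every state, the tabular analogue of the navigation behaviour reported in the paper.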

  2. Artificial neuron-glia networks learning approach based on cooperative coevolution.

    Science.gov (United States)

    Mesejo, Pablo; Ibáñez, Oscar; Fernández-Blanco, Enrique; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana B

    2015-06-01

    Artificial Neuron-Glia Networks (ANGNs) are a novel bio-inspired machine learning approach. They extend classical Artificial Neural Networks (ANNs) by incorporating recent findings and suppositions about the way information is processed by neural and astrocytic networks in the most evolved living organisms. Although ANGNs are not a consolidated method, their performance against the traditional approach, i.e. without artificial astrocytes, was already demonstrated on classification problems. However, the corresponding learning algorithms developed so far strongly depend on a set of glial parameters which are manually tuned for each specific problem. As a consequence, preliminary experimental tests have to be done in order to determine an adequate set of values, making such manual parameter configuration time-consuming, error-prone, biased and problem dependent. Thus, in this paper, we propose a novel learning approach for ANGNs that fully automates the learning process, and gives the possibility of testing any kind of reasonable parameter configuration for each specific problem. This new learning algorithm, based on coevolutionary genetic algorithms, is able to properly learn all the ANGN parameters. Its performance is tested on five classification problems, achieving significantly better results than ANGN and competitive results with ANN approaches.

  3. Barrier Function-Based Neural Adaptive Control With Locally Weighted Learning and Finite Neuron Self-Growing Strategy.

    Science.gov (United States)

    Jia, Zi-Jun; Song, Yong-Duan

    2017-06-01

    This paper presents a new approach to construct neural adaptive control for uncertain nonaffine systems. By integrating locally weighted learning with a barrier Lyapunov function (BLF), a novel control design method is presented to systematically address two critical issues in the neural network (NN) control field: one is how to fulfill the compact set precondition for NN approximation, and the other is how to use a varying rather than a fixed NN structure to improve the functionality of NN control. A BLF is exploited to ensure that the NN inputs remain bounded during the entire system operation. To account for system nonlinearities, a neuron self-growing strategy is proposed to guide the process of adding new neurons to the system, resulting in a self-adjustable NN structure for better learning capabilities. It is shown that the number of neurons needed to accomplish the control task is finite, and better performance can be obtained with fewer neurons as compared with traditional methods. The salient feature of the proposed method also lies in the continuity of the control action everywhere. Furthermore, the resulting control action is smooth almost everywhere except for a few time instants at which new neurons are added. A numerical example illustrates the effectiveness of the proposed approach.
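
    The flavor of a finite self-growing strategy can be conveyed with a toy radial-basis-function approximator that adds a neuron only when the current error is large and no existing neuron is nearby, and otherwise performs a locally weighted (normalized-LMS) update. All thresholds and the growth criterion here are invented for illustration; this is not the paper's BLF-based design.

    ```python
    import math

    class GrowingRBF:
        """Toy locally weighted learner with a finite self-growing strategy:
        a new RBF neuron is allocated only when the prediction error is large
        AND no existing center is nearby; otherwise existing neurons receive a
        normalized-LMS update. All thresholds are illustrative assumptions."""

        def __init__(self, width=0.5, err_tol=0.1, dist_tol=0.3, lr=0.5):
            self.centers, self.weights = [], []
            self.width, self.err_tol, self.dist_tol, self.lr = width, err_tol, dist_tol, lr

        def _phi(self, x):
            return [math.exp(-((x - c) ** 2) / self.width ** 2) for c in self.centers]

        def predict(self, x):
            return sum(w * p for w, p in zip(self.weights, self._phi(x)))

        def update(self, x, y):
            e = y - self.predict(x)
            near = any(abs(x - c) < self.dist_tol for c in self.centers)
            if abs(e) > self.err_tol and not near:
                self.centers.append(x)      # grow: allocate a neuron at the novel input
                self.weights.append(e)
            else:
                phi = self._phi(x)
                norm = sum(p * p for p in phi) + 1e-9
                for i, p in enumerate(phi):
                    self.weights[i] += self.lr * e * p / norm  # local NLMS update

    net = GrowingRBF()
    for _ in range(20):                      # repeated sweeps over sin on [0, 6.2]
        for i in range(63):
            net.update(i / 10, math.sin(i / 10))
    ```

    Growth is finite by construction: new centers must be at least `dist_tol` apart, so only a bounded number of neurons can ever be added over a bounded input region, mirroring the paper's finite-neuron guarantee in spirit.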

  4. Functional architecture of reward learning in mushroom body extrinsic neurons of larval Drosophila.

    Science.gov (United States)

    Saumweber, Timo; Rohwedder, Astrid; Schleyer, Michael; Eichler, Katharina; Chen, Yi-Chun; Aso, Yoshinori; Cardona, Albert; Eschbach, Claire; Kobler, Oliver; Voigt, Anne; Durairaja, Archana; Mancini, Nino; Zlatic, Marta; Truman, James W; Thum, Andreas S; Gerber, Bertram

    2018-03-16

    The brain adaptively integrates present sensory input, past experience, and options for future action. The insect mushroom body exemplifies how a central brain structure brings about such integration. Here we use a combination of systematic single-cell labeling, connectomics, transgenic silencing, and activation experiments to study the mushroom body at single-cell resolution, focusing on the behavioral architecture of its input and output neurons (MBINs and MBONs), and of the mushroom body intrinsic APL neuron. Our results reveal the identity and morphology of almost all of these 44 neurons in stage 3 Drosophila larvae. Upon an initial screen, functional analyses focusing on the mushroom body medial lobe uncover sparse and specific functions of its dopaminergic MBINs, its MBONs, and of the GABAergic APL neuron across three behavioral tasks, namely odor preference, taste preference, and associative learning between odor and taste. Our results thus provide a cellular-resolution study case of how brains organize behavior.

  5. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses.

    Science.gov (United States)

    Qiao, Ning; Mostafa, Hesham; Corradi, Federico; Osswald, Marc; Stefanini, Fabio; Sumislawska, Dora; Indiveri, Giacomo

    2015-01-01

    Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm², and consumes approximately 4 mW for typical experiments, for example those involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.

  6. Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception.

    Science.gov (United States)

    Kutschireiter, Anna; Surace, Simone Carlo; Sprekeler, Henning; Pfister, Jean-Pascal

    2017-08-18

    The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a Bayesian statistical interpretation. However, it is presently unclear how a nonlinear Bayesian filter can be efficiently implemented in a network of neurons that satisfies some minimum constraints of biological plausibility. Here, we propose the Neural Particle Filter (NPF), a sampling-based nonlinear Bayesian filter, which does not rely on importance weights. We show that this filter can be interpreted as the neuronal dynamics of a recurrently connected rate-based neural network receiving feed-forward input from sensory neurons. Further, it captures properties of temporal and multi-sensory integration that are crucial for perception, and it allows for online parameter learning with a maximum likelihood approach. The NPF holds the promise to avoid the 'curse of dimensionality', and we demonstrate numerically its capability to outperform weighted particle filters in higher dimensions and when the number of particles is limited.
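
    The weight-free particle update described above can be sketched compactly. The following is a minimal, hypothetical one-dimensional illustration: each particle follows prior dynamics `f` plus an innovation term that nudges it toward the observation. The specific dynamics `f`, observation model `g`, and the fixed `gain` are illustrative assumptions, not the paper's equations (where, for instance, the gain is itself learned):

    ```python
    import numpy as np

    def npf_step(particles, y, f, g, gain, sigma, dt, rng):
        """One Euler step of a weight-free, NPF-style particle update:
        particles follow the prior dynamics f, are nudged toward the
        observation y through the innovation (y - g(x)), and diffuse."""
        drift = f(particles) + gain * (y - g(particles))
        noise = sigma * np.sqrt(dt) * rng.standard_normal(particles.shape)
        return particles + drift * dt + noise

    # Toy demo: track a hidden state near 2.0 from noisy observations.
    rng = np.random.default_rng(0)
    f = lambda x: -0.5 * x          # mean-reverting prior dynamics (assumed)
    g = lambda x: x                 # identity observation model (assumed)
    particles = rng.standard_normal(500)
    x_true = 2.0
    for _ in range(200):
        y = x_true + 0.1 * rng.standard_normal()
        particles = npf_step(particles, y, f, g, gain=2.0, sigma=0.2, dt=0.01, rng=rng)
    estimate = particles.mean()
    ```

    Because there are no importance weights, every particle is used at full effect, which is the property the authors argue helps in high dimensions.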

  7. Mlifdect: Android Malware Detection Based on Parallel Machine Learning and Information Fusion

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2017-01-01

    Full Text Available In recent years, Android malware has continued to grow at an alarming rate. More recent malicious apps employ highly sophisticated detection-avoidance techniques, which make traditional machine-learning-based malware detection methods far less effective; in particular, such methods cannot cope with the full variety of Android malware when they rely on a single classification algorithm. To address this limitation, we propose a novel approach in this paper that leverages parallel machine learning and information fusion techniques for better Android malware detection, which is named Mlifdect. To implement this approach, we first extract eight types of features from static analysis of Android apps and build two kinds of feature sets after feature selection. Then, a parallel machine learning detection model is developed to speed up the classification process. Finally, we investigate probability-analysis-based and Dempster-Shafer-theory-based information fusion approaches, which can effectively produce the final detection results. To validate our method, we compare it with other state-of-the-art detection approaches on real-world Android apps. The experimental results demonstrate that Mlifdect achieves higher detection accuracy as well as remarkable run-time efficiency compared to existing malware detection solutions.
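
    As a flavor of the fusion step, here is a minimal sketch of Dempster's rule of combination over a two-class frame of discernment. The mass values and the `theta` (full-ignorance) label are invented for illustration and are not taken from Mlifdect:

    ```python
    def dempster_combine(m1, m2):
        """Combine two mass functions over the frame {'malware', 'benign'}
        (plus the full-ignorance set 'theta') using Dempster's rule."""
        theta = 'theta'
        combined = {}
        conflict = 0.0
        for a, pa in m1.items():
            for b, pb in m2.items():
                if a == theta:
                    key = b
                elif b == theta or a == b:
                    key = a
                else:                      # disjoint singletons: conflicting evidence
                    conflict += pa * pb
                    continue
                combined[key] = combined.get(key, 0.0) + pa * pb
        norm = 1.0 - conflict              # renormalize over non-conflicting mass
        return {k: v / norm for k, v in combined.items()}

    # Two classifiers both lean toward 'malware'; fusion sharpens the belief.
    m1 = {'malware': 0.7, 'benign': 0.1, 'theta': 0.2}
    m2 = {'malware': 0.6, 'benign': 0.2, 'theta': 0.2}
    fused = dempster_combine(m1, m2)
    ```

    Combining two moderately confident classifiers yields a belief in 'malware' higher than either classifier alone, which is the intuition behind fusing multiple parallel detectors.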

  8. Associative learning alone is insufficient for the evolution and maintenance of the human mirror neuron system.

    Science.gov (United States)

    Oberman, Lindsay M; Hubbard, Edward M; McCleery, Joseph P

    2014-04-01

    Cook et al. argue that mirror neurons originate from associative learning processes, without evolutionary influence from social-cognitive mechanisms. We disagree with this claim and present arguments based upon cross-species comparisons, EEG findings, and developmental neuroscience that the evolution of mirror neurons is most likely driven simultaneously and interactively by evolutionarily adaptive psychological mechanisms and lower-level biological mechanisms that support them.

  9. Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.

    Directory of Open Access Journals (Sweden)

    George L Chadderdon

    Full Text Available Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.

  10. Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.

    Science.gov (United States)

    Chadderdon, George L; Neymotin, Samuel A; Kerr, Cliff C; Lytton, William W

    2012-01-01

    Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
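
    The credit-assignment scheme described above can be caricatured in a few lines. The sketch below is a drastically simplified, hypothetical scalar version (one weight, one joint, Gaussian noise standing in for Poisson babbling), not the authors' 208-neuron spiking model; it keeps only the key ingredients of an eligibility trace gated by a 3-valued global signal:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def train_reach(target, steps=400, lr=0.2, decay=0.9, babble=0.3):
        """Reward-modulated learning with an eligibility trace: exploratory
        'babbling' perturbs the motor command, a global 3-valued signal
        (+1/0/-1) reports whether the hand moved toward or away from the
        target, and the decaying trace assigns credit to recent exploration."""
        w = 0.0                      # scalar 'synaptic weight' -> motor command
        angle = 0.0
        trace = 0.0
        prev_dist = abs(angle - target)
        for _ in range(steps):
            noise = babble * rng.standard_normal()
            command = w + noise              # babbling perturbs the command
            angle += 0.1 * (command - angle)  # sluggish single-joint plant
            trace = decay * trace + noise     # recent exploration is eligible
            dist = abs(angle - target)
            reward = np.sign(prev_dist - dist)  # +1 closer, -1 farther, 0 same
            w += lr * reward * trace            # global signal gates plasticity
            prev_dist = dist
        return angle

    reach_pos = train_reach(target=1.0)
    reach_neg = train_reach(target=-1.0)
    ```

    The same learner driven toward opposite targets ends up at clearly different angles, showing that the global scalar signal alone suffices to shape the exploratory output.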

  11. Model-based iterative learning control of Parkinsonian state in thalamic relay neuron

    Science.gov (United States)

    Liu, Chen; Wang, Jiang; Li, Huiyan; Xue, Zhiqin; Deng, Bin; Wei, Xile

    2014-09-01

    Although the beneficial effects of chronic deep brain stimulation on Parkinson's disease motor symptoms are now largely confirmed, the mechanisms underlying deep brain stimulation remain unclear and under debate; hence, the selection of stimulation parameters is full of challenges. Additionally, due to the complexity of the neural system, together with omnipresent noise, an accurate model of the thalamic relay neuron is unavailable. Thus, iterative learning control of the thalamic relay neuron's Parkinsonian state, based on various variables, is presented. Combining iterative learning control with a typical proportional-integral control algorithm, a novel and efficient control strategy is proposed that requires no particular knowledge of the detailed physiological characteristics of the cortico-basal ganglia-thalamocortical loop and can automatically adjust the stimulation parameters. Simulation results demonstrate the feasibility of the proposed control strategy to restore the fidelity of thalamic relay in the Parkinsonian condition. Furthermore, by changing an important parameter, the maximum ionic conductance density of the low-threshold calcium current, the dominant characteristic of the proposed method, namely its independence of an accurate model, can be further verified.
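
    A PI-type iterative learning control update of the kind described can be sketched directly. The plant below (a first-order lag) and the gains are invented for illustration; the point is that the trial-to-trial correction uses only the recorded error trajectory, never a model of the plant:

    ```python
    import numpy as np

    def ilc_pi(reference, trials=40, kp=1.5, ki=0.01):
        """PI-type iterative learning control: after each trial, correct the
        whole input waveform u from the recorded error trajectory, with no
        knowledge of the plant's internal dynamics."""
        n = len(reference)
        u = np.zeros(n)
        y = np.zeros(n)
        for _ in range(trials):
            # Plant, unknown to the controller: first-order lag, DC gain 0.8.
            state = 0.0
            for t in range(n):
                state += 0.3 * (0.8 * u[t] - state)
                y[t] = state
            e = reference - y
            u = u + kp * e + ki * np.cumsum(e)   # proportional + integral update
        return y

    reference = np.sin(np.linspace(0.0, np.pi, 50))
    y = ilc_pi(reference)
    final_err = float(np.max(np.abs(reference - y)))
    ```

    After a few dozen trials the output tracks the reference waveform closely even though the controller never saw the plant equations, which is the model-independence the abstract emphasizes.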

  12. Autism and the mirror neuron system: insights from learning and teaching.

    Science.gov (United States)

    Vivanti, Giacomo; Rogers, Sally J

    2014-01-01

    Individuals with autism have difficulties in social learning domains which typically involve mirror neuron system (MNS) activation. However, the precise role of the MNS in the development of autism and its relevance to treatment remain unclear. In this paper, we argue that three distinct aspects of social learning are critical for advancing knowledge in this area: (i) the mechanisms that allow for the implicit mapping of and learning from others' behaviour, (ii) the motivation to attend to and model conspecifics and (iii) the flexible and selective use of social learning. These factors are key targets of the Early Start Denver Model, an autism treatment approach which emphasizes social imitation, dyadic engagement, verbal and non-verbal communication and affect sharing. Analysis of the developmental processes and treatment-related changes in these different aspects of social learning in autism can shed light on the nature of the neuropsychological mechanisms underlying social learning and positive treatment outcomes in autism. This knowledge in turn may assist in developing more successful pedagogic approaches to autism spectrum disorder. Thus, intervention research can inform the debate on relations among neuropsychology of social learning, the role of the MNS, and educational practice in autism.

  13. Autism and the mirror neuron system: insights from learning and teaching

    Science.gov (United States)

    Vivanti, Giacomo; Rogers, Sally J.

    2014-01-01

    Individuals with autism have difficulties in social learning domains which typically involve mirror neuron system (MNS) activation. However, the precise role of the MNS in the development of autism and its relevance to treatment remain unclear. In this paper, we argue that three distinct aspects of social learning are critical for advancing knowledge in this area: (i) the mechanisms that allow for the implicit mapping of and learning from others' behaviour, (ii) the motivation to attend to and model conspecifics and (iii) the flexible and selective use of social learning. These factors are key targets of the Early Start Denver Model, an autism treatment approach which emphasizes social imitation, dyadic engagement, verbal and non-verbal communication and affect sharing. Analysis of the developmental processes and treatment-related changes in these different aspects of social learning in autism can shed light on the nature of the neuropsychological mechanisms underlying social learning and positive treatment outcomes in autism. This knowledge in turn may assist in developing more successful pedagogic approaches to autism spectrum disorder. Thus, intervention research can inform the debate on relations among neuropsychology of social learning, the role of the MNS, and educational practice in autism. PMID:24778379

  14. Aging in Sensory and Motor Neurons Results in Learning Failure in Aplysia californica.

    Directory of Open Access Journals (Sweden)

    Andrew T Kempsell

    Full Text Available The physiological and molecular mechanisms of age-related memory loss are complicated by the complexity of vertebrate nervous systems. This study takes advantage of a simple neural model to investigate nervous system aging, focusing on changes in learning and memory in the form of behavioral sensitization in vivo and synaptic facilitation in vitro. The effect of aging on the tail withdrawal reflex (TWR) was studied in Aplysia californica at maturity and late in the annual lifecycle. We found that short-term sensitization in TWR was absent in aged Aplysia. This implied that the neuronal machinery governing nonassociative learning was compromised during aging. Synaptic plasticity in the form of short-term facilitation between tail sensory and motor neurons decreased during aging whether the sensitizing stimulus was tail shock or the heterosynaptic modulator serotonin (5-HT). Together, these results suggest that the cellular mechanisms governing behavioral sensitization are compromised during aging, thereby nearly eliminating sensitization in aged Aplysia.

  15. Mirror neurons: from origin to function.

    Science.gov (United States)

    Cook, Richard; Bird, Geoffrey; Catmur, Caroline; Press, Clare; Heyes, Cecilia

    2014-04-01

    This article argues that mirror neurons originate in sensorimotor associative learning and therefore a new approach is needed to investigate their functions. Mirror neurons were discovered about 20 years ago in the monkey brain, and there is now evidence that they are also present in the human brain. The intriguing feature of many mirror neurons is that they fire not only when the animal is performing an action, such as grasping an object using a power grip, but also when the animal passively observes a similar action performed by another agent. It is widely believed that mirror neurons are a genetic adaptation for action understanding; that they were designed by evolution to fulfill a specific socio-cognitive function. In contrast, we argue that mirror neurons are forged by domain-general processes of associative learning in the course of individual development, and, although they may have psychological functions, they do not necessarily have a specific evolutionary purpose or adaptive function. The evidence supporting this view shows that (1) mirror neurons do not consistently encode action "goals"; (2) the contingency- and context-sensitive nature of associative learning explains the full range of mirror neuron properties; (3) human infants receive enough sensorimotor experience to support associative learning of mirror neurons ("wealth of the stimulus"); and (4) mirror neurons can be changed in radical ways by sensorimotor training. The associative account implies that reliable information about the function of mirror neurons can be obtained only by research based on developmental history, system-level theory, and careful experimentation.

  16. The ENU-3 protein family members function in the Wnt pathway parallel to UNC-6/Netrin to promote motor neuron axon outgrowth in C. elegans.

    Science.gov (United States)

    Florica, Roxana Oriana; Hipolito, Victoria; Bautista, Stephen; Anvari, Homa; Rapp, Chloe; El-Rass, Suzan; Asgharian, Alimohammad; Antonescu, Costin N; Killeen, Marie T

    2017-10-01

    The axons of the DA and DB classes of motor neurons fail to reach the dorsal cord in the absence of the guidance cue UNC-6/Netrin or its receptor UNC-5 in C. elegans. However, the axonal processes usually exit their cell bodies in the ventral cord in the absence of both molecules. Strains lacking functional versions of UNC-6 or UNC-5 have a low level of DA and DB motor neuron axon outgrowth defects. We found that mutations in the genes for all six of the ENU-3 proteins function to enhance the outgrowth defects of the DA and DB axons in strains lacking either UNC-6 or UNC-5. A mutation in the gene for the MIG-14/Wntless protein also enhances defects in a strain lacking either UNC-5 or UNC-6, suggesting that the ENU-3 and Wnt pathways function parallel to the Netrin pathway in directing motor neuron axon outgrowth. Our evidence suggests that the ENU-3 proteins are novel members of the Wnt pathway in nematodes. Five of the six members of the ENU-3 family are predicted to be single-pass trans-membrane proteins. The expression pattern of ENU-3.1 was consistent with plasma membrane localization. One family member, ENU-3.6, lacks the predicted signal peptide and the membrane-spanning domain. In HeLa cells ENU-3.6 had a cytoplasmic localization and caused actin dependent processes to appear. We conclude that the ENU-3 family proteins function in a pathway parallel to the UNC-6/Netrin pathway for motor neuron axon outgrowth, most likely in the Wnt pathway. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Refinement of learned skilled movement representation in motor cortex deep output layer

    Science.gov (United States)

    Li, Qian; Ko, Ho; Qian, Zhong-Ming; Yan, Leo Y. C.; Chan, Danny C. W.; Arbuthnott, Gordon; Ke, Ya; Yung, Wing-Ho

    2017-01-01

    The mechanisms underlying the emergence of learned motor skill representation in primary motor cortex (M1) are not well understood. Specifically, how motor representation in the deep output layer 5b (L5b) is shaped by motor learning remains virtually unknown. In rats undergoing motor skill training, we detect a subpopulation of task-recruited L5b neurons that not only become more movement-encoding, but their activities are also more structured and temporally aligned to motor execution with a timescale of refinement in tens-of-milliseconds. Field potentials evoked at L5b in vivo exhibit persistent long-term potentiation (LTP) that parallels motor performance. Intracortical dopamine denervation impairs motor learning, and disrupts the LTP profile as well as the emergent neurodynamical properties of task-recruited L5b neurons. Thus, dopamine-dependent recruitment of L5b neuronal ensembles via synaptic reorganization may allow the motor cortex to generate more temporally structured, movement-encoding output signal from M1 to downstream circuitry that drives increased uniformity and precision of movement during motor learning. PMID:28598433

  18. A Re-configurable On-line Learning Spiking Neuromorphic Processor comprising 256 neurons and 128K synapses

    Directory of Open Access Journals (Sweden)

    Ning Qiao

    2015-04-01

    Full Text Available Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks with short-term and long-term plasticity. The device comprises 128 K analog synapse and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm², and consumes approximately 4 mW for typical experiments, for example involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.

  19. Learning Joint-Sparse Codes for Calibration-Free Parallel MR Imaging.

    Science.gov (United States)

    Wang, Shanshan; Tan, Sha; Gao, Yuan; Liu, Qiegen; Ying, Leslie; Xiao, Taohui; Liu, Yuanyuan; Liu, Xin; Zheng, Hairong; Liang, Dong

    2018-01-01

    The integration of compressed sensing and parallel imaging (CS-PI) has gained increasing popularity in recent years as a means to accelerate magnetic resonance (MR) imaging. Among these methods, calibration-free techniques have shown encouraging performance due to their ability to handle the sensitivity information robustly. Unfortunately, existing calibration-free methods have only explored joint sparsity with direct analysis-transform projections. To further exploit joint sparsity and improve reconstruction accuracy, this paper proposes to Learn joINt-sparse coDes for caliBration-free parallEl mR imaGing (LINDBERG) by modeling the parallel MR imaging problem as a minimization objective with an ℓ2 norm constraining data fidelity, a Frobenius norm enforcing sparse representation error, and a mixed ℓ2,1 norm triggering joint sparsity across multichannels. A corresponding algorithm has been developed to alternately update the sparse representation, sensitivity-encoded images, and k-space data. Then, the final image is produced as the square root of the sum of squares of all channel images. Experimental results on both physical phantom and in vivo data sets show that the proposed method is comparable and even superior to state-of-the-art CS-PI reconstruction approaches. Specifically, LINDBERG shows a strong capability to suppress noise and artifacts while reconstructing MR images from highly undersampled multichannel measurements.
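
    The final combination step mentioned in the abstract, the root of the sum of squares over channel images, is easy to make concrete. The toy coil-sensitivity profiles below are invented for illustration and have nothing to do with the paper's data:

    ```python
    import numpy as np

    def sos_combine(channel_images):
        """Root-sum-of-squares combination of multichannel MR images:
        the per-pixel magnitude over all coil channels."""
        stack = np.asarray(channel_images)
        return np.sqrt((np.abs(stack) ** 2).sum(axis=0))

    # Toy example: two coil images with complementary sensitivity profiles.
    x = np.linspace(0.0, 1.0, 64)
    coil1 = np.outer(1 - x, np.ones(64)) * 2.0   # brighter near the top
    coil2 = np.outer(x, np.ones(64)) * 2.0       # brighter near the bottom
    combined = sos_combine([coil1, coil2])
    ```

    The combination needs no explicit coil-sensitivity maps, which is why it pairs naturally with a calibration-free reconstruction.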

  20. Reward-dependent learning in neuronal networks for planning and decision making.

    Science.gov (United States)

    Dehaene, S; Changeux, J P

    2000-01-01

    Neuronal network models have been proposed for the organization of evaluation and decision processes in prefrontal circuitry and their putative neuronal and molecular bases. The models all include an implementation and simulation of an elementary reward mechanism. Their central hypothesis is that tentative rules of behavior, which are coded by clusters of active neurons in prefrontal cortex, are selected or rejected based on an evaluation by this reward signal, which may be conveyed, for instance, by the mesencephalic dopaminergic neurons with which the prefrontal cortex is densely interconnected. At the molecular level, the reward signal is postulated to be a neurotransmitter such as dopamine, which exerts a global modulatory action on prefrontal synaptic efficacies, either via volume transmission or via targeted synaptic triads. Negative reinforcement has the effect of destabilizing the currently active rule-coding clusters; subsequently, spontaneous activity varies again from one cluster to another, giving the organism the chance to discover and learn a new rule. Thus, reward signals function as effective selection signals that either maintain or suppress currently active prefrontal representations as a function of their current adequacy. Simulations of this variation-selection have successfully accounted for the main features of several major tasks that depend on prefrontal cortex integrity, such as the delayed-response test, the Wisconsin card sorting test, the Tower of London test and the Stroop test. For the more complex tasks, we have found it necessary to supplement the external reward input with a second mechanism that supplies an internal reward; it consists of an auto-evaluation loop which short-circuits the reward input from the exterior. This allows for an internal evaluation of covert motor intentions without actualizing them as behaviors, by simply testing them covertly by comparison with memorized former experiences. This element of architecture
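
    The variation-selection principle described above can be written down as a toy program: an active "rule-coding cluster" is kept while rewarded and destabilized by negative reward, letting spontaneous variation sample another rule. The candidate rules (mimicking Wisconsin-card-sorting dimensions) and the switching scheme are illustrative assumptions, not the authors' network model:

    ```python
    import random

    def variation_selection(correct_rule, rules=('color', 'shape', 'number'),
                            trials=100, seed=0):
        """Toy variation-selection: keep the active rule while it earns
        reward; on negative reward, destabilize it and let spontaneous
        activity settle on one of the other rules."""
        rng = random.Random(seed)
        active = rng.choice(rules)
        history = []
        for _ in range(trials):
            reward = +1 if active == correct_rule else -1
            history.append((active, reward))
            if reward < 0:                    # destabilize the active cluster
                active = rng.choice([r for r in rules if r != active])
        return history

    history = variation_selection('shape')
    ```

    Once the rewarded rule is found it remains stable, while every unrewarded rule is quickly abandoned; this is the search-then-stabilize behavior the simulations of the Wisconsin card sorting test rely on.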

  1. Multiplexed Neurochemical Signaling by Neurons of the Ventral Tegmental Area

    Science.gov (United States)

    Barker, David J.; Root, David H.; Zhang, Shiliang; Morales, Marisela

    2016-01-01

    The ventral tegmental area (VTA) is an evolutionarily conserved structure that has roles in reward-seeking, safety-seeking, learning, motivation, and neuropsychiatric disorders such as addiction and depression. The involvement of the VTA in these various behaviors and disorders is paralleled by its diverse signaling mechanisms. Here we review recent advances in our understanding of neuronal diversity in the VTA with a focus on cell phenotypes that participate in ‘multiplexed’ neurotransmission involving distinct signaling mechanisms. First, we describe the cellular diversity within the VTA, including neurons capable of transmitting dopamine, glutamate or GABA as well as neurons capable of multiplexing combinations of these neurotransmitters. Next, we describe the complex synaptic architecture used by VTA neurons in order to accommodate the transmission of multiple transmitters. We specifically cover recent findings showing that VTA multiplexed neurotransmission may be mediated by either the segregation of dopamine and glutamate into distinct microdomains within a single axon or by the integration of glutamate and GABA into a single axon terminal. In addition, we discuss our current understanding of the functional role that these multiplexed signaling pathways have in the lateral habenula and the nucleus accumbens. Finally, we consider the putative roles of VTA multiplexed neurotransmission in synaptic plasticity and discuss how changes in VTA multiplexed neurons may relate to various psychopathologies including drug addiction and depression. PMID:26763116

  2. Mirror neuron system and observational learning: behavioral and neurophysiological evidence.

    Science.gov (United States)

    Lago-Rodriguez, Angel; Lopez-Alonso, Virginia; Fernández-del-Olmo, Miguel

    2013-07-01

    Three experiments were performed to study observational learning using behavioral, perceptual, and neurophysiological data. Experiment 1 investigated whether observing an execution model, during physical practice of a transitive task that only presented one execution strategy, led to performance improvements compared with physical practice alone. Experiment 2 investigated whether performing an observational learning protocol improves subjects' action perception. In experiment 3 we evaluated whether the type of practice performed determined the activation of the Mirror Neuron System during action observation. Results showed that, compared with physical practice, observing an execution model during a task that only showed one execution strategy does not provide behavioral benefits. However, an observational learning protocol allows subjects to predict more precisely the outcome of the learned task. Finally, interspersing observation of an execution model with physical practice results in changes in primary motor cortex activity during observation of the motor pattern previously practiced, whereas modulations in the connectivity between primary and non-primary motor areas (PMv-M1; PPC-M1) were not affected by the practice protocol performed by the observer. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. The Stressed Female Brain: Neuronal activity in the prelimbic but not infralimbic region of the medial prefrontal cortex suppresses learning after acute stress

    Directory of Open Access Journals (Sweden)

    Lisa Y. Maeng

    2013-12-01

    Full Text Available Women are nearly twice as likely as men to suffer from anxiety and post-traumatic stress disorder (PTSD), indicating that many females are especially vulnerable to stressful life experience. A profound sex difference in the response to stress is also observed in laboratory animals. Acute exposure to an uncontrollable stressful event disrupts associative learning during classical eyeblink conditioning in female rats but enhances this same type of learning process in males. These sex differences in response to stress are dependent on neuronal activity in similar but also different brain regions. Neuronal activity in the basolateral nucleus of the amygdala (BLA) is necessary in both males and females. However, neuronal activity in the medial prefrontal cortex (mPFC) during the stressor is necessary to modify learning in females but not in males. The mPFC is often divided into its prelimbic (PL) and infralimbic (IL) subregions, which differ both in structure and function. Because the PL connects to the BLA, we hypothesized that neuronal activity within the PL, but not the IL, during the stressor is necessary to suppress learning in females. To test this hypothesis, either the PL or IL of adult female rats was bilaterally inactivated with the GABA-A agonist muscimol during acute inescapable swim stress. 24 h later, all subjects were trained with classical eyeblink conditioning. Though stressed, females without neuronal activity in the PL learned well. In contrast, females with IL inactivation during the stressor did not learn well, behaving similarly to stressed vehicle-treated females. These data suggest that exposure to a stressful event critically engages the PL, but not the IL, to disrupt associative learning in females. Together with previous studies, these data indicate that the PL communicates with the BLA to suppress learning after a stressful experience in females. This circuit may be similarly engaged in women who become cognitively impaired after stressful

  4. Fully parallel write/read in resistive synaptic array for accelerating on-chip learning

    Science.gov (United States)

    Gao, Ligang; Wang, I.-Ting; Chen, Pai-Yu; Vrudhula, Sarma; Seo, Jae-sun; Cao, Yu; Hou, Tuo-Hung; Yu, Shimeng

    2015-11-01

    A neuro-inspired computing paradigm beyond the von Neumann architecture is emerging and it generally takes advantage of massive parallelism and is aimed at complex tasks that involve intelligence and learning. The cross-point array architecture with synaptic devices has been proposed for on-chip implementation of the weighted sum and weight update in the learning algorithms. In this work, forming-free, silicon-process-compatible Ta/TaOx/TiO2/Ti synaptic devices are fabricated, in which >200 levels of conductance states could be continuously tuned by identical programming pulses. In order to demonstrate the advantages of parallelism of the cross-point array architecture, a novel fully parallel write scheme is designed and experimentally demonstrated in a small-scale crossbar array to accelerate the weight update in the training process, at a speed that is independent of the array size. Compared to the conventional row-by-row write scheme, it achieves >30× speed-up and >30× improvement in energy efficiency as projected in a large-scale array. If realistic synaptic device characteristics such as device variations are taken into an array-level simulation, the proposed array architecture is able to achieve ∼95% recognition accuracy of MNIST handwritten digits, which is close to the accuracy achieved by software using the ideal sparse coding algorithm.
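
    The contrast between the two write schemes can be illustrated numerically: a typical learning-rule weight update is a rank-1 outer product, so one array-wide operation can replace N row writes. The array size, learning rate, and activity vectors below are arbitrary stand-ins, not the paper's device parameters:

    ```python
    import numpy as np

    def row_by_row_update(weights, x, delta, lr):
        """Conventional scheme: program the crossbar one row at a time,
        so write time grows with the number of rows."""
        w = weights.copy()
        for i in range(w.shape[0]):          # one write operation per row
            w[i, :] += lr * x[i] * delta
        return w

    def fully_parallel_update(weights, x, delta, lr):
        """Parallel scheme in the spirit of the paper: the whole rank-1
        update lr * x * delta^T is applied in a single array-wide step,
        so write time is independent of array size."""
        return weights + lr * np.outer(x, delta)

    rng = np.random.default_rng(0)
    w0 = rng.standard_normal((8, 8))
    x = rng.standard_normal(8)       # pre-synaptic activity (illustrative)
    delta = rng.standard_normal(8)   # error term of the update (illustrative)
    w_serial = row_by_row_update(w0, x, delta, lr=0.1)
    w_parallel = fully_parallel_update(w0, x, delta, lr=0.1)
    ```

    Both schemes produce identical weights; the parallel one simply collapses the row loop into a single operation, which is where the projected >30× speed-up comes from as arrays grow.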

  5. Fully parallel write/read in resistive synaptic array for accelerating on-chip learning

    International Nuclear Information System (INIS)

    Gao, Ligang; Chen, Pai-Yu; Seo, Jae-sun; Cao, Yu; Yu, Shimeng; Wang, I-Ting; Hou, Tuo-Hung; Vrudhula, Sarma

    2015-01-01

A neuro-inspired computing paradigm beyond the von Neumann architecture is emerging and it generally takes advantage of massive parallelism and is aimed at complex tasks that involve intelligence and learning. The cross-point array architecture with synaptic devices has been proposed for on-chip implementation of the weighted sum and weight update in the learning algorithms. In this work, forming-free, silicon-process-compatible Ta/TaO_x/TiO_2/Ti synaptic devices are fabricated, in which >200 levels of conductance states could be continuously tuned by identical programming pulses. In order to demonstrate the advantages of parallelism of the cross-point array architecture, a novel fully parallel write scheme is designed and experimentally demonstrated in a small-scale crossbar array to accelerate the weight update in the training process, at a speed that is independent of the array size. Compared to the conventional row-by-row write scheme, it achieves >30× speed-up and >30× improvement in energy efficiency as projected in a large-scale array. If realistic synaptic device characteristics such as device variations are taken into account in an array-level simulation, the proposed array architecture is able to achieve ∼95% recognition accuracy of MNIST handwritten digits, which is close to the accuracy achieved by software using the ideal sparse coding algorithm. (paper)

  6. Machine Learning and Parallelism in the Reconstruction of LHCb and its Upgrade

    Science.gov (United States)

    De Cian, Michel

    2016-11-01

The LHCb detector at the LHC is a general purpose detector in the forward region with a focus on reconstructing decays of c- and b-hadrons. For Run II of the LHC, a new trigger strategy with a real-time reconstruction, alignment and calibration was employed. This was made possible by implementing an offline-like track reconstruction in the high level trigger. However, the ever increasing need for a higher throughput and the move to parallelism in CPU architectures in recent years necessitated the use of vectorization techniques to achieve the desired speed and a more extensive use of machine learning to veto bad events early on. This document discusses selected improvements in computationally expensive parts of the track reconstruction, like the Kalman filter, as well as an improved approach to reject fake tracks using fast machine learning techniques. In the last part, a short overview of the track reconstruction challenges for the upgrade of LHCb is given. Running a fully software-based trigger, a large gain in speed in the reconstruction has to be achieved to cope with the 40 MHz bunch-crossing rate. Two possible approaches for techniques exploiting massive parallelization are discussed.
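The Kalman filter named above as a computational hot spot can be illustrated with a minimal example. This is a generic 1-D constant-velocity filter, a toy stand-in for the much richer multi-dimensional track fit used in LHCb reconstruction; all matrices and measurement values are illustrative assumptions.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter: state = (position, velocity).
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition per step
H = np.array([[1.0, 0.0]])               # we measure position only
Q = 1e-4 * np.eye(2)                     # process noise
R = np.array([[0.25]])                   # measurement noise

x = np.array([0.0, 0.0])                 # initial state estimate
P = np.eye(2)                            # initial covariance

measurements = [1.1, 2.0, 2.9, 4.2, 5.0] # noisy positions along a track
for z in measurements:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

# The velocity estimate x[1] should approach the true slope (~1 unit/step).
```

In a real track fit the same predict/update loop runs per detector layer with material and magnetic-field effects folded into F and Q, which is why it dominates the reconstruction cost.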

  7. Machine Learning and Parallelism in the Reconstruction of LHCb and its Upgrade

    International Nuclear Information System (INIS)

    Cian, Michel De

    2016-01-01

The LHCb detector at the LHC is a general purpose detector in the forward region with a focus on reconstructing decays of c- and b-hadrons. For Run II of the LHC, a new trigger strategy with a real-time reconstruction, alignment and calibration was employed. This was made possible by implementing an offline-like track reconstruction in the high level trigger. However, the ever increasing need for a higher throughput and the move to parallelism in CPU architectures in recent years necessitated the use of vectorization techniques to achieve the desired speed and a more extensive use of machine learning to veto bad events early on. This document discusses selected improvements in computationally expensive parts of the track reconstruction, like the Kalman filter, as well as an improved approach to reject fake tracks using fast machine learning techniques. In the last part, a short overview of the track reconstruction challenges for the upgrade of LHCb is given. Running a fully software-based trigger, a large gain in speed in the reconstruction has to be achieved to cope with the 40 MHz bunch-crossing rate. Two possible approaches for techniques exploiting massive parallelization are discussed.

  8. Local-learning-based neuron selection for grasping gesture prediction in motor brain machine interfaces

    Science.gov (United States)

    Xu, Kai; Wang, Yiwen; Wang, Yueming; Wang, Fang; Hao, Yaoyao; Zhang, Shaomin; Zhang, Qiaosheng; Chen, Weidong; Zheng, Xiaoxiang

    2013-04-01

Objective. The high-dimensional neural recordings bring computational challenges to movement decoding in motor brain machine interfaces (mBMI), especially for portable applications. However, not all recorded neural activities relate to the execution of a certain movement task. This paper proposes to use a local-learning-based method to perform neuron selection for the gesture prediction in a reaching and grasping task. Approach. Nonlinear neural activities are decomposed into a set of linear ones in a weighted feature space. A margin is defined to measure the distance between inter-class and intra-class neural patterns. The weights, reflecting the importance of neurons, are obtained by minimizing a margin-based exponential error function. To find the most dominant neurons in the task, 1-norm regularization is introduced to the objective function for sparse weights, where near-zero weights indicate irrelevant neurons. Main results. The signals of only 10 neurons out of 70 selected by the proposed method could achieve over 95% of the full recording's decoding accuracy of gesture predictions, regardless of which decoding method is used (support vector machine or K-nearest neighbor). The temporal activities of the selected neurons show visually distinguishable patterns associated with various hand states. Compared with other algorithms, the proposed method better eliminates irrelevant neurons with near-zero weights and provides the important neuron subset with the best decoding performance in statistics. The weights of important neurons usually converge within 10-20 iterations. In addition, we study the temporal and spatial variation of neuron importance over a period of one and a half months in the same task. A high decoding performance can be maintained by updating the neuron subset. Significance. The proposed algorithm effectively ascertains the neuronal importance without assuming any coding model and provides a high performance with different
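The core idea (1-norm regularization drives the weights of irrelevant neurons to near zero, so the surviving weights identify the important subset) can be sketched with a generic lasso solved by ISTA. This is a toy analogue, not the paper's margin-based method: the data, the 20-neuron/3-informative setup, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: 20 recorded "neurons" (features), but only 3 carry
# information about the motor variable.
n_trials, n_neurons = 200, 20
X = rng.normal(size=(n_trials, n_neurons))
true_w = np.zeros(n_neurons)
true_w[[2, 7, 13]] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.1 * rng.normal(size=n_trials)

# ISTA: gradient step on the squared loss, then soft-thresholding,
# which is the proximal operator of the 1-norm penalty.
w = np.zeros(n_neurons)
lam, lr = 0.05, 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / n_trials
    w -= lr * grad
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

# Neurons whose weights survive the shrinkage form the selected subset.
selected = np.flatnonzero(np.abs(w) > 0.05)
```

Decoding could then be run on `X[:, selected]` alone, mirroring the paper's finding that a small subset preserves most of the accuracy.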

  9. Superior Generalization Capability of Hardware-Learning Algorithm Developed for Self-Learning Neuron-MOS Neural Networks

    Science.gov (United States)

    Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro

    1995-02-01

    We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.

  10. The R package "sperrorest" : Parallelized spatial error estimation and variable importance assessment for geospatial machine learning

    Science.gov (United States)

    Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander

    2017-04-01

Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their still more widely used non-spatial equivalents. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation): The first one is the parallelized version of sperrorest(), parsperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and internally calls (depending on the platform) parallel::mclapply() or parallel::parApply() in the background. While forking is used on Unix systems, Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization. This method uses a different way of cluster parallelization than the parallel package does. In summary, the robustness of parsperrorest() is increased with the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv(). This function gives the user the
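The key statistical idea (spatial rather than random partitioning, so that test points are not surrounded by spatially autocorrelated training neighbours) can be sketched outside R. The `partition_blocks` helper below is a hypothetical Python analogue of sperrorest's spatial partitioning, not its actual API.

```python
import numpy as np

rng = np.random.default_rng(2)

def partition_blocks(x, y, k=2):
    """Assign each point a fold id from a k-by-k spatial grid of blocks."""
    col = np.minimum((x * k).astype(int), k - 1)
    row = np.minimum((y * k).astype(int), k - 1)
    return row * k + col

# 100 points with coordinates in the unit square.
x = rng.uniform(size=100)
y = rng.uniform(size=100)
folds = partition_blocks(x, y, k=2)

# Each fold is a spatially contiguous block; holding one block out at a
# time gives spatial cross-validation instead of random k-fold splits.
```

Each fold of such a partition can then be evaluated independently, which is exactly why the cross-validation loop parallelizes so well.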

  11. Short- and long-term memory in Drosophila require cAMP signaling in distinct neuron types.

    Science.gov (United States)

    Blum, Allison L; Li, Wanhe; Cressy, Mike; Dubnau, Josh

    2009-08-25

    A common feature of memory and its underlying synaptic plasticity is that each can be dissected into short-lived forms involving modification or trafficking of existing proteins and long-term forms that require new gene expression. An underlying assumption of this cellular view of memory consolidation is that these different mechanisms occur within a single neuron. At the neuroanatomical level, however, different temporal stages of memory can engage distinct neural circuits, a notion that has not been conceptually integrated with the cellular view. Here, we investigated this issue in the context of aversive Pavlovian olfactory memory in Drosophila. Previous studies have demonstrated a central role for cAMP signaling in the mushroom body (MB). The Ca(2+)-responsive adenylyl cyclase RUTABAGA is believed to be a coincidence detector in gamma neurons, one of the three principle classes of MB Kenyon cells. We were able to separately restore short-term or long-term memory to a rutabaga mutant with expression of rutabaga in different subsets of MB neurons. Our findings suggest a model in which the learning experience initiates two parallel associations: a short-lived trace in MB gamma neurons, and a long-lived trace in alpha/beta neurons.

  12. PBODL : Parallel Bayesian Online Deep Learning for Click-Through Rate Prediction in Tencent Advertising System

    OpenAIRE

    Liu, Xun; Xue, Wei; Xiao, Lei; Zhang, Bo

    2017-01-01

We describe a parallel Bayesian online deep learning framework (PBODL) for click-through rate (CTR) prediction within today's Tencent advertising system, which provides quick and accurate learning of user preferences. We first explain the framework with a deep probit regression model, which is trained with probabilistic back-propagation in the mode of assumed Gaussian density filtering. Then we extend the model family to a variety of Bayesian online models with increasing feature embedding ca...

  13. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy.

    Science.gov (United States)

    Tian, Yuling; Zhang, Hongxian

    2016-01-01

For the purposes of information retrieval, users must find highly relevant documents from within a system (and often a quite large one comprised of many individual documents) based on input query. Ranking the documents according to their relevance within the system to meet user needs is a challenging endeavor and a hot research topic; there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others with respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm indeed effectively and rapidly identifies optimal ranking functions.

  14. Precise synaptic efficacy alignment suggests potentiation dominated learning

    Directory of Open Access Journals (Sweden)

Christoph Hartmann

    2016-01-01

Full Text Available Recent evidence suggests that parallel synapses from the same axonal branch onto the same dendritic branch have almost identical strength. It has been proposed that this alignment is only possible through learning rules that integrate activity over long time spans. However, learning mechanisms such as spike-timing-dependent plasticity (STDP) are commonly assumed to be temporally local. Here, we propose that the combination of temporally local STDP and a multiplicative synaptic normalization mechanism is sufficient to explain the alignment of parallel synapses. To address this issue, we introduce three increasingly complex models: First, we model the idealized interaction of STDP and synaptic normalization in a single neuron as a simple stochastic process and derive analytically that the alignment effect can be described by a so-called Kesten process. From this we can derive that synaptic efficacy alignment requires potentiation-dominated learning regimes. We verify these conditions in a single-neuron model with independent spiking activities but more realistic synapses. As expected, we only observe synaptic efficacy alignment for long-term potentiation-biased STDP. Finally, we explore how well the findings transfer to recurrent neural networks where the learning mechanisms interact with the correlated activity of the network. We find that due to the self-reinforcing correlations in recurrent circuits under STDP, alignment occurs for both long-term potentiation- and depression-biased STDP, because the learning will be potentiation dominated in both cases due to the potentiating events induced by correlated activity. This is in line with recent results demonstrating a dominance of potentiation over depression during waking and normalization during sleep. This leads us to predict that individual spine pairs will be more similar in the morning than they are after sleep deprivation. In conclusion, we show that synaptic normalization in conjunction with
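The Kesten-style argument sketched in the abstract can be made concrete with a toy simulation, assuming all numbers (rates, step sizes, synapse counts) are illustrative rather than taken from the paper: two parallel synapses share the same pre/post spike pairing, so additive potentiation events are identical for the pair, while multiplicative normalization rescales every weight on the neuron. The pair's weight *difference* is therefore repeatedly multiplied by a factor below one and shrinks away.

```python
import numpy as np

rng = np.random.default_rng(3)

n_other = 98                      # other synapses on the same neuron
w_pair = np.array([0.2, 1.0])     # parallel pair, initially very different
w_rest = rng.uniform(0.2, 1.0, size=n_other)
total = w_pair.sum() + w_rest.sum()

for _ in range(2000):
    # Shared potentiation event on the pair (same axon, same spike pairing).
    if rng.random() < 0.2:
        w_pair += 0.01
    # Independent potentiation of the other synapses.
    mask = rng.random(n_other) < 0.2
    w_rest[mask] += 0.01
    # Multiplicative normalization: keep total synaptic weight constant.
    scale = total / (w_pair.sum() + w_rest.sum())
    w_pair *= scale
    w_rest *= scale

# Additive events leave the pair's difference unchanged; each multiplicative
# rescaling shrinks it, so the pair aligns under potentiation dominance.
alignment = abs(w_pair[0] - w_pair[1])
```

Starting from a difference of 0.8, the pair ends up nearly identical, which is the alignment effect the Kesten-process analysis predicts.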

  15. Parallel learning in an autoshaping paradigm.

    Science.gov (United States)

    Naeem, Maliha; White, Norman M

    2016-08-01

    In an autoshaping task, a single conditioned stimulus (CS; lever insertion) was repeatedly followed by the delivery of an unconditioned stimulus (US; food pellet into an adjacent food magazine) irrespective of the rats' behavior. After repeated training trials, some rats responded to the onset of the CS by approaching and pressing the lever (sign-trackers). Lesions of dorsolateral striatum almost completely eliminated responding to the lever CS while facilitating responding to the food magazine (US). Lesions of the dorsomedial striatum attenuated but did not eliminate responding to the lever CS. Lesions of the basolateral or central nucleus of the amygdala had no significant effects on sign-tracking, but combined lesions of the 2 structures impaired sign-tracking by significantly increasing latency to the first lever press without affecting the number of lever presses. Lesions of the dorsal hippocampus had no effect on any of the behavioral measures. The findings suggest that sign-tracking with a single lever insertion as the CS may consist of 2 separate behaviors learned in parallel: An amygdala-mediated conditioned orienting and approach response and a dorsal striatum-mediated instrumental response. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Learning of spatial relationships between observed and imitated actions allows invariant inverse computation in the frontal mirror neuron system.

    Science.gov (United States)

    Oh, Hyuk; Gentili, Rodolphe J; Reggia, James A; Contreras-Vidal, José L

    2011-01-01

    It has been suggested that the human mirror neuron system can facilitate learning by imitation through coupling of observation and action execution. During imitation of observed actions, the functional relationship between and within the inferior frontal cortex, the posterior parietal cortex, and the superior temporal sulcus can be modeled within the internal model framework. The proposed biologically plausible mirror neuron system model extends currently available models by explicitly modeling the intraparietal sulcus and the superior parietal lobule in implementing the function of a frame of reference transformation during imitation. Moreover, the model posits the ventral premotor cortex as performing an inverse computation. The simulations reveal that: i) the transformation system can learn and represent the changes in extrinsic to intrinsic coordinates when an imitator observes a demonstrator; ii) the inverse model of the imitator's frontal mirror neuron system can be trained to provide the motor plans for the imitated actions.

  17. Energy-efficient STDP-based learning circuits with memristor synapses

    Science.gov (United States)

    Wu, Xinyu; Saxena, Vishal; Campbell, Kristy A.

    2014-05-01

It is now accepted that the traditional von Neumann architecture, with processor and memory separation, is ill suited to process parallel data streams which a mammalian brain can efficiently handle. Moreover, researchers now envision computing architectures which enable cognitive processing of massive amounts of data by identifying spatio-temporal relationships in real-time and solving complex pattern recognition problems. Memristor cross-point arrays, integrated with standard CMOS technology, are expected to result in massively parallel and low-power neuromorphic computing architectures. Recently, significant progress has been made in spiking neural networks (SNN) which emulate data processing in the cortical brain. These architectures comprise a dense network of neurons and the synapses formed between the axons and dendrites. Further, unsupervised or supervised competitive learning schemes are being investigated for global training of the network. In contrast to a software implementation, hardware realization of these networks requires massive circuit overhead for addressing and individually updating network weights. Instead, we employ bio-inspired learning rules such as the spike-timing-dependent plasticity (STDP) to efficiently update the network weights locally. To realize SNNs on a chip, we propose densely integrating mixed-signal integrate-and-fire neurons (IFNs) and cross-point arrays of memristors in the back-end-of-the-line (BEOL) of CMOS chips. Novel IFN circuits have been designed to drive memristive synapses in parallel while maintaining overall power efficiency (<1 pJ/spike/synapse), even at spike rates greater than 10 MHz. We present circuit design details and simulation results of the IFN with memristor synapses, its response to incoming spike trains and STDP learning characterization.
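The building blocks named above can be sketched in software: a leaky integrate-and-fire neuron whose input weights (the memristor conductances in hardware) are updated by pair-based STDP using local activity traces. All constants here are illustrative assumptions, not values from the paper's circuits.

```python
import numpy as np

rng = np.random.default_rng(4)

dt, tau_m, v_th = 1.0, 20.0, 1.0        # membrane dynamics
tau_pre, tau_post = 20.0, 20.0          # STDP trace time constants
a_plus, a_minus = 0.02, 0.021           # LTP / LTD amplitudes

n_in, steps = 10, 1000
w = rng.uniform(0.2, 0.6, size=n_in)    # synaptic weights in [0, 1]
v, post_trace, n_spikes = 0.0, 0.0, 0
pre_trace = np.zeros(n_in)

for _ in range(steps):
    pre = rng.random(n_in) < 0.05       # Poisson-like input spikes
    v += dt * (-v / tau_m) + w[pre].sum()   # leaky integration
    pre_trace -= dt * pre_trace / tau_pre
    pre_trace[pre] += 1.0
    post_trace -= dt * post_trace / tau_post
    # LTD: a pre spike arriving after recent post activity depresses w.
    w[pre] -= a_minus * post_trace
    if v >= v_th:                       # postsynaptic spike -> reset
        v, n_spikes = 0.0, n_spikes + 1
        post_trace += 1.0
        # LTP: inputs active shortly before the post spike potentiate.
        w += a_plus * pre_trace
    w = np.clip(w, 0.0, 1.0)            # bounded conductance range
```

The appeal of STDP for hardware, as the abstract notes, is that every update depends only on quantities local to one synapse, so no global addressing circuitry is needed.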

  18. Bayesian Network Constraint-Based Structure Learning Algorithms: Parallel and Optimized Implementations in the bnlearn R Package

    Directory of Open Access Journals (Sweden)

    Marco Scutari

    2017-03-01

Full Text Available It is well known in the literature that the problem of learning the structure of Bayesian networks is very hard to tackle: Its computational complexity is super-exponential in the number of nodes in the worst case and polynomial in most real-world scenarios. Efficient implementations of score-based structure learning benefit from past and current research in optimization theory, which can be adapted to the task by using the network score as the objective function to maximize. This is not true for approaches based on conditional independence tests, called constraint-based learning algorithms. The only optimization in widespread use, backtracking, leverages the symmetries implied by the definitions of neighborhood and Markov blanket. In this paper we illustrate how backtracking is implemented in recent versions of the bnlearn R package, and how it degrades the stability of Bayesian network structure learning for little gain in terms of speed. As an alternative, we describe a software architecture and framework that can be used to parallelize constraint-based structure learning algorithms (also implemented in bnlearn) and we demonstrate its performance using four reference networks and two real-world data sets from genetics and systems biology. We show that on modern multi-core or multiprocessor hardware parallel implementations are preferable over backtracking, which was developed when single-processor machines were the norm.
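What makes constraint-based learning parallelizable is that the independence tests for different candidate edges are independent of each other. The sketch below dispatches per-edge dependence measures to a worker pool; it is a schematic analogue of the idea, with threads and a plain absolute correlation standing in for bnlearn's process/cluster workers and proper conditional independence tests.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(5)

# Toy data: B depends on A; C is independent of both.
n = 500
a = rng.normal(size=n)
b = 0.8 * a + 0.2 * rng.normal(size=n)
c = rng.normal(size=n)
data = {"A": a, "B": b, "C": c}

def abs_correlation(pair):
    """Marginal dependence measure for one candidate edge."""
    x, y = pair
    return (x, y), abs(np.corrcoef(data[x], data[y])[0, 1])

# Each candidate edge is tested independently, so the tests can run
# concurrently on separate workers.
edges = [("A", "B"), ("A", "C"), ("B", "C")]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(abs_correlation, edges))
```

Edges whose measure falls below a threshold would be pruned, and the surviving skeleton oriented in later phases of the algorithm.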

  19. Mesmerising mirror neurons.

    Science.gov (United States)

    Heyes, Cecilia

    2010-06-01

    Mirror neurons have been hailed as the key to understanding social cognition. I argue that three currents of thought-relating to evolution, atomism and telepathy-have magnified the perceived importance of mirror neurons. When they are understood to be a product of associative learning, rather than an adaptation for social cognition, mirror neurons are no longer mesmerising, but they continue to raise important questions about both the psychology of science and the neural bases of social cognition. Copyright 2010 Elsevier Inc. All rights reserved.

  20. Parallel processing and learning in simple systems. Final report, 10 January 1986-14 January 1989

    Energy Technology Data Exchange (ETDEWEB)

    Mpitsos, G.J.

    1989-03-15

Work over the three-year tenure of this grant has dealt with interrelated studies of (1) neuropharmacology, (2) behavior, and (3) distributed/parallel processing in the generation of variable motor patterns in the buccal-oral system of the sea slug Pleurobranchaea californica. (4) Computer simulations of simple neural networks have been undertaken to examine neurointegrative principles that could not be examined in biological preparations. The simulation work has set the basis for further simulations dealing with networks having characteristics relating to real neurons. All of the work has had the goal of developing interdisciplinary tools for understanding the scale-independent problem of how individuals, each possessing only local knowledge of group activity, act within a group to produce different and variable adaptive outputs, and, in turn, of how the group influences the activity of the individual. The pharmacologic studies have had the goal of developing biochemical tools with which to identify groups of neurons that perform specific tasks during the production of a given behavior but are multifunctional by being critically involved in generating several different behaviors.

  1. Learning of Spatial Relationships between Observed and Imitated Actions allows Invariant Inverse Computation in the Frontal Mirror Neuron System

    Science.gov (United States)

    Oh, Hyuk; Gentili, Rodolphe J.; Reggia, James A.; Contreras-Vidal, José L.

    2014-01-01

    It has been suggested that the human mirror neuron system can facilitate learning by imitation through coupling of observation and action execution. During imitation of observed actions, the functional relationship between and within the inferior frontal cortex, the posterior parietal cortex, and the superior temporal sulcus can be modeled within the internal model framework. The proposed biologically plausible mirror neuron system model extends currently available models by explicitly modeling the intraparietal sulcus and the superior parietal lobule in implementing the function of a frame of reference transformation during imitation. Moreover, the model posits the ventral premotor cortex as performing an inverse computation. The simulations reveal that: i) the transformation system can learn and represent the changes in extrinsic to intrinsic coordinates when an imitator observes a demonstrator; ii) the inverse model of the imitator’s frontal mirror neuron system can be trained to provide the motor plans for the imitated actions. PMID:22255261

  2. Where do mirror neurons come from?

    Science.gov (United States)

    Heyes, Cecilia

    2010-03-01

    Debates about the evolution of the 'mirror neuron system' imply that it is an adaptation for action understanding. Alternatively, mirror neurons may be a byproduct of associative learning. Here I argue that the adaptation and associative hypotheses both offer plausible accounts of the origin of mirror neurons, but the associative hypothesis has three advantages. First, it provides a straightforward, testable explanation for the differences between monkeys and humans that have led some researchers to question the existence of a mirror neuron system. Second, it is consistent with emerging evidence that mirror neurons contribute to a range of social cognitive functions, but do not play a dominant, specialised role in action understanding. Finally, the associative hypothesis is supported by recent data showing that, even in adulthood, the mirror neuron system can be transformed by sensorimotor learning. The associative account implies that mirror neurons come from sensorimotor experience, and that much of this experience is obtained through interaction with others. Therefore, if the associative account is correct, the mirror neuron system is a product, as well as a process, of social interaction. (c) 2009 Elsevier Ltd. All rights reserved.

  3. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
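The tables described in this record rest on the reciprocal rule for parallel resistors, 1/R_total = 1/R1 + 1/R2 + …. A short sketch with exact rational arithmetic shows a few of the whole-number combinations such tables collect (the specific resistor values are illustrative, not taken from the article).

```python
from fractions import Fraction

def parallel_resistance(*resistors):
    """Total resistance of resistors in parallel, computed exactly."""
    return 1 / sum(Fraction(1, r) for r in resistors)

# 6 ohms parallel 3 ohms -> 2 ohms (a whole-number combination)
r1 = parallel_resistance(6, 3)
# Two equal resistors halve the resistance: 4 || 4 -> 2 ohms
r2 = parallel_resistance(4, 4)
# 10 ohms parallel 15 ohms -> 6 ohms
r3 = parallel_resistance(10, 15)
```

Using `Fraction` avoids floating-point round-off, so a combination either is or is not a whole number, exactly as in the classroom tables.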

  4. An FPGA-based silicon neuronal network with selectable excitability silicon neurons

    Directory of Open Access Journals (Sweden)

Jing Li

    2012-12-01

Full Text Available This paper presents a digital silicon neuronal network that simulates the nervous system of living creatures and can execute intelligent tasks, such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow the network to show rich dynamic behaviors and are computationally efficient for hardware implementation. We adopt a mixed pipelined and parallel structure and shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network, with 256 fully connected neurons, is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to accomplish the task of data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases with the similarity between the input and the stored pattern. Synchronization of neurons is observed when successful retrieval of a stored pattern occurs.

  5. Neurons in primary motor cortex engaged during action observation.

    Science.gov (United States)

    Dushanova, Juliana; Donoghue, John

    2010-01-01

    Neurons in higher cortical areas appear to become active during action observation, either by mirroring observed actions (termed mirror neurons) or by eliciting mental rehearsal of observed motor acts. We report the existence of neurons in the primary motor cortex (M1), an area that is generally considered to initiate and guide movement performance, responding to viewed actions. Multielectrode recordings in monkeys performing or observing a well-learned step-tracking task showed that approximately half of the M1 neurons that were active when monkeys performed the task were also active when they observed the action being performed by a human. These 'view' neurons were spatially intermingled with 'do' neurons, which are active only during movement performance. Simultaneously recorded 'view' neurons comprised two groups: approximately 38% retained the same preferred direction (PD) and timing during performance and viewing, and the remainder (62%) changed their PDs and time lag during viewing as compared with performance. Nevertheless, population activity during viewing was sufficient to predict the direction and trajectory of viewed movements as action unfolded, although less accurately than during performance. 'View' neurons became less active and contained poorer representations of action when only subcomponents of the task were being viewed. M1 'view' neurons thus appear to reflect aspects of a learned movement when observed in others, and form part of a broadly engaged set of cortical areas routinely responding to learned behaviors. These findings suggest that viewing a learned action elicits replay of aspects of M1 activity needed to perform the observed action, and could additionally reflect processing related to understanding, learning or mentally rehearsing action.
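The claim that population activity predicts viewed-movement direction can be illustrated with classic population-vector decoding under cosine tuning. This is a generic textbook sketch, not the study's decoder; the cell count, tuning model, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Each cell fires most for its preferred direction (PD), with cosine tuning.
n_cells = 100
pds = rng.uniform(0, 2 * np.pi, size=n_cells)   # preferred directions
true_dir = np.pi / 3                            # actual movement direction

rates = 1.0 + np.cos(true_dir - pds)            # cosine tuning curves
rates += 0.1 * rng.normal(size=n_cells)         # measurement noise

# Population vector: baseline-subtracted rates weight each cell's unit
# PD vector; the sum points in the movement direction.
weights = rates - rates.mean()
pv_x = (weights * np.cos(pds)).sum()
pv_y = (weights * np.sin(pds)).sum()
decoded = np.arctan2(pv_y, pv_x)
```

The finding that some 'view' neurons change their PDs during observation would, in this picture, degrade but not abolish the decoded direction, matching the reported lower accuracy during viewing.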

  6. Learning to control a brain-machine interface for reaching and grasping by primates.

    Directory of Open Access Journals (Sweden)

    Jose M Carmena

    2003-11-01

Full Text Available Reaching and grasping in primates depend on the coordination of neural activity in large frontoparietal ensembles. Here we demonstrate that primates can learn to reach and grasp virtual objects by controlling a robot arm through a closed-loop brain-machine interface (BMIc) that uses multiple mathematical models to extract several motor parameters (i.e., hand position, velocity, gripping force, and the EMGs of multiple arm muscles) from the electrical activity of frontoparietal neuronal ensembles. As single neurons typically contribute to the encoding of several motor parameters, we observed that high BMIc accuracy required recording from large neuronal ensembles. Continuous BMIc operation by monkeys led to significant improvements in both model predictions and behavioral performance. Using visual feedback, monkeys succeeded in producing robot reach-and-grasp movements even when their arms did not move. Learning to operate the BMIc was paralleled by functional reorganization in multiple cortical areas, suggesting that the dynamic properties of the BMIc were incorporated into motor and sensory cortical representations.
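The simplest of the "multiple mathematical models" used in such BMIs is a linear map from ensemble firing rates to a motor parameter, fitted by least squares. The sketch below is a generic illustration with synthetic data (sizes, noise level, and the linear ground truth are all assumptions), not the study's actual decoder.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic ensemble: firing rates of 50 neurons, linearly related to
# 2-D hand position plus noise.
n_samples, n_neurons = 400, 50
true_map = rng.normal(size=(n_neurons, 2))
rates = rng.normal(size=(n_samples, n_neurons))
hand_pos = rates @ true_map + 0.1 * rng.normal(size=(n_samples, 2))

# Fit the linear decoder on the first half, evaluate on the second half.
W, *_ = np.linalg.lstsq(rates[:200], hand_pos[:200], rcond=None)
pred = rates[200:] @ W
ss_res = np.sum((pred - hand_pos[200:]) ** 2)
ss_tot = np.sum((hand_pos[200:] - hand_pos[200:].mean(axis=0)) ** 2)
r2 = 1 - ss_res / ss_tot   # held-out goodness of fit
```

Because each neuron contributes only weakly and redundantly, held-out accuracy in such models improves as more neurons are pooled, echoing the paper's observation that high accuracy required large ensembles.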

  7. New technologies for examining neuronal ensembles in drug addiction and fear

    Science.gov (United States)

    Cruz, Fabio C.; Koya, Eisuke; Guez-Barber, Danielle H.; Bossert, Jennifer M.; Lupica, Carl R.; Shaham, Yavin; Hope, Bruce T.

    2015-01-01

    Correlational data suggest that learned associations are encoded within neuronal ensembles. However, it has been difficult to prove that neuronal ensembles mediate learned behaviours because traditional pharmacological and lesion methods, and even newer cell type-specific methods, affect both activated and non-activated neurons. Additionally, previous studies on synaptic and molecular alterations induced by learning did not distinguish between behaviourally activated and non-activated neurons. Here, we describe three new approaches — Daun02 inactivation, FACS sorting of activated neurons and c-fos-GFP transgenic rats — that have been used to selectively target and study activated neuronal ensembles in models of conditioned drug effects and relapse. We also describe two new tools — c-fos-tTA mice and inactivation of CREB-overexpressing neurons — that have been used to study the role of neuronal ensembles in conditioned fear. PMID:24088811

  8. Temporal sequence learning in winner-take-all networks of spiking neurons demonstrated in a brain-based device.

    Science.gov (United States)

    McKinstry, Jeffrey L; Edelman, Gerald M

    2013-01-01

    Animal behavior often involves a temporally ordered sequence of actions learned from experience. Here we describe simulations of interconnected networks of spiking neurons that learn to generate patterns of activity in correct temporal order. The simulation consists of large-scale networks of thousands of excitatory and inhibitory neurons that exhibit short-term synaptic plasticity and spike-timing dependent synaptic plasticity. The neural architecture within each area is arranged to evoke winner-take-all (WTA) patterns of neural activity that persist for tens of milliseconds. In order to generate and switch between consecutive firing patterns in correct temporal order, a reentrant exchange of signals between these areas was necessary. To demonstrate the capacity of this arrangement, we used the simulation to train a brain-based device responding to visual input by autonomously generating temporal sequences of motor actions.
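
    The winner-take-all dynamics described above can be illustrated with a much simpler rate-based caricature, in which self-excitation and uniform lateral inhibition let the most strongly driven unit suppress the others. All constants below are illustrative; the paper's model uses large spiking networks with short-term and spike-timing dependent plasticity.

```python
import numpy as np

def wta(inputs, steps=200, dt=0.1, self_exc=0.5, inhib=1.5):
    """Rectified rate dynamics with self-excitation and uniform lateral
    inhibition; the most strongly driven unit suppresses the rest."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        drive = inputs + self_exc * x - inhib * (x.sum() - x)
        x = x + dt * (-x + np.maximum(drive, 0.0))  # Euler step
    return x

rates = wta(np.array([0.3, 1.0, 0.6]))
print("steady-state rates:", np.round(rates, 3))
print("winner index:", int(np.argmax(rates)))
```

    Switching between consecutive winners in correct temporal order, as in the paper, requires additional circuitry (for example adaptation or the reentrant inter-area signals the authors describe) that pushes the network out of the current attractor.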

  9. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elucidate the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with their CPU counterparts depending on the algorithm. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
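
    Threshold-based peak detection of the kind mentioned above reduces, per sample, to an independent comparison against a threshold and the neighbouring samples, which is what makes it map well onto thousands of GPU threads. A vectorized NumPy stand-in for such a kernel is sketched below on a synthetic trace (threshold and spike amplitudes are arbitrary).

```python
import numpy as np

def detect_peaks(signal, thresh):
    """Vectorized local-maxima detection above a threshold -- a CPU
    stand-in for the data-parallel peak detection a GPU kernel performs."""
    s = np.asarray(signal, dtype=float)
    # A sample is a peak if it exceeds the threshold and both neighbours.
    mid = s[1:-1]
    is_peak = (mid > thresh) & (mid > s[:-2]) & (mid >= s[2:])
    return np.flatnonzero(is_peak) + 1   # +1 offsets the trimmed edge

# Noisy trace with two clear "spikes" at samples 30 and 70.
rng = np.random.default_rng(2)
trace = rng.normal(0, 0.1, 100)
trace[30] += 5.0
trace[70] += 5.0
print("detected peaks at:", detect_peaks(trace, thresh=1.0))
```

    Each element-wise comparison is independent, so the same logic can be expressed as one GPU thread per sample; compacting the boolean mask into indices (`np.flatnonzero` here) corresponds to the parallel compact operation the toolbox proposes.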

  10. Scalable, incremental learning with MapReduce parallelization for cell detection in high-resolution 3D microscopy data

    KAUST Repository

    Sung, Chul

    2013-08-01

    Accurate estimation of neuronal count and distribution is central to the understanding of the organization and layout of cortical maps in the brain, and changes in the cell population induced by brain disorders. High-throughput 3D microscopy techniques such as Knife-Edge Scanning Microscopy (KESM) are enabling whole-brain survey of neuronal distributions. Data from such techniques pose serious challenges to quantitative analysis due to the massive, growing, and sparsely labeled nature of the data. In this paper, we present a scalable, incremental learning algorithm for cell body detection that can address these issues. Our algorithm is computationally efficient (linear mapping, non-iterative) and does not require retraining (unlike gradient-based approaches) or retention of old raw data (unlike instance-based learning). We tested our algorithm on our rat brain Nissl data set, showing superior performance compared to an artificial neural network-based benchmark, and also demonstrated robust performance in a scenario where the data set is rapidly growing in size. Our algorithm is also highly parallelizable due to its incremental nature, and we demonstrated this empirically using a MapReduce-based implementation of the algorithm. We expect our scalable, incremental learning approach to be widely applicable to medical imaging domains where there is a constant flux of new data. © 2013 IEEE.
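
    The key properties claimed above (linear mapping, non-iterative, no retraining, no retention of old raw data) can be illustrated with incremental linear least squares, where each batch only updates fixed-size sufficient statistics. This is a sketch of the general idea, not the paper's exact cell-detection pipeline.

```python
import numpy as np

class IncrementalLinear:
    """Linear least squares fitted from running sufficient statistics:
    each batch adds to X^T X and X^T y, so old raw data need not be kept
    and no gradient-based retraining is required."""
    def __init__(self, n_features):
        self.A = np.zeros((n_features, n_features))
        self.b = np.zeros(n_features)

    def partial_fit(self, X, y):
        self.A += X.T @ X          # fixed-size statistics, additive update
        self.b += X.T @ y
        return self

    def predict(self, X):
        # Small ridge term keeps the solve stable before much data arrives.
        w = np.linalg.solve(self.A + 1e-8 * np.eye(len(self.A)), self.b)
        return X @ w

rng = np.random.default_rng(3)
true_w = np.array([2.0, -1.0, 0.5])
model = IncrementalLinear(3)
for _ in range(5):                              # five incoming data batches
    X = rng.normal(size=(50, 3))
    model.partial_fit(X, X @ true_w + rng.normal(0, 0.01, size=50))

X_test = rng.normal(size=(10, 3))
err = np.abs(model.predict(X_test) - X_test @ true_w).max()
print(f"max prediction error: {err:.4f}")
```

    Because the updates are additive, batches can be processed independently (map) and their statistics summed (reduce), which is what makes such learners naturally MapReduce-parallelizable.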

  11. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

    Histopathology images are critical for medical diagnosis, e.g., cancer and its treatment. A standard histopathology slice can be easily scanned at a high resolution of, say, 200,000×200,000 pixels. These high resolution images can make most existing image processing tools infeasible or less effective when operated on a single machine with limited memory, disk space and computing power. In this paper, we propose an algorithm tackling this new emerging "big data" problem utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The proposed framework provides an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly supervised image classification, segmentation and clustering. When a max-margin concept is adopted for different clusters, we obtain further improvement in clustering performance.

  12. Learning-Induced Gene Expression in the Hippocampus Reveals a Role of Neuron -Astrocyte Metabolic Coupling in Long Term Memory

    KAUST Repository

    Tadi, Monika; Allaman, Igor; Lengacher, Sylvain; Grenningloh, Gabriele; Magistretti, Pierre J.

    2015-01-01

    We examined the expression of genes related to brain energy metabolism and particularly those encoding glia (astrocyte)-specific functions in the dorsal hippocampus subsequent to learning. Context-dependent avoidance behavior was tested in mice using the step-through Inhibitory Avoidance (IA) paradigm. Animals were sacrificed 3, 9, 24, or 72 hours after training or 3 hours after retention testing. The quantitative determination of mRNA levels revealed learning-induced changes in the expression of genes thought to be involved in astrocyte-neuron metabolic coupling in a time dependent manner. Twenty four hours following IA training, an enhanced gene expression was seen, particularly for genes encoding monocarboxylate transporters 1 and 4 (MCT1, MCT4), alpha2 subunit of the Na/K-ATPase and glucose transporter type 1. To assess the functional role for one of these genes in learning, we studied MCT1 deficient mice and found that they exhibit impaired memory in the inhibitory avoidance task. Together, these observations indicate that neuron-glia metabolic coupling undergoes metabolic adaptations following learning as indicated by the change in expression of key metabolic genes.

  13. Learning-Induced Gene Expression in the Hippocampus Reveals a Role of Neuron -Astrocyte Metabolic Coupling in Long Term Memory

    KAUST Repository

    Tadi, Monika

    2015-10-29

    We examined the expression of genes related to brain energy metabolism and particularly those encoding glia (astrocyte)-specific functions in the dorsal hippocampus subsequent to learning. Context-dependent avoidance behavior was tested in mice using the step-through Inhibitory Avoidance (IA) paradigm. Animals were sacrificed 3, 9, 24, or 72 hours after training or 3 hours after retention testing. The quantitative determination of mRNA levels revealed learning-induced changes in the expression of genes thought to be involved in astrocyte-neuron metabolic coupling in a time dependent manner. Twenty four hours following IA training, an enhanced gene expression was seen, particularly for genes encoding monocarboxylate transporters 1 and 4 (MCT1, MCT4), alpha2 subunit of the Na/K-ATPase and glucose transporter type 1. To assess the functional role for one of these genes in learning, we studied MCT1 deficient mice and found that they exhibit impaired memory in the inhibitory avoidance task. Together, these observations indicate that neuron-glia metabolic coupling undergoes metabolic adaptations following learning as indicated by the change in expression of key metabolic genes.

  14. Age-dependent loss of cholinergic neurons in learning and memory-related brain regions and impaired learning in SAMP8 mice with trigeminal nerve damage

    Institute of Scientific and Technical Information of China (English)

    Yifan He; Jihong Zhu; Fang Huang; Liu Qin; Wenguo Fan; Hongwen He

    2014-01-01

    The tooth belongs to the trigeminal sensory pathway. Dental damage has been associated with impairments in the central nervous system that may be mediated by injury to the trigeminal nerve. In the present study, we investigated the effects of damage to the inferior alveolar nerve, an important peripheral nerve in the trigeminal sensory pathway, on learning and memory behaviors and structural changes in related brain regions, in a mouse model of Alzheimer’s disease. Inferior alveolar nerve transection or sham surgery was performed in middle-aged (4-month-old) or elderly (7-month-old) senescence-accelerated mouse prone 8 (SAMP8) mice. When the middle-aged mice reached 8 months (middle-aged group 1) or 11 months (middle-aged group 2), and the elderly group reached 11 months, step-down passive avoidance and Y-maze tests of learning and memory were performed, and the cholinergic system was examined in the hippocampus (Nissl staining and acetylcholinesterase histochemistry) and basal forebrain (choline acetyltransferase immunohistochemistry). In the elderly group, animals that underwent nerve transection had fewer pyramidal neurons in the hippocampal CA1 and CA3 regions, fewer cholinergic fibers in the CA1 and dentate gyrus, and fewer cholinergic neurons in the medial septal nucleus and vertical limb of the diagonal band, compared with sham-operated animals, as well as showing impairments in learning and memory. Conversely, no significant differences in histology or behavior were observed between middle-aged group 1 or group 2 transected mice and age-matched sham-operated mice. The present findings suggest that trigeminal nerve damage in old age, but not middle age, can induce degeneration of the septal-hippocampal cholinergic system and loss of hippocampal pyramidal neurons, and ultimately impair learning ability. Our results highlight the importance of active treatment of trigeminal nerve damage in elderly patients and those with Alzheimer’s disease, and

  15. An Energy-Efficient and Scalable Deep Learning/Inference Processor With Tetra-Parallel MIMD Architecture for Big Data Applications.

    Science.gov (United States)

    Park, Seong-Wook; Park, Junyoung; Bong, Kyeongryeol; Shin, Dongjoo; Lee, Jinmook; Choi, Sungpill; Yoo, Hoi-Jun

    2015-12-01

    Deep learning algorithms are widely used for various pattern recognition applications such as text recognition, object recognition and action recognition because of their best-in-class recognition accuracy compared to hand-crafted and shallow-learning-based algorithms. The long learning time caused by their complex structure, however, has so far limited their use to high-cost servers or many-core GPU platforms. On the other hand, the demand for customized pattern recognition within personal devices will grow gradually as more deep learning applications are developed. This paper presents a SoC implementation that enables deep learning applications to run on low-cost platforms such as mobile or portable devices. Different from conventional works, which have adopted massively-parallel architectures, this work adopts a task-flexible architecture and exploits multiple forms of parallelism to cover the complex functions of the convolutional deep belief network, one of the most popular deep learning/inference algorithms. In this paper, we implement the most energy-efficient deep learning and inference processor for wearable systems. The implemented 2.5 mm × 4.0 mm deep learning/inference processor is fabricated using 65 nm 8-metal CMOS technology for a battery-powered platform with real-time deep inference and deep learning operation. It consumes 185 mW average power, and 213.1 mW peak power at 200 MHz operating frequency and 1.2 V supply voltage. It achieves 411.3 GOPS peak performance and 1.93 TOPS/W energy efficiency, which is 2.07× higher than the state-of-the-art.

  16. Parallel Computing for Brain Simulation.

    Science.gov (United States)

    Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A

    2017-01-01

    The human brain is the most complex system in the known universe and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities are produced is still not understood. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have enabled the creation of the first simulation with a number of neurons similar to that of a human brain. This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital, analog and hybrid models. This review includes the current applications of these works, as well as future trends. It focuses on works that seek advanced progress in Neuroscience and on others which seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing. Copyright © Bentham Science Publishers.

  17. Striatal and Tegmental Neurons Code Critical Signals for Temporal-Difference Learning of State Value in Domestic Chicks

    Directory of Open Access Journals (Sweden)

    Chentao Wen

    2016-11-01

    Full Text Available To ensure survival, animals must update the internal representations of their environment in a trial-and-error fashion. Psychological studies of associative learning and neurophysiological analyses of dopaminergic neurons have suggested that this updating process involves the temporal-difference (TD) method in the basal ganglia network. However, the way in which the component variables of the TD method are implemented at the neuronal level is unclear. To investigate the underlying neural mechanisms, we trained domestic chicks to associate color cues with food rewards. We recorded neuronal activities from the medial striatum or tegmentum in a freely behaving condition and examined how reward omission changed neuronal firing. To compare neuronal activities with the signals assumed in the TD method, we simulated the behavioral task in the form of a finite sequence composed of discrete steps of time. The three signals assumed in the simulated task were the prediction signal, the target signal for updating, and the TD-error signal. In both the medial striatum and tegmentum, the majority of recorded neurons were categorized into three types according to their fitness for the three models, though these neurons tended to form a continuum spectrum without distinct differences in the firing rate. Specifically, two types of striatal neurons successfully mimicked the target signal and the prediction signal. A linear summation of these two types of striatal neurons was a good fit for the activity of one type of tegmental neurons mimicking the TD-error signal. The present study thus demonstrates that the striatum and tegmentum can convey the signals critically required for the TD method. Based on the theoretical and neurophysiological studies, together with tract-tracing data, we propose a novel model to explain how the convergence of signals represented in the striatum could lead to the computation of TD error in tegmental dopaminergic neurons.
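
    The three signals assumed above (the prediction, the update target, and the TD error) are easy to make concrete with a tabular TD(0) simulation. The toy task below (cue, delay, reward states) is an illustrative stand-in for the chicks' color-cue task, with arbitrary learning-rate and discount values.

```python
import numpy as np

gamma, alpha = 0.9, 0.1
V = np.zeros(3)                      # V(s): the prediction signal
rewards = [0.0, 0.0, 1.0]            # reward delivered in the final state

for _ in range(200):                 # repeated trials of the toy task
    for s in range(3):
        v_next = V[s + 1] if s < 2 else 0.0   # episode ends after state 2
        target = rewards[s] + gamma * v_next  # the update-target signal
        td_error = target - V[s]              # the TD-error signal
        V[s] += alpha * td_error              # update the prediction

print("learned state values:", np.round(V, 3))

# After learning, omitting the reward in the final state produces a large
# negative TD error, the signature probed by reward omission in the study.
omission_error = 0.0 + gamma * 0.0 - V[2]
print(f"TD error on reward omission: {omission_error:.3f}")
```

    The values converge to gamma-discounted reward predictions (1.0, 0.9, 0.81 here), and the negative error on omission parallels the firing change the authors examined when the expected food was withheld.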

  18. The mirror neuron system.

    Science.gov (United States)

    Cattaneo, Luigi; Rizzolatti, Giacomo

    2009-05-01

    Mirror neurons are a class of neurons, originally discovered in the premotor cortex of monkeys, that discharge both when individuals perform a given motor act and when they observe others perform that same motor act. Ample evidence demonstrates the existence of a cortical network with the properties of mirror neurons (mirror system) in humans. The human mirror system is involved in understanding others' actions and the intentions behind them, and it underlies mechanisms of observational learning. Herein, we will discuss the clinical implications of the mirror system.

  19. The role of mirror neurons in language acquisition and evolution.

    Science.gov (United States)

    Behme, Christina

    2014-04-01

    I argue that Cook et al.'s attack of the genetic hypothesis of mirror neurons misses its target because the authors miss the point that genetics may specify how neurons may learn, not what they learn. Paying more attention to recent work linking mirror neurons to language acquisition and evolution would strengthen Cook et al.'s arguments against a rigid genetic hypothesis.

  20. Learning-Induced Gene Expression in the Hippocampus Reveals a Role of Neuron -Astrocyte Metabolic Coupling in Long Term Memory.

    Directory of Open Access Journals (Sweden)

    Monika Tadi

    Full Text Available We examined the expression of genes related to brain energy metabolism and particularly those encoding glia (astrocyte)-specific functions in the dorsal hippocampus subsequent to learning. Context-dependent avoidance behavior was tested in mice using the step-through Inhibitory Avoidance (IA) paradigm. Animals were sacrificed 3, 9, 24, or 72 hours after training or 3 hours after retention testing. The quantitative determination of mRNA levels revealed learning-induced changes in the expression of genes thought to be involved in astrocyte-neuron metabolic coupling in a time dependent manner. Twenty four hours following IA training, an enhanced gene expression was seen, particularly for genes encoding monocarboxylate transporters 1 and 4 (MCT1, MCT4), alpha2 subunit of the Na/K-ATPase and glucose transporter type 1. To assess the functional role for one of these genes in learning, we studied MCT1 deficient mice and found that they exhibit impaired memory in the inhibitory avoidance task. Together, these observations indicate that neuron-glia metabolic coupling undergoes metabolic adaptations following learning as indicated by the change in expression of key metabolic genes.

  1. Development of rat telencephalic neurons after prenatal x-irradiation

    International Nuclear Information System (INIS)

    Norton, S.

    1979-01-01

    Telencephalic neurons of rats irradiated at day 15 of gestation with 125 R develop synaptic connections on dendrites during maturation that appear to be normal spines in Golgi-stained light microscope preparations. At six weeks of postnatal age both control and irradiated rats have spiny dendritic processes on cortical pyramidal cells and caudate Golgi type II neurons. However, when the rats are 6 months old the irradiated rats have more neurons with beaded dendritic processes that lack spines; these are likely to be degenerating neurons. The apparently normal development of the neurons followed by degeneration in the irradiated rat has a parallel in previous reports of the delayed hyperactivity which develops in rats irradiated on the fifteenth gestational day.

  2. Neuron class-specific requirements for Fragile X Mental Retardation Protein in critical period development of calcium signaling in learning and memory circuitry.

    Science.gov (United States)

    Doll, Caleb A; Broadie, Kendal

    2016-05-01

    Neural circuit optimization occurs through sensory activity-dependent mechanisms that refine synaptic connectivity and information processing during early-use developmental critical periods. Fragile X Mental Retardation Protein (FMRP), the gene product lost in Fragile X syndrome (FXS), acts as an activity sensor during critical period development, both as an RNA-binding translation regulator and channel-binding excitability regulator. Here, we employ a Drosophila FXS disease model to assay calcium signaling dynamics with a targeted transgenic GCaMP reporter during critical period development of the mushroom body (MB) learning/memory circuit. We find FMRP regulates depolarization-induced calcium signaling in a neuron-specific manner within this circuit, suppressing activity-dependent calcium transients in excitatory cholinergic MB input projection neurons and enhancing calcium signals in inhibitory GABAergic MB output neurons. Both changes are restricted to the developmental critical period and rectified at maturity. Importantly, conditional genetic (dfmr1) rescue of null mutants during the critical period corrects calcium signaling defects in both neuron classes, indicating a temporally restricted FMRP requirement. Likewise, conditional dfmr1 knockdown (RNAi) during the critical period replicates constitutive null mutant defects in both neuron classes, confirming cell-autonomous requirements for FMRP in developmental regulation of calcium signaling dynamics. Optogenetic stimulation during the critical period enhances depolarization-induced calcium signaling in both neuron classes, but this developmental change is eliminated in dfmr1 null mutants, indicating the activity-dependent regulation requires FMRP. These results show FMRP shapes neuron class-specific calcium signaling in excitatory vs. inhibitory neurons in developing learning/memory circuitry, and that FMRP mediates activity-dependent regulation of calcium signaling specifically during the early

  3. Two Parallel Pathways Assign Opposing Odor Valences during Drosophila Memory Formation

    Directory of Open Access Journals (Sweden)

    Daisuke Yamazaki

    2018-02-01

    Full Text Available During olfactory associative learning in Drosophila, odors activate specific subsets of intrinsic mushroom body (MB) neurons. Coincident exposure to either rewards or punishments is thought to activate extrinsic dopaminergic neurons, which modulate synaptic connections between odor-encoding MB neurons and MB output neurons to alter behaviors. However, here we identify two classes of intrinsic MB γ neurons based on cAMP response element (CRE)-dependent expression, γCRE-p and γCRE-n, which encode aversive and appetitive valences. γCRE-p and γCRE-n neurons act antagonistically to maintain neutral valences for neutral odors. Activation or inhibition of either cell type upsets this balance, toggling odor preferences to either positive or negative values. The mushroom body output neurons, MBON-γ5β′2a/β′2mp and MBON-γ2α′1, mediate the actions of γCRE-p and γCRE-n neurons. Our data indicate that MB neurons encode valence information, as well as odor information, and this information is integrated through a process involving MBONs to regulate learning and memory.

  4. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters, describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
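
    The BCM rule referenced above changes a weight in proportion to x·y·(y − θ), where the modification threshold θ slides with the recent average of squared postsynaptic activity; this sliding threshold is what gives the rule its bistability and stabilizes plain Hebbian growth. A single-synapse sketch with constant input follows (all constants are illustrative, and the network extension the paper introduces is omitted).

```python
# Single-synapse BCM sketch: the Hebbian change dw = x * y * (y - theta)
# is stabilized by a modification threshold theta that slides with the
# recent average of y**2. Constant input and all parameter values are
# illustrative choices, not parameters from the paper.
w, theta = 0.5, 0.0
dt, tau_theta = 0.01, 0.1
x = 1.0                                # constant presynaptic input

for _ in range(20000):
    y = w * x                          # linear postsynaptic response
    w += dt * x * y * (y - theta)      # BCM weight update
    theta += (dt / tau_theta) * (y ** 2 - theta)   # sliding threshold

print(f"final weight: {w:.3f}, final threshold: {theta:.3f}")
```

    The weight settles where y equals θ (here at 1); with patterned rather than constant input, the same sliding threshold is what produces selectivity between input patterns.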

  5. Direct Neuronal Reprogramming for Disease Modeling Studies Using Patient-Derived Neurons: What Have We Learned?

    Directory of Open Access Journals (Sweden)

    Janelle Drouin-Ouellet

    2017-09-01

    Full Text Available Direct neuronal reprogramming, by which a neuron is formed via direct conversion from a somatic cell without going through a pluripotent intermediate stage, allows for the possibility of generating patient-derived neurons. A unique feature of these so-called induced neurons (iNs is the potential to maintain aging and epigenetic signatures of the donor, which is critical given that many diseases of the CNS are age related. Here, we review the published literature on the work that has been undertaken using iNs to model human brain disorders. Furthermore, as disease-modeling studies using this direct neuronal reprogramming approach are becoming more widely adopted, it is important to assess the criteria that are used to characterize the iNs, especially in relation to the extent to which they are mature adult neurons. In particular: i what constitutes an iN cell, ii which stages of conversion offer the earliest/optimal time to assess features that are specific to neurons and/or a disorder and iii whether generating subtype-specific iNs is critical to the disease-related features that iNs express. Finally, we discuss the range of potential biomedical applications that can be explored using patient-specific models of neurological disorders with iNs, and the challenges that will need to be overcome in order to realize these applications.

  6. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: ``Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  7. Control of neuropeptide expression by parallel activity-dependent pathways in caenorhabditis elegans

    DEFF Research Database (Denmark)

    Rojo Romanos, Teresa; Petersen, Jakob Gramstrup; Pocock, Roger

    2017-01-01

    Monitoring of neuronal activity within circuits facilitates integrated responses and rapid changes in behavior. We have identified a system in Caenorhabditis elegans where neuropeptide expression is dependent on the ability of the BAG neurons to sense carbon dioxide. In C. elegans, CO2 sensing is predominantly coordinated by the BAG-expressed receptor-type guanylate cyclase GCY-9. GCY-9 binding to CO2 causes accumulation of cyclic GMP and opening of the cGMP-gated TAX-2/TAX-4 cation channels, provoking an integrated downstream cascade that enables C. elegans to avoid high CO2. Here we show that expression of flp-19::GFP is controlled in parallel to GCY-9 by the activity-dependent transcription factor CREB (CRH-1) and the cAMP-dependent protein kinase (KIN-2) signaling pathway. We therefore show that two parallel pathways regulate neuropeptide gene expression in the BAG sensory neurons: the ability

  8. Spontaneous neuronal activity as a self-organized critical phenomenon

    Science.gov (United States)

    de Arcangelis, L.; Herrmann, H. J.

    2013-01-01

    Neuronal avalanches are a novel mode of activity in neuronal networks, experimentally found in vitro and in vivo, and exhibit a robust critical behaviour. Avalanche activity can be modelled within the self-organized criticality framework, including threshold firing, refractory period and activity-dependent synaptic plasticity. The size and duration distributions confirm that the system acts in a critical state, whose scaling behaviour is very robust. Next, we discuss the temporal organization of neuronal avalanches. This is given by the alternation between states of high and low activity, named up and down states, leading to a balance between excitation and inhibition controlled by a single parameter. During these periods both the single neuron state and the network excitability level, keeping memory of past activity, are tuned by homeostatic mechanisms. Finally, we examine whether a system with no characteristic response can ever learn in a controlled and reproducible way. Learning in the model occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. Learning is a truly collective process and the learning dynamics exhibits universal features. Even complex rules can be learned provided that the plastic adaptation is sufficiently slow.
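
    The threshold firing and avalanche statistics described above are conveniently illustrated with a sandpile-style toy model: slow external drive, threshold-triggered toppling, and local redistribution with boundary dissipation. This sketch is in the self-organized-criticality spirit of the discussion, not the paper's specific neuronal model; grid size and drive count are arbitrary.

```python
import numpy as np

def avalanche_sizes(n_grains=10000, L=20, seed=5):
    """Sandpile-style toy of neuronal avalanches: each site integrates
    slow external drive, 'fires' (topples) at a threshold of 4, and sends
    one unit to each neighbour; units crossing the boundary are lost
    (dissipation). Returns the avalanche size for each drive event."""
    rng = np.random.default_rng(seed)
    z = np.zeros((L, L), dtype=int)
    sizes = []
    for _ in range(n_grains):
        i, j = rng.integers(0, L, size=2)
        z[i, j] += 1                              # slow external drive
        size = 0
        while True:
            unstable = np.argwhere(z >= 4)
            if len(unstable) == 0:
                break
            for a, b in unstable:
                z[a, b] -= 4                      # threshold firing
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < L and 0 <= nb < L:
                        z[na, nb] += 1            # local redistribution
        sizes.append(size)
    return np.array(sizes)

sizes = avalanche_sizes()
active = sizes[sizes > 0]
print(f"avalanches: {len(active)}, largest: {active.max()}, "
      f"mean size: {active.mean():.1f}")
```

    Histogramming `sizes` on log-log axes, once the grid has reached its stationary state, yields the heavy-tailed, approximately power-law size distribution characteristic of the critical state.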

  9. ANNarchy: a code generation approach to neural simulations on parallel hardware

    Science.gov (United States)

    Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.

    2015-01-01

    Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows rate-coded and spiking networks, as well as combinations of both, to be easily defined and simulated. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957
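
    For a rate-coded network, the equation-oriented description mentioned above boils down to integrating an ODE such as tau * dr/dt = -r + W @ r + I. A minimal explicit-Euler sketch of that computation follows (weights and input are arbitrary illustrative values; a simulator like the one described would generate optimized C++ for the same update).

```python
import numpy as np

tau, dt = 10.0, 0.1                    # time constant and step (ms), illustrative
W = np.array([[0.0, 0.5],
              [0.4, 0.0]])             # mutual excitation between two units
I = np.array([1.0, 0.0])               # external input drives unit 0 only
r = np.zeros(2)

# Explicit Euler integration of tau * dr/dt = -r + W @ r + I.
for _ in range(5000):                  # 500 ms of simulated time
    r += (dt / tau) * (-r + W @ r + I)

print("steady-state rates:", np.round(r, 3))
```

    The rates settle at the fixed point r = W @ r + I (here 1.25 and 0.5). Implicit or exponential schemes replace the Euler step when stiffness or accuracy demands it, which is why such simulators offer several numerical methods.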

  10. Mirror neurons, procedural learning, and the positive new experience: a developmental systems self psychology approach.

    Science.gov (United States)

    Wolf, N S; Gales, M; Shane, E; Shane, M

    2000-01-01

    In summary, we are impressed with the existence of a mirror neuron system in the prefrontal cortex that serves as part of a complex neural network, including afferent and efferent connections to the limbic system, in particular the amygdala, in addition to the premotor and motor cortex. We think it is possible to arrive at an integration that postulates the mirror neuron system and its many types of associated multimodal neurons as contributing significantly to implicit procedural learning, a process that underlies a range of complex nonconscious, unconscious, preconscious and conscious cognitive activities, from playing musical instruments to character formation and traumatic configurations. This type of brain circuitry may establish an external coherence with developmental systems self psychology which implies that positive new experience is meliorative and that the intentional revival of old-old traumatic relational configurations might enhance maladaptive procedural patterns that would lead to the opposite of the intended beneficial change. When analysts revive traumatic transference patterns for the purpose of clarification and interpretation, they may fail to appreciate that such traumatic transference patterns make interpretation ineffective because, as we have stated above, the patient lacks self-reflection under such traumatic conditions. The continued plasticity and immediacy of the mirror neuron system can contribute to positive new experiences that promote the formation of new, adaptive, implicit-procedural patterns. Perhaps this broadened repertoire in the patient of ways of understanding interrelational events through the psychoanalytic process allows the less adaptive patterns ultimately to become vestigial and the newer, more adaptive patterns to emerge as dominant. Finally, as we have stated, we believe that the intentional transferential revival of trauma (i.e., the old-old relational configuration) may not contribute to therapeutic benefit. In

  11. Separate groups of dopamine neurons innervate caudate head and tail encoding flexible and stable value memories

    Directory of Open Access Journals (Sweden)

    Hyoung F Kim

    2014-10-01

    Full Text Available Dopamine neurons are thought to be critical for reward value-based learning by modifying synaptic transmissions in the striatum. Yet, different regions of the striatum seem to guide different kinds of learning. Do dopamine neurons contribute to the regional differences of the striatum in learning? As a first step to answer this question, we examined whether the head and tail of the caudate nucleus of the monkey (Macaca mulatta) receive inputs from the same or different dopamine neurons. We chose these caudate regions because we previously showed that caudate head neurons learn values of visual objects quickly and flexibly, whereas caudate tail neurons learn object values slowly but retain them stably. Here we confirmed the functional difference by recording single neuronal activity while the monkey performed the flexible and stable value tasks, and then injected retrograde tracers in the functional domains of caudate head and tail. The projecting dopaminergic neurons were identified using tyrosine hydroxylase immunohistochemistry. We found that two groups of dopamine neurons in the substantia nigra pars compacta project largely separately to the caudate head and tail. These groups of dopamine neurons were mostly separated topographically: head-projecting neurons were located in the rostral-ventral-medial region, while tail-projecting neurons were located in the caudal-dorsal-lateral regions of the substantia nigra. Furthermore, they showed different morphological features: tail-projecting neurons were larger and less circular than head-projecting neurons. Our data raise the possibility that different groups of dopamine neurons selectively guide learning of flexible (short-term) and stable (long-term) memories of object values.

  12. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks.

    Science.gov (United States)

    Zhang, Xu; Foderaro, Greg; Henriquez, Craig; Ferrari, Silvia

    2018-03-01

    Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly, in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths only change as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.

  13. Parallelization of learning problems by artificial neural networks. Application in external radiotherapy

    International Nuclear Information System (INIS)

    Sauget, M.

    2007-12-01

    This research concerns the application of neural networks in the external radiotherapy domain. The goal is to elaborate a new system for evaluating radiation dose distributions in heterogeneous environments. The final objective of this work is to build a complete tool kit to evaluate the optimal treatment planning. My first research point is the conception of an incremental learning algorithm. The interest of my work is to combine different optimizations specialized in function interpolation and to propose a new algorithm allowing the neural network architecture to change during the learning phase. This algorithm minimises the final size of the neural network while keeping a good accuracy. The second part of my research is to parallelize the previous incremental learning algorithm. The goal of that work is to increase the speed of the learning step as well as the size of the learned dataset needed in a clinical case. For that, our incremental learning algorithm presents an original data decomposition with overlapping, together with a fault tolerance mechanism. My last research point is a fast and accurate algorithm computing the radiation dose deposit in any heterogeneous environment. At the present time, the existing solutions are not optimal: the fast solutions are not accurate and do not give an optimal treatment planning, while the accurate solutions are far too slow to be used in a clinical context. Our algorithm answers this problem by bringing both rapidity and accuracy. The concept is to use an adequately trained neural network together with a mechanism taking into account changes in the environment. The advantage of this algorithm is that it avoids the use of a complex physical code while keeping a good accuracy and reasonable computation times. (author)
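
    The "data decomposition with overlapping" mentioned above can be sketched as follows. This is an illustrative reconstruction, not the thesis code, and the function name is hypothetical.

```python
def overlapping_chunks(data, n_parts, overlap):
    """Split a dataset into contiguous chunks that share `overlap` samples
    with each neighbour, so every learner also sees its neighbours'
    border region."""
    size = len(data) // n_parts
    chunks = []
    for p in range(n_parts):
        lo = max(0, p * size - overlap)
        hi = len(data) if p == n_parts - 1 else min(len(data), (p + 1) * size + overlap)
        chunks.append(data[lo:hi])
    return chunks

parts = overlapping_chunks(list(range(100)), n_parts=4, overlap=5)
# adjacent parts share samples at their border; together they cover all 100
```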

  14. Unpacking the cognitive map: the parallel map theory of hippocampal function.

    Science.gov (United States)

    Jacobs, Lucia F; Schenk, Françoise

    2003-04-01

    In the parallel map theory, the hippocampus encodes space with 2 mapping systems. The bearing map is constructed primarily in the dentate gyrus from directional cues such as stimulus gradients. The sketch map is constructed within the hippocampus proper from positional cues. The integrated map emerges when data from the bearing and sketch maps are combined. Because the component maps work in parallel, the impairment of one can reveal residual learning by the other. Such parallel function may explain paradoxes of spatial learning, such as learning after partial hippocampal lesions, taxonomic and sex differences in spatial learning, and the function of hippocampal neurogenesis. By integrating evidence from physiology to phylogeny, the parallel map theory offers a unified explanation for hippocampal function.

  15. Information maximization explains the emergence of complex cell-like neurons

    Directory of Open Access Journals (Sweden)

    Takuma Tanaka

    2013-11-01

    Full Text Available We propose models and a method to qualitatively explain the receptive field properties of complex cells in the primary visual cortex. We apply a learning method based on the information maximization principle in a feedforward network, which comprises an input layer of image patches, simple cell-like first-output-layer neurons, and second-output-layer neurons (Model 1). The information maximization results in the emergence of the complex cell-like receptive field properties in the second-output-layer neurons. After learning, second-output-layer neurons receive connection weights having the same size from two first-output-layer neurons with sign-inverted receptive fields. The second-output-layer neurons replicate the phase invariance and iso-orientation suppression. Furthermore, on the basis of these results, we examine a simplified model showing the emergence of complex cell-like receptive fields (Model 2). We show that after learning, the output neurons of this model exhibit iso-orientation suppression, cross-orientation facilitation, and end stopping, which are similar to those found in complex cells. These properties of model neurons suggest that complex cells in the primary visual cortex become selective to features composed of edges to increase the variability of the output.
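
    The phase invariance that emerges in the second-layer neurons is closely related to the classic energy model, in which two simple-cell-like responses in quadrature are squared and summed. A minimal sketch of that standard textbook construction (not Model 1 itself) shows how the stimulus phase drops out:

```python
import math

def simple_cell(stimulus_phase, preferred_phase):
    # linear (simple-cell-like) response to a drifting grating:
    # proportional to the cosine of the phase difference
    return math.cos(stimulus_phase - preferred_phase)

def complex_cell(stimulus_phase):
    """Energy model: square and sum two simple units 90 degrees apart.
    Since cos^2 + sin^2 = 1, the stimulus phase drops out entirely."""
    a = simple_cell(stimulus_phase, 0.0)
    b = simple_cell(stimulus_phase, math.pi / 2)
    return a * a + b * b

responses = [complex_cell(0.1 * i) for i in range(63)]
# the response is identical at every stimulus phase
```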

  16. Neuron-glia metabolic coupling and plasticity.

    Science.gov (United States)

    Magistretti, Pierre J

    2011-04-01

    The focus of the current research projects in my laboratory revolves around the question of metabolic plasticity of neuron-glia coupling. Our hypothesis is that behavioural conditions, such as for example learning or the sleep-wake cycle, in which synaptic plasticity is well documented, or during specific pathological conditions, are accompanied by changes in the regulation of energy metabolism of astrocytes. We have indeed observed that the 'metabolic profile' of astrocytes is modified during the sleep-wake cycle and during conditions mimicking neuroinflammation in the presence or absence of amyloid-β. The effect of amyloid-β on energy metabolism is dependent on its state of aggregation and on internalization of the peptide by astrocytes. Distinct patterns of metabolic activity could be observed during the learning and recall phases in a spatial learning task. Gene expression analysis in activated areas, notably the hippocampus and retrosplenial cortex, demonstrated that the expression levels of several genes implicated in astrocyte-neuron metabolic coupling are enhanced by learning. Regarding metabolic plasticity during the sleep-wake cycle, we have observed that the level of expression of a panel of selected genes, which we know are key for neuron-glia metabolic coupling, is modulated by sleep deprivation.

  17. New technologies for examining the role of neuronal ensembles in drug addiction and fear.

    Science.gov (United States)

    Cruz, Fabio C; Koya, Eisuke; Guez-Barber, Danielle H; Bossert, Jennifer M; Lupica, Carl R; Shaham, Yavin; Hope, Bruce T

    2013-11-01

    Correlational data suggest that learned associations are encoded within neuronal ensembles. However, it has been difficult to prove that neuronal ensembles mediate learned behaviours because traditional pharmacological and lesion methods, and even newer cell type-specific methods, affect both activated and non-activated neurons. In addition, previous studies on synaptic and molecular alterations induced by learning did not distinguish between behaviourally activated and non-activated neurons. Here, we describe three new approaches--Daun02 inactivation, FACS sorting of activated neurons and Fos-GFP transgenic rats--that have been used to selectively target and study activated neuronal ensembles in models of conditioned drug effects and relapse. We also describe two new tools--Fos-tTA transgenic mice and inactivation of CREB-overexpressing neurons--that have been used to study the role of neuronal ensembles in conditioned fear.

  18. Whole-Brain Mapping of Neuronal Activity in the Learned Helplessness Model of Depression.

    Science.gov (United States)

    Kim, Yongsoo; Perova, Zinaida; Mirrione, Martine M; Pradhan, Kith; Henn, Fritz A; Shea, Stephen; Osten, Pavel; Li, Bo

    2016-01-01

    Some individuals are resilient, whereas others succumb to despair in repeated stressful situations. The neurobiological mechanisms underlying such divergent behavioral responses remain unclear. Here, we employed an automated method for mapping neuronal activity in search of signatures of stress responses in the entire mouse brain. We used serial two-photon tomography to detect expression of c-FosGFP - a marker of neuronal activation - in c-fosGFP transgenic mice subjected to the learned helplessness (LH) procedure, a widely used model of stress-induced depression-like phenotype in laboratory animals. We found that mice showing "helpless" behavior had an overall brain-wide reduction in the level of neuronal activation compared with mice showing "resilient" behavior, with the exception of a few brain areas, including the locus coeruleus, that were more activated in the helpless mice. In addition, the helpless mice showed a strong trend of having higher similarity in whole-brain activity profile among individuals, suggesting that helplessness is represented by a more stereotypic brain-wide activation pattern. This latter effect was confirmed in rats subjected to the LH procedure, using 2-deoxy-2[18F]fluoro-D-glucose positron emission tomography to assess neural activity. Our findings reveal distinct brain activity markings that correlate with adaptive and maladaptive behavioral responses to stress, and provide a framework for further studies investigating the contribution of specific brain regions to maladaptive stress responses.

  19. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    OpenAIRE

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increasing complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large section of overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors. MPI_Allgather collecti...
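
    The recursive-doubling pattern named in the title can be sketched without MPI: in round d, rank r exchanges everything it currently holds with partner r XOR 2^d, so p ranks share all spike buffers in log2(p) rounds. This is an illustrative simulation of the communication pattern, not NEURON's implementation.

```python
def allgather_recursive_doubling(rank_data):
    """Simulate a recursive-doubling allgather among p = 2^k ranks."""
    p = len(rank_data)
    assert p & (p - 1) == 0, "p must be a power of two"
    held = [{r: rank_data[r]} for r in range(p)]   # rank -> buffers it holds
    d = 1
    while d < p:
        new = [dict(h) for h in held]
        for r in range(p):
            new[r].update(held[r ^ d])             # exchange with partner
        held = new
        d *= 2
    return held

result = allgather_recursive_doubling(["spikes-%d" % r for r in range(8)])
# after log2(8) = 3 rounds, every rank holds all 8 spike buffers
```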

  20. Noise and neuronal populations conspire to encode simple waveforms reliably

    Science.gov (United States)

    Parnas, B. R.

    1996-01-01

    Sensory systems rely on populations of neurons to encode information transduced at the periphery into meaningful patterns of neuronal population activity. This transduction occurs in the presence of intrinsic neuronal noise. This is fortunate. The presence of noise allows more reliable encoding of the temporal structure present in the stimulus than would be possible in a noise-free environment. Simulations with a parallel model of signal processing at the auditory periphery have been used to explore the effects of noise and a neuronal population on the encoding of signal information. The results show that, for a given set of neuronal modeling parameters and stimulus amplitude, there is an optimal amount of noise for stimulus encoding with maximum fidelity.
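
    The beneficial role of noise can be demonstrated with a minimal stochastic-resonance sketch: a subthreshold sine wave produces no threshold crossings on its own, but with moderate noise the crossings cluster at the stimulus peaks. This is a generic illustration, not the parallel auditory-periphery model used in the paper.

```python
import math, random

def count_crossings(amp=0.8, threshold=1.0, noise=0.0, n=5000, seed=7):
    """Pass a subthreshold sine plus Gaussian noise through a hard
    threshold; count crossings near the peaks and near the troughs."""
    random.seed(seed)
    near_peak = near_trough = 0
    for i in range(n):
        t = 2.0 * math.pi * i / 50.0
        if amp * math.sin(t) + random.gauss(0.0, noise) >= threshold:
            if math.sin(t) > 0.5:
                near_peak += 1
            elif math.sin(t) < -0.5:
                near_trough += 1
    return near_peak, near_trough

silent = count_crossings(noise=0.0)   # no noise: the signal never crosses
noisy = count_crossings(noise=0.3)    # crossings now track the stimulus peaks
```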

  1. Noradrenergic control of gene expression and long-term neuronal adaptation evoked by learned vocalizations in songbirds.

    Directory of Open Access Journals (Sweden)

    Tarciso A F Velho

    Full Text Available Norepinephrine (NE is thought to play important roles in the consolidation and retrieval of long-term memories, but its role in the processing and memorization of complex acoustic signals used for vocal communication has yet to be determined. We have used a combination of gene expression analysis, electrophysiological recordings and pharmacological manipulations in zebra finches to examine the role of noradrenergic transmission in the brain's response to birdsong, a learned vocal behavior that shares important features with human speech. We show that noradrenergic transmission is required for both the expression of activity-dependent genes and the long-term maintenance of stimulus-specific electrophysiological adaptation that are induced in central auditory neurons by stimulation with birdsong. Specifically, we show that the caudomedial nidopallium (NCM, an area directly involved in the auditory processing and memorization of birdsong, receives strong noradrenergic innervation. Song-responsive neurons in this area express α-adrenergic receptors and are in close proximity to noradrenergic terminals. We further show that local α-adrenergic antagonism interferes with song-induced gene expression, without affecting spontaneous or evoked electrophysiological activity, thus dissociating the molecular and electrophysiological responses to song. Moreover, α-adrenergic antagonism disrupts the maintenance but not the acquisition of the adapted physiological state. We suggest that the noradrenergic system regulates long-term changes in song-responsive neurons by modulating the gene expression response that is associated with the electrophysiological activation triggered by song. We also suggest that this mechanism may be an important contributor to long-term auditory memories of learned vocalizations.

  2. Learning by statistical cooperation of self-interested neuron-like computing elements.

    Science.gov (United States)

    Barto, A G

    1985-01-01

    Since the usual approaches to cooperative computation in networks of neuron-like computing elements do not assume that network components have any "preferences", they do not make substantive contact with game theoretic concepts, despite their use of some of the same terminology. In the approach presented here, however, each network component, or adaptive element, is a self-interested agent that prefers some inputs over others and "works" toward obtaining the most highly preferred inputs. Here we describe an adaptive element that is robust enough to learn to cooperate with other elements like itself in order to further its self-interests. It is argued that some of the longstanding problems concerning adaptation and learning by networks might be solvable by this form of cooperativity, and computer simulation experiments are described that show how networks of self-interested components that are sufficiently robust can solve rather difficult learning problems. We then place the approach in its proper historical and theoretical perspective through comparison with a number of related algorithms. A secondary aim of this article is to suggest that beyond what is explicitly illustrated here, there is a wealth of ideas from game theory and allied disciplines such as mathematical economics that can be of use in thinking about cooperative computation in both nervous systems and man-made systems.

  3. A Simple Deep Learning Method for Neuronal Spike Sorting

    Science.gov (United States)

    Yang, Kai; Wu, Haifeng; Zeng, Yu

    2017-10-01

    Spike sorting is one of the key techniques for understanding brain activity. With the development of modern electrophysiology technology, recent multi-electrode technologies have been able to record the activity of thousands of neuronal spikes simultaneously. Spike sorting at this scale increases the computational complexity of conventional sorting algorithms. In this paper, we focus on how to reduce this complexity, and introduce a deep learning algorithm, the principal component analysis network (PCANet), to spike sorting. The introduced method starts from a conventional model and establishes a Toeplitz matrix. From the column vectors of the matrix, we train a PCANet, from which eigenvector features of the spikes are extracted. Finally, a support vector machine (SVM) is used to sort the spikes. In experiments, we choose two groups of simulated data from publicly available databases and compare the introduced method with conventional methods. The results indicate that the introduced method indeed has lower complexity with the same sorting errors as the conventional methods.
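
    The overall pipeline (windowed spike waveforms, a learned projection, then a classifier) can be sketched with stdlib Python only. Here the leading principal component is found by power iteration and a simple sign split stands in for the SVM; the spike shapes are made up, and this is an illustrative reconstruction rather than the paper's PCANet.

```python
import random

def leading_pc(X, iters=200):
    """Leading principal component of the rows of X via power iteration."""
    n, d = len(X), len(X[0])
    mean = [sum(col) / n for col in zip(*X)]
    C = [[x - m for x, m in zip(row, mean)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        cv = [sum(row[j] * v[j] for j in range(d)) for row in C]        # C v
        w = [sum(C[i][j] * cv[i] for i in range(n)) for j in range(d)]  # C^T C v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

random.seed(0)
shape_a = [0, 2, 5, 2, 0, -1, 0, 0]   # hypothetical waveform of unit A
shape_b = [0, 1, 2, 4, 4, 1, 0, 0]    # hypothetical waveform of unit B
spikes, labels = [], []
for _ in range(40):
    src = random.choice([0, 1])
    base = shape_a if src == 0 else shape_b
    spikes.append([x + random.gauss(0.0, 0.3) for x in base])
    labels.append(src)

mean, pc = leading_pc(spikes)
scores = [sum((s[j] - mean[j]) * pc[j] for j in range(8)) for s in spikes]
pred = [int(sc > 0) for sc in scores]   # the 1-D projection separates the units
```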

  4. Dopaminergic neurons encode a distributed, asymmetric representation of temperature in Drosophila.

    Science.gov (United States)

    Tomchik, Seth M

    2013-01-30

    Dopaminergic circuits modulate a wide variety of innate and learned behaviors in animals, including olfactory associative learning, arousal, and temperature-preference behavior. It is not known whether distinct or overlapping sets of dopaminergic neurons modulate these behaviors. Here, I have functionally characterized the dopaminergic circuits innervating the Drosophila mushroom body with in vivo calcium imaging and conditional silencing of genetically defined subsets of neurons. Distinct subsets of PPL1 dopaminergic neurons innervating the vertical lobes of the mushroom body responded to decreases in temperature, but not increases, with rapidly adapting bursts of activity. PAM neurons innervating the horizontal lobes did not respond to temperature shifts. Ablation of the antennae and maxillary palps reduced, but did not eliminate, the responses. Genetic silencing of dopaminergic neurons innervating the vertical mushroom body lobes substantially reduced behavioral cold avoidance, but silencing smaller subsets of these neurons had no effect. These data demonstrate that overlapping dopaminergic circuits encode a broadly distributed, asymmetric representation of temperature that overlays regions implicated previously in learning, memory, and forgetting. Thus, diverse behaviors engage overlapping sets of dopaminergic neurons that encode multimodal stimuli and innervate a single anatomical target, the mushroom body.

  5. Fitting neuron models to spike trains

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2011-02-01

    Full Text Available Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
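
    The fitting idea can be reduced to a toy: simulate a candidate model, compare its spike train with the recording, and search over parameters. A minimal grid-search sketch with a leaky integrate-and-fire neuron follows; it is illustrative only, since the toolbox described above fits arbitrary models with far better optimizers.

```python
def lif_spike_times(current, threshold, tau=10.0, dt=0.1, n=2000):
    """Leaky integrate-and-fire response (explicit Euler) to a constant
    injected current; returns the spike time indices."""
    v, times = 0.0, []
    for i in range(n):
        v += dt * (-v + current) / tau
        if v >= threshold:
            times.append(i)
            v = 0.0
    return times

def fit_threshold(target_times, current=1.5):
    """Grid search for the threshold whose simulated spike train best
    matches the target (here: closest spike count)."""
    best, best_err = None, None
    for k in range(5, 15):
        th = k / 10.0
        err = abs(len(lif_spike_times(current, th)) - len(target_times))
        if best_err is None or err < best_err:
            best, best_err = th, err
    return best

target = lif_spike_times(current=1.5, threshold=0.9)   # the 'recorded' train
fitted = fit_threshold(target)
# the grid search recovers threshold = 0.9
```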

  6. Mirror neurons: From origin to function

    OpenAIRE

    Cook, R; Bird, G; Catmur, C; Press, C; Heyes, C

    2014-01-01

    This article argues that mirror neurons originate in sensorimotor associative learning and therefore a new approach is needed to investigate their functions. Mirror neurons were discovered about 20 years ago in the monkey brain, and there is now evidence that they are also present in the human brain. The intriguing feature of many mirror neurons is that they fire not only when the animal is performing an action, such as grasping an object using a power grip, but also when the animal passively...

  7. Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications.

    Science.gov (United States)

    D'Angelo, Gianni; Rampone, Salvatore

    2014-01-01

    The huge quantity of data produced in Biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed in an iterative way by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), that form a probability distribution, allowing the selection of the term literals. Its great versatility makes U-BRAIN applicable in many of the fields in which there are data to be analyzed. However, the memory and the execution time required are of order O(n^3) and O(n^5), respectively, and so the algorithm is unaffordable for huge data sets. We find mathematical and programming solutions able to lead us towards the implementation of the U-BRAIN algorithm on parallel computers. First we give a Dynamic Programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are of great size we are forced to use mass memory, and depending on where the data are actually stored, the access times can be quite different.
According to the evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the costs of
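
    The per-term step, building one conjunction that a positive instance satisfies while every negative instance violates, can be sketched as a greedy literal selection. This is a much-simplified reconstruction: it ignores missing bits, and the relevance-based ranking is replaced by a simple count of excluded negatives.

```python
def greedy_term(positives, negatives):
    """Build one conjunctive term, as a list of (index, value) literals,
    that the first positive instance satisfies and every negative violates.
    Assumes no negative instance equals the seed positive."""
    seed = positives[0]
    term, remaining = [], list(negatives)
    while remaining:
        best_lit, best_excluded = None, []
        for i, bit in enumerate(seed):
            if (i, bit) in term:
                continue
            excluded = [x for x in remaining if x[i] != bit]
            if len(excluded) > len(best_excluded):
                best_lit, best_excluded = (i, bit), excluded
        if best_lit is None:
            raise ValueError("inconsistent data: a negative equals the seed")
        term.append(best_lit)
        remaining = [x for x in remaining if x not in best_excluded]
    return term

def satisfies(instance, term):
    return all(instance[i] == v for i, v in term)

# XOR-style data over two bits: positives 01 and 10, negatives 00 and 11
pos, neg = [(0, 1), (1, 0)], [(0, 0), (1, 1)]
term = greedy_term(pos, neg)   # one DNF term covering the seed positive
```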

  8. Whole-brain mapping of neuronal activity in the learned helplessness model of depression

    Directory of Open Access Journals (Sweden)

    Yongsoo Kim

    2016-02-01

    Full Text Available Some individuals are resilient, whereas others succumb to despair in repeated stressful situations. The neurobiological mechanisms underlying such divergent behavioral responses remain unclear. Here, we employed an automated method for mapping neuronal activity in search of signatures of stress responses in the entire mouse brain. We used serial two-photon tomography to detect expression of c-FosGFP – a marker of neuronal activation – in c-fosGFP transgenic mice subjected to the learned helplessness (LH) procedure, a widely used model of stress-induced depression-like phenotype in laboratory animals. We found that mice showing helpless behavior had an overall brain-wide reduction in the level of neuronal activation compared with mice showing resilient behavior, with the exception of a few brain areas, including the locus coeruleus, that were more activated in the helpless mice. In addition, the helpless mice showed a strong trend of having higher similarity in whole brain activity profile among individuals, suggesting that helplessness is represented by a more stereotypic brain-wide activation pattern. This latter effect was confirmed in rats subjected to the LH procedure, using 2-deoxy-2[18F]fluoro-D-glucose positron emission tomography to assess neural activity. Our findings reveal distinct brain activity markings that correlate with adaptive and maladaptive behavioral responses to stress, and provide a framework for further studies investigating the contribution of specific brain regions to maladaptive stress responses.

  9. Machine learning and parallelism in the reconstruction of LHCb and its upgrade

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00260810

    2016-01-01

    The LHCb detector at the LHC is a general purpose detector in the forward region with a focus on reconstructing decays of c- and b-hadrons. For Run II of the LHC, a new trigger strategy with a real-time reconstruction, alignment and calibration was employed. This was made possible by implementing an offline-like track reconstruction in the high level trigger. However, the ever increasing need for a higher throughput and the move to parallelism in the CPU architectures in the last years necessitated the use of vectorization techniques to achieve the desired speed and a more extensive use of machine learning to veto bad events early on. This document discusses selected improvements in computationally expensive parts of the track reconstruction, like the Kalman filter, as well as an improved approach to get rid of fake tracks using fast machine learning techniques. In the last part, a short overview of the track reconstruction challenges for the upgrade of LHCb, is given. Running a fully software-based trigger, a l...
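
    The Kalman filter mentioned above alternates a predict step (the state uncertainty grows) and an update step (a gain-weighted correction toward the new measurement). A scalar sketch of that cycle follows; it is a one-dimensional toy, not the multi-dimensional track fit used in LHCb.

```python
import random

def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar constant-state Kalman filter: q is the process-noise
    variance, r the measurement-noise variance."""
    x, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows
        k = p / (p + r)          # gain: weigh measurement vs. prediction
        x = x + k * (z - x)      # update with the innovation z - x
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(3)
true_pos = 2.0
zs = [true_pos + random.gauss(0.0, 0.7) for _ in range(200)]
est = kalman_1d(zs, r=0.49)      # r matches the 0.7^2 measurement variance
# the filtered estimates track the true value far better than the raw data
```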

  10. Mirror neuron activation as a function of explicit learning: changes in mu-event-related power after learning novel responses to ideomotor compatible, partially compatible, and non-compatible stimuli.

    Science.gov (United States)

    Behmer, Lawrence P; Fournier, Lisa R

    2016-11-01

    Questions regarding the malleability of the mirror neuron system (MNS) continue to be debated. MNS activation has been reported when people observe another person performing biological goal-directed behaviors, such as grasping a cup. These findings support the importance of mapping goal-directed biological behavior onto one's motor repertoire as a means of understanding the actions of others. Still, other evidence supports the Associative Sequence Learning (ASL) model which predicts that the MNS responds to a variety of stimuli after sensorimotor learning, not simply biological behavior. MNS activity develops as a consequence of developing stimulus-response associations between a stimulus and its motor outcome. Findings from the ideomotor literature indicate that stimuli that are more ideomotor compatible with a response are accompanied by an increase in response activation compared to less compatible stimuli; however, non-compatible stimuli robustly activate a constituent response after sensorimotor learning. Here, we measured changes in the mu-rhythm, an EEG marker thought to index MNS activity, predicting that stimuli that differ along dimensions of ideomotor compatibility should show changes in mirror neuron activation as participants learn the respective stimulus-response associations. We observed robust mu-suppression for ideomotor-compatible hand actions and partially compatible dot animations prior to learning; however, compatible stimuli showed greater mu-suppression than partially or non-compatible stimuli after explicit learning. Additionally, non-compatible abstract stimuli exceeded baseline only after participants explicitly learned the motor responses associated with the stimuli. We conclude that the empirical differences between the biological and ASL accounts of the MNS can be explained by Ideomotor Theory. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  11. Changes in Appetitive Associative Strength Modulates Nucleus Accumbens, But Not Orbitofrontal Cortex Neuronal Ensemble Excitability.

    Science.gov (United States)

    Ziminski, Joseph J; Hessler, Sabine; Margetts-Smith, Gabriella; Sieburg, Meike C; Crombag, Hans S; Koya, Eisuke

    2017-03-22

    Cues that predict the availability of food rewards influence motivational states and elicit food-seeking behaviors. If a cue no longer predicts food availability, then animals may adapt accordingly by inhibiting food-seeking responses. Sparsely activated sets of neurons, coined "neuronal ensembles," have been shown to encode the strength of reward-cue associations. Although alterations in intrinsic excitability have been shown to underlie many learning and memory processes, little is known about these properties in cue-activated neuronal ensembles specifically. We examined the activation patterns of cue-activated orbitofrontal cortex (OFC) and nucleus accumbens (NAc) shell ensembles using wild-type and Fos-GFP mice, which express green fluorescent protein (GFP) in activated neurons, after appetitive conditioning with sucrose and extinction learning. We also investigated the neuronal excitability of recently activated, GFP+ neurons in these brain areas using whole-cell electrophysiology in brain slices. Exposure to a sucrose cue elicited activation of neurons in both the NAc shell and OFC. In the NAc shell, but not the OFC, these activated GFP+ neurons were more excitable than surrounding GFP- neurons. After extinction, the number of neurons activated in both areas was reduced and activated ensembles in neither area exhibited altered excitability. These data suggest that learning-induced alterations in the intrinsic excitability of neuronal ensembles are regulated dynamically across different brain areas. Furthermore, we show that changes in associative strength modulate the excitability profile of activated ensembles in the NAc shell. SIGNIFICANCE STATEMENT Sparsely distributed sets of neurons called "neuronal ensembles" encode learned associations about food and cues predictive of its availability. 
Widespread changes in neuronal excitability have been observed in limbic brain areas after associative learning, but little is known about the excitability changes that

  12. Neuromorphic function learning with carbon nanotube based synapses

    International Nuclear Information System (INIS)

    Gacem, Karim; Filoramo, Arianna; Derycke, Vincent; Retrouvey, Jean-Marie; Chabi, Djaafar; Zhao, Weisheng; Klein, Jacques-Olivier

    2013-01-01

    The principle of using nanoscale memory devices as artificial synapses in neuromorphic circuits is recognized as a promising way to build ground-breaking circuit architectures tolerant to defects and variability. Yet, actual experimental demonstrations of the neural network type of circuits based on non-conventional/non-CMOS memory devices and displaying function learning capabilities remain very scarce. We show here that carbon-nanotube-based memory elements can be used as artificial synapses, combined with conventional neurons and trained to perform functions through the application of a supervised learning algorithm. The same ensemble of eight devices can notably be trained multiple times to code successively any three-input linearly separable Boolean logic function despite device-to-device variability. This work thus represents one of the very few demonstrations of actual function learning with synapses based on nanoscale building blocks. The potential of such an approach for the parallel learning of multiple and more complex functions is also evaluated. (paper)
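The training described above (eight memory elements learning any three-input linearly separable Boolean function under a supervised rule) can be sketched with an ordinary perceptron in software. This is an illustrative stand-in, not the paper's nanotube circuit: the weights play the role of the synaptic device conductances, and the error-correction step plays the role of the programming pulses.

```python
import numpy as np

def train_perceptron(truth_table, lr=0.1, epochs=100):
    """Train a single perceptron (one weight per 'synapse' plus a bias)
    on a 3-input Boolean truth table via supervised error correction."""
    X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
    y = np.array(truth_table)          # 8 target outputs, one per input row
    w = np.zeros(3)
    bias = 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            out = 1 if xi @ w + bias > 0 else 0
            err = ti - out             # +1, 0 or -1: potentiate/depress/hold
            w += lr * err * xi
            bias += lr * err
    return w, bias

# AND of three inputs: linearly separable, so the rule must converge.
w, b = train_perceptron([0, 0, 0, 0, 0, 0, 0, 1])
predict = lambda x: 1 if np.dot(x, w) + b > 0 else 0
print([predict(x) for x in [(0, 0, 0), (1, 1, 0), (1, 1, 1)]])  # [0, 0, 1]
```

Retraining the same weight vector on a different separable truth table mirrors the paper's point that one ensemble of devices can successively code multiple functions.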

  13. Sensorimotor learning and the ontogeny of the mirror neuron system.

    Science.gov (United States)

    Catmur, Caroline

    2013-04-12

    Mirror neurons, which have now been found in the human and songbird as well as the macaque, respond to both the observation and the performance of the same action. It has been suggested that their matching response properties have evolved as an adaptation for action understanding; alternatively, these properties may arise through sensorimotor experience. Here I review mirror neuron response characteristics from the perspective of ontogeny; I discuss the limited evidence for mirror neurons in early development; and I describe the growing body of evidence suggesting that mirror neuron responses can be modified through experience, and that sensorimotor experience is the critical type of experience for producing mirror neuron responses. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Memristors Empower Spiking Neurons With Stochasticity

    KAUST Repository

    Al-Shedivat, Maruan

    2015-06-01

    Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning algorithms. However, such systems must have two crucial features: 1) the neurons should follow a specific behavioral model, and 2) stochastic spiking should be implemented efficiently for it to be scalable. This paper proposes a memristor-based stochastically spiking neuron that fulfills these requirements. First, the analytical model of the memristor is enhanced so it can capture the behavioral stochasticity consistent with experimentally observed phenomena. The switching behavior of the memristor model is demonstrated to be akin to the firing of the stochastic spike response neuron model, the primary building block for probabilistic algorithms in spiking neural networks. Furthermore, the paper proposes a neural soma circuit that utilizes the intrinsic nondeterminism of memristive switching for efficient spike generation. The simulations and analysis of the behavior of a single stochastic neuron and a winner-take-all network built of such neurons and trained on handwritten digits confirm that the circuit can be used for building probabilistic sampling and pattern adaptation machinery in spiking networks. The findings constitute an important step towards scalable and efficient probabilistic neuromorphic platforms. © 2011 IEEE.
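The stochastic spike response neuron the paper takes as its building block can be sketched in a few lines: instead of a hard threshold, the neuron fires in each time step with a probability set by an exponential "escape rate" of its membrane potential. The parameters and the sigmoidal escape function below are illustrative choices, not taken from the paper.

```python
import numpy as np

def stochastic_neuron_spikes(inputs, weights, theta=1.0, dt=1.0, seed=0):
    """Stochastically spiking neuron in the spirit of the spike response
    model: at each time step the neuron fires with a probability given by
    an exponential escape rate of its membrane potential, rather than
    deterministically crossing a hard threshold."""
    rng = np.random.default_rng(seed)
    spikes = []
    for x in inputs:                    # one input vector per time step
        u = float(np.dot(weights, x))   # instantaneous membrane potential
        rate = np.exp(u - theta)        # escape rate grows with u
        p = 1.0 - np.exp(-rate * dt)    # probability of a spike this step
        spikes.append(1 if rng.random() < p else 0)
    return spikes

weights = np.array([1.0, 1.0])
strong = stochastic_neuron_spikes([np.array([1.0, 1.0])] * 200, weights)
weak = stochastic_neuron_spikes([np.array([0.1, 0.1])] * 200, weights)
print(sum(strong), sum(weak))  # stronger drive yields more spikes
```

In the paper this nondeterminism comes for free from memristive switching; here it is emulated with a pseudorandom number generator.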

  15. Neuronal Nitric-Oxide Synthase Deficiency Impairs the Long-Term Memory of Olfactory Fear Learning and Increases Odor Generalization

    Science.gov (United States)

    Pavesi, Eloisa; Heldt, Scott A.; Fletcher, Max L.

    2013-01-01

    Experience-induced changes associated with odor learning are mediated by a number of signaling molecules, including nitric oxide (NO), which is predominantly synthesized by neuronal nitric oxide synthase (nNOS) in the brain. In the current study, we investigated the role of nNOS in the acquisition and retention of conditioned olfactory fear. Mice…

  16. Neuroprotection, learning and memory improvement of a standardized extract from Renshen Shouwu against neuronal injury and vascular dementia in rats with brain ischemia.

    Science.gov (United States)

    Wan, Li; Cheng, Yufang; Luo, Zhanyuan; Guo, Haibiao; Zhao, Wenjing; Gu, Quanlin; Yang, Xu; Xu, Jiangping; Bei, Weijian; Guo, Jiao

    2015-05-13

    The Renshen Shouwu capsule (RSSW) is a patented Traditional Chinese Medicine (TCM) that has been proven to improve memory and is widely used in China to treat apoplexy syndrome and memory deficits. To investigate the neuroprotective and therapeutic effects of the Renshen Shouwu standardized extract (RSSW) on ischemic brain neuronal injury and on the learning and memory impairments of Vascular Dementia (VD) induced by focal and global cerebral ischemia-reperfusion injury in rats. Using in vivo rat models of both focal ischemia/reperfusion (I/R) injury induced by middle cerebral artery occlusion (MCAO) and VD with transient global brain I/R neuronal injury induced by four-vessel occlusion (4-VO) in Sprague-Dawley (SD) rats, RSSW (50, 100, and 200 mg kg(-1) body weight) and EGb761® (80 mg kg(-1)) were administered orally for 20 days (6 days preventively + 14 days therapeutically) in 4-VO rats, and for 7 days (3 days preventively + 4 days therapeutically) in MCAO rats. Learning and memory behavioral performance was assayed using a Morris water maze test including a place navigation trial and a spatial probe trial. Brain histochemical morphology and hippocampal neuron survival were quantified by microscopic examination of paraffin brain/hippocampus slices with cresyl violet staining. MCAO ischemia/reperfusion caused infarct damage in rat brain tissue. 4-VO ischemia/reperfusion caused hippocampal neuronal lesions and learning and memory deficits in rats. Administration of RSSW (50, 100, and 200 mg/kg) or EGb761 significantly reduced the size of the lesion in the insulted brain hemisphere and improved the neurological behavior of MCAO rats. In addition, RSSW markedly reduced the increase in brain infarct volume following I/R-induced MCAO and reduced the cerebral water content in a dose-dependent way. 
Administration of RSSW also increased the pyramidal neuronal density in the hippocampus of surviving rats after transient global brain ischemia and improved the learning and memory

  17. Teaching and learning the Hodgkin-Huxley model based on software developed in NEURON's programming language hoc.

    Science.gov (United States)

    Hernández, Oscar E; Zurek, Eduardo E

    2013-05-15

    We present a software tool called SENB, which allows the geometric and biophysical neuronal properties in a simple computational model of a Hodgkin-Huxley (HH) axon to be changed. The aim of this work is to develop a didactic and easy-to-use computational tool in the NEURON simulation environment, which allows graphical visualization of both the passive and active conduction parameters and the geometric characteristics of a cylindrical axon with HH properties. The SENB software offers several advantages for teaching and learning electrophysiology. First, SENB offers ease and flexibility in determining the number of stimuli. Second, SENB allows immediate and simultaneous visualization, in the same window and time frame, of the evolution of the electrophysiological variables. Third, SENB calculates parameters such as time and space constants, stimuli frequency, cellular area and volume, sodium and potassium equilibrium potentials, and propagation velocity of the action potentials. Furthermore, it allows the user to see all this information immediately in the main window. Finally, with just one click SENB can save an image of the main window as evidence. The SENB software is didactic and versatile, and can be used to improve and facilitate the teaching and learning of the underlying mechanisms in the electrical activity of an axon using the biophysical properties of the squid giant axon.
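Several of the quantities SENB reports for an HH-style axon (time constant, space constant, sodium and potassium equilibrium potentials) follow from short textbook formulas. The sketch below uses classic squid-axon numbers at 6.3 °C as illustrative inputs; the constants are not taken from the paper.

```python
import math

R, F = 8.314, 96485.0            # gas constant (J/mol/K), Faraday (C/mol)
T = 279.45                       # 6.3 degrees C, classic squid-axon temperature

def nernst(z, c_out, c_in):
    """Nernst equilibrium potential in mV for an ion of valence z."""
    return 1000.0 * (R * T / (z * F)) * math.log(c_out / c_in)

E_Na = nernst(1, 440.0, 50.0)    # mM outside / inside -> roughly +52 mV
E_K = nernst(1, 20.0, 400.0)     # -> roughly -72 mV

# Passive cable properties of a cylindrical axon (illustrative values)
r_m, c_m = 1000.0, 1.0           # membrane: ohm*cm^2 and uF/cm^2
r_i, diam = 35.4, 500e-4         # axial resistivity (ohm*cm), diameter (cm)
tau = r_m * c_m * 1e-6           # seconds: tau = Rm * Cm
lam = math.sqrt(r_m * diam / (4.0 * r_i))  # cm: space constant

print(f"E_Na={E_Na:.1f} mV, E_K={E_K:.1f} mV, "
      f"tau={tau * 1000:.1f} ms, lambda={lam * 10:.2f} mm")
```

The same relations are what a student can check interactively against SENB's main-window readout.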

  18. View-Invariant Visuomotor Processing in Computational Mirror Neuron System for Humanoid

    Science.gov (United States)

    Dawood, Farhan; Loo, Chu Kiong

    2016-01-01

    Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant to their own body and engage in a perceptual communication with themselves. We assume that crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons in encoding the perspective from which the motor acts of others are seen have not been addressed in relation to humanoid robots. In this paper we present a computational model for development of mirror neuron system for humanoid based on the hypothesis that infants acquire MNS by sensorimotor associative learning through self-exploration capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through self-image using camera) in order to obtain the associative relationship between his own motor generated actions and his own visual body-image. In the learning process the network first forms mapping from each motor representation onto visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on DARwIn-OP humanoid robot. PMID:26998923

  20. View-Invariant Visuomotor Processing in Computational Mirror Neuron System for Humanoid.

    Directory of Open Access Journals (Sweden)

    Farhan Dawood

    Full Text Available Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant to their own body and engage in a perceptual communication with themselves. We assume that crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons in encoding the perspective from which the motor acts of others are seen have not been addressed in relation to humanoid robots. In this paper we present a computational model for development of mirror neuron system for humanoid based on the hypothesis that infants acquire MNS by sensorimotor associative learning through self-exploration capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through self-image using camera) in order to obtain the associative relationship between his own motor generated actions and his own visual body-image. In the learning process the network first forms mapping from each motor representation onto visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on DARwIn-OP humanoid robot.

  1. Learning, memory, and the role of neural network architecture.

    Directory of Open Access Journals (Sweden)

    Ann M Hermundstad

    2011-06-01

    Full Text Available The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.

  2. A natural form of learning can increase and decrease the survival of new neurons in the dentate gyrus.

    Science.gov (United States)

    Olariu, Ana; Cleaver, Kathryn M; Shore, Lauren E; Brewer, Michelle D; Cameron, Heather A

    2005-01-01

    Granule cells born in the adult dentate gyrus undergo a 4-week developmental period characterized by high susceptibility to cell death. Two forms of hippocampus-dependent learning have been shown to rescue many of the new neurons during this critical period. Here, we show that a natural form of associative learning, social transmission of food preference (STFP), can either increase or decrease the survival of young granule cells in adult rats. Increased numbers of pyknotic as well as phospho-Akt-expressing BrdU-labeled cells were seen 1 day after STFP training, indicating that training rapidly induces both cell death and active suppression of cell death in different subsets. A single day of training for STFP increased the survival of 8-day-old BrdU-labeled cells when examined 1 week later. In contrast, 2 days of training decreased the survival of BrdU-labeled cells and the density of immature neurons, identified with crmp-4. This change from increased to decreased survival could not be accounted for by the ages of the cells. Instead, we propose that training may initially increase young granule cell survival, then, if continued, cause them to die. This complex regulation of cell death could potentially serve to maintain granule cells that are actively involved in memory consolidation, while rapidly using and discarding young granule cells whose training is complete to make space for new naïve neurons. Published 2005 Wiley-Liss, Inc.

  3. The island model for parallel implementation of evolutionary algorithm of Population-Based Incremental Learning (PBIL) optimization

    International Nuclear Information System (INIS)

    Lima, Alan M.M. de; Schirru, Roberto

    2000-01-01

    Genetic algorithms are biologically motivated adaptive systems which have been used, with good results, for function optimization. The purpose of this work is to introduce a new parallelization method to be applied to the Population-Based Incremental Learning (PBIL) algorithm. PBIL combines standard genetic algorithm mechanisms with simple competitive learning and has been successfully used in combinatorial optimization problems. This algorithm was developed with a view to its application to the reload optimization of PWR nuclear reactors. Tests have been performed with combinatorial optimization problems similar to the reload problem. Results are compared to the serial PBIL ones, showing the new method's superiority and its viability as a tool for the nuclear core reload problem solution. (author)
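The core PBIL loop that the island model parallelizes is compact: sample a population from a probability vector, then pull the vector toward the best sampled individual. The sketch below is a serial, single-island illustration with a toy OneMax objective standing in for a reload-style combinatorial fitness; none of the parameter values come from the paper.

```python
import numpy as np

def pbil(fitness, n_bits, pop_size=50, lr=0.1, generations=200, seed=0):
    """Population-Based Incremental Learning: evolve a probability vector
    toward the best-sampled individual of each generation (single-island
    sketch; the paper's island model runs several such loops in parallel
    and exchanges information between them)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                 # start with unbiased bits
    best, best_fit = None, -np.inf
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fits = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argmax(fits)]
        if fits.max() > best_fit:
            best, best_fit = elite.copy(), fits.max()
        p = (1 - lr) * p + lr * elite        # competitive-learning update
    return best, best_fit

# OneMax: maximize the number of ones in the bit string.
best, fit = pbil(lambda ind: ind.sum(), n_bits=20)
print(fit)  # approaches 20 as the probability vector converges
```

An island-model variant would run several independent probability vectors and periodically migrate elite individuals between them.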

  4. Towards deep learning with segregated dendrites.

    Science.gov (United States)

    Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A

    2017-12-05

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.
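A loose sketch of the segregated-dendrite idea (not the paper's exact algorithm) can be written with a two-compartment unit: feedforward input drives a basal compartment, top-down feedback accumulates in an electrotonically separate apical compartment, and the apical "nudge" defines a local target that the basal weights chase. All dimensions, rates, and the delta-rule update below are illustrative assumptions.

```python
import numpy as np

def segregated_neuron_update(x, fb, W_basal, W_apical, lr=0.05):
    """One layer of units with segregated compartments: the mismatch
    between the feedback-nudged target and the feedforward output
    serves as a local, per-neuron error signal."""
    basal = W_basal @ x                  # feedforward drive
    soma = np.tanh(basal)                # somatic output
    apical = W_apical @ fb               # segregated feedback potential
    target = np.tanh(basal + apical)     # feedback-nudged target activity
    err = target - soma                  # local error signal
    W_basal += lr * np.outer(err, x)     # delta-rule update on basal weights
    return soma, err

rng = np.random.default_rng(1)
x = rng.normal(size=4)
x *= 2.0 / np.linalg.norm(x)             # fixed input norm, predictable steps
fb = rng.normal(size=3)                  # higher-layer feedback signal
W_b = 0.1 * rng.normal(size=(5, 4))
W_a = 0.1 * rng.normal(size=(5, 3))
_, err0 = segregated_neuron_update(x, fb, W_b, W_a, lr=0.0)  # probe only
for _ in range(500):
    _, err = segregated_neuron_update(x, fb, W_b, W_a)
print(np.abs(err0).max(), np.abs(err).max())  # mismatch shrinks with training
```

The key property on display is locality: each unit's update uses only its own compartment voltages, with no explicit backpropagated gradient.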

  5. Candidate glutamatergic neurons in the visual system of Drosophila.

    Directory of Open Access Journals (Sweden)

    Shamprasad Varija Raghu

    Full Text Available The visual system of Drosophila contains approximately 60,000 neurons that are organized in parallel, retinotopically arranged columns. A large number of these neurons have been characterized in great anatomical detail. However, studies providing direct evidence for synaptic signaling and the neurotransmitter used by individual neurons are relatively sparse. Here we present a first layout of neurons in the Drosophila visual system that likely release glutamate as their major neurotransmitter. We identified 33 different types of neurons of the lamina, medulla, lobula and lobula plate. Based on the previous Golgi-staining analysis, the identified neurons are further classified into 16 major subgroups representing lamina monopolar (L), transmedullary (Tm), transmedullary Y (TmY), Y, medulla intrinsic (Mi, Mt, Pm, Dm, Mi Am), bushy T (T), translobula plate (Tlp), lobula intrinsic (Lcn, Lt, Li), lobula plate tangential (LPTC) and lobula plate intrinsic (LPi) cell types. In addition, we found 11 cell types that were not described by the previous Golgi analysis. This classification of candidate glutamatergic neurons fosters the future neurogenetic dissection of information processing in circuits of the fly visual system.

  6. Artificial Induction of Associative Olfactory Memory by Optogenetic and Thermogenetic Activation of Olfactory Sensory Neurons and Octopaminergic Neurons in Drosophila Larvae.

    Science.gov (United States)

    Honda, Takato; Lee, Chi-Yu; Honjo, Ken; Furukubo-Tokunaga, Katsuo

    2016-01-01

    The larval brain of Drosophila melanogaster provides an excellent system for the study of the neurocircuitry mechanism of memory. Recent development of neurogenetic techniques in fruit flies enables manipulations of neuronal activities in freely behaving animals. This protocol describes detailed steps for artificial induction of olfactory associative memory in Drosophila larvae. In this protocol, the natural reward signal is substituted by thermogenetic activation of octopaminergic neurons in the brain. In parallel, the odor signal is substituted by optogenetic activation of a specific class of olfactory receptor neurons. Association of reward and odor stimuli is achieved with the concomitant application of blue light and heat that leads to activation of both sets of neurons in living transgenic larvae. Given its operational simplicity and robustness, this method could be utilized to further our knowledge on the neurocircuitry mechanism of memory in the fly brain.

  7. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    Science.gov (United States)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is bench-marked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10(exp 9) ops/sec, was interfaced directly to a three degree of freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microsec.

  8. Enhancement of synchronized activity between hippocampal CA1 neurons during initial storage of associative fear memory.

    Science.gov (United States)

    Liu, Yu-Zhang; Wang, Yao; Shen, Weida; Wang, Zhiru

    2017-08-01

    Learning and memory storage requires neuronal plasticity induced in the hippocampus and other related brain areas, and this process is thought to rely on synchronized activity in neural networks. We used paired whole-cell recording in vivo to examine the synchronized activity that was induced in hippocampal CA1 neurons by associative fear learning. We found that both membrane potential synchronization and spike synchronization of CA1 neurons could be transiently enhanced after task learning, as observed on day 1 but not day 5. On day 1 after learning, CA1 neurons showed a decrease in firing threshold and rise times of suprathreshold membrane potential changes as well as an increase in spontaneous firing rates, possibly contributing to the enhancement of spike synchronization. The transient enhancement of CA1 neuronal synchronization may play important roles in the induction of neuronal plasticity for initial storage and consolidation of associative memory. The hippocampus is critical for memory acquisition and consolidation. This function requires activity- and experience-induced neuronal plasticity. It is known that neuronal plasticity is largely dependent on synchronized activity. As has been well characterized, repetitive correlated activity of presynaptic and postsynaptic neurons can lead to long-term modifications at their synapses. Studies on network activity have also suggested that memory processing in the hippocampus may involve learning-induced changes of neuronal synchronization, as observed in vivo between hippocampal CA3 and CA1 networks as well as between the rhinal cortex and the hippocampus. However, further investigation of learning-induced synchronized activity in the hippocampus is needed for a full understanding of hippocampal memory processing. In this study, by performing paired whole-cell recording in vivo on CA1 pyramidal cells (PCs) in anaesthetized adult rats, we examined CA1 neuronal synchronization before and after associative fear

  9. Auditory stimuli elicit hippocampal neuronal responses during sleep

    Directory of Open Access Journals (Sweden)

    Ekaterina eVinnik

    2012-06-01

    Full Text Available To investigate how hippocampal neurons code behaviorally salient stimuli, we recorded from neurons in the CA1 region of hippocampus in rats while they learned to associate the presence of sound with water reward. Rats learned to alternate between two reward ports at which, in 50 percent of the trials, sound stimuli were presented followed by water reward after a 3-second delay. Sound at the water port predicted subsequent reward delivery in 100 percent of the trials and the absence of sound predicted reward omission. During this task, 40% of recorded neurons fired differently according to which of the 2 reward ports the rat was visiting. A smaller fraction of neurons demonstrated onset response to sound/nosepoke (19%) and reward delivery (24%). When the sounds were played during passive wakefulness, 8% of neurons responded with short latency onset responses; 25% of neurons responded to sounds when they were played during sleep. Based on the current findings and the results of previous experiments we propose the existence of two types of hippocampal neuronal responses to sounds: sound-onset responses with very short latency and longer-lasting sound-specific responses that are likely to be present when the animal is actively engaged in the task. During sleep the short-latency responses in hippocampus are intermingled with sustained activity which in the current experiment was detected for 1-2 seconds.

  10. Neural Plasticity: Single Neuron Models for Discrimination and Generalization and AN Experimental Ensemble Approach.

    Science.gov (United States)

    Munro, Paul Wesley

    A special form for modification of neuronal response properties is described in which the change in the synaptic state vector is parallel to the vector of afferent activity. This process is termed "parallel modification" and its theoretical and experimental implications are examined. A theoretical framework has been devised to describe the complementary functions of generalization and discrimination by single neurons. This constitutes a basis for three models each describing processes for the development of maximum selectivity (discrimination) and minimum selectivity (generalization) by neurons. Strengthening and weakening of synapses is expressed as a product of the presynaptic activity and a nonlinear modulatory function of two postsynaptic variables--namely a measure of the spatially integrated activity of the cell and a temporal integration (time-average) of that activity. Some theorems are given for low-dimensional systems and computer simulation results from more complex systems are discussed. Model neurons that achieve high selectivity mimic the development of cat visual cortex neurons in a wide variety of rearing conditions. A role for low-selectivity neurons is proposed in which they provide inhibitory input to neurons of the opposite type, thereby suppressing the common component of a pattern class and enhancing their selective properties. Such contrast-enhancing circuits are analyzed and supported by computer simulation. To enable maximum selectivity, the net inhibition to a cell must become strong enough to offset whatever excitation is produced by the non-preferred patterns. Ramifications of parallel models for certain experimental paradigms are analyzed. A methodology is outlined for testing synaptic modification hypotheses in the laboratory. A plastic projection from one neuronal population to another will attain stable equilibrium under periodic electrical stimulation of constant intensity. The perturbative effect of shifting this intensity level
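The defining property of "parallel modification" (a weight change that is a scalar modulatory function of postsynaptic activity times the afferent activity vector) can be checked numerically. The BCM-style choice of modulatory function and the running-average threshold below are illustrative assumptions, not the thesis's exact equations.

```python
import numpy as np

def parallel_modification(w, x, theta, eta=0.01):
    """One step of a parallel-modification rule: delta-w is the presynaptic
    vector x scaled by phi(y, theta), a nonlinear function of the integrated
    postsynaptic activity y and its time-averaged counterpart theta, so the
    weight change is always parallel to the afferent activity vector."""
    y = float(np.dot(w, x))             # integrated postsynaptic activity
    phi = y * (y - theta)               # BCM-like modulatory function
    dw = eta * phi * x                  # parallel (or antiparallel) to x
    theta = 0.9 * theta + 0.1 * y**2    # slide the modification threshold
    return w + dw, theta, dw

rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)
w2, theta, dw = parallel_modification(w, x, theta=1.0)
# dw is collinear with x: the cosine of the angle between them is +/-1
cos = np.dot(dw, x) / (np.linalg.norm(dw) * np.linalg.norm(x))
print(round(abs(cos), 6))  # 1.0
```

Whether phi is positive (strengthening) or negative (weakening) depends on y relative to the sliding threshold theta, which is how one rule can support both discrimination and generalization regimes.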

  11. Mirror neuron system: basic findings and clinical applications.

    Science.gov (United States)

    Iacoboni, Marco; Mazziotta, John C

    2007-09-01

    In primates, ventral premotor and rostral inferior parietal neurons fire during the execution of hand and mouth actions. Some cells (called mirror neurons) also fire when hand and mouth actions are just observed. Mirror neurons provide a simple neural mechanism for understanding the actions of others. In humans, posterior inferior frontal and rostral inferior parietal areas have mirror properties. These human areas are relevant to imitative learning and social behavior. Indeed, the socially isolating condition of autism is associated with a deficit in mirror neuron areas. Strategies inspired by mirror neuron research recently have been used in the treatment of autism and in motor rehabilitation after stroke.

  12. On the sample complexity of learning for networks of spiking neurons with nonlinear synaptic interactions.

    Science.gov (United States)

    Schmitt, Michael

    2004-09-01

We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning, we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant-depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks are quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.
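The gap between the two asymptotic regimes mentioned above can be made concrete with a toy calculation. The constants below are arbitrary assumptions used only to compare growth rates; they are not values from the paper.

```python
import math

# Illustrative comparison of the two pseudodimension growth rates: almost
# linear, O(W log W), for constant-depth networks versus quadratic, O(W^2),
# for networks of unconstrained depth. The unit constants are assumptions.

def pdim_constant_depth(W, c=1.0):
    """Almost-linear growth with the number of parameters W."""
    return c * W * math.log(W)

def pdim_general(W, c=1.0):
    """Quadratic growth with the number of parameters W."""
    return c * W * W

# At W = 1000 parameters the quadratic bound already exceeds the
# almost-linear one by a factor of W / log(W), roughly 145.
ratio = pdim_general(1000) / pdim_constant_depth(1000)
```

Since the sample complexity of learning scales with the pseudodimension, this ratio is a rough proxy for how many more training examples the unconstrained-depth architecture may require at the same parameter count.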

  13. Learning and Parallelization Boost Constraint Search

    Science.gov (United States)

    Yun, Xi

    2013-01-01

    Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…

  14. Diverse Assessment and Active Student Engagement Sustain Deep Learning: A Comparative Study of Outcomes in Two Parallel Introductory Biochemistry Courses

    Science.gov (United States)

    Bevan, Samantha J.; Chan, Cecilia W. L.; Tanner, Julian A.

    2014-01-01

    Although there is increasing evidence for a relationship between courses that emphasize student engagement and achievement of student deep learning, there is a paucity of quantitative comparative studies in a biochemistry and molecular biology context. Here, we present a pedagogical study in two contrasting parallel biochemistry introductory…

  15. Sensorimotor learning and the ontogeny of the mirror neuron system

    OpenAIRE

    Catmur, C

    2013-01-01

    Mirror neurons, which have now been found in the human and songbird as well as the macaque, respond to both the observation and the performance of the same action. It has been suggested that their matching response properties have evolved as an adaptation for action understanding; alternatively, these properties may arise through sensorimotor experience. Here I review mirror neuron response characteristics from the perspective of ontogeny; I discuss the limited evidence for mirror neurons in ...

  16. Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    Science.gov (United States)

    Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi

    2012-10-01

We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are fed to a second model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
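The two sparseness constraints described above can be sketched with a much simpler stand-in than the paper's actual rule. The winners-take-all scheme, the firing-rate penalty, and every parameter below are illustrative assumptions; the sketch only shows how population sparseness (few neurons fire per input), near-binary output, and temporal sparseness (each neuron fires rarely over time) can be enforced with purely local operations.

```python
import numpy as np

# Stand-in sketch (not the paper's rule): k-winners-take-all with a running
# firing-rate penalty. Only k of n_out neurons fire per input (population
# sparseness), outputs are binary (reliability), and neurons that fire too
# often are penalised (temporal sparseness). All values are assumptions.

rng = np.random.default_rng(1)
n_in, n_out, k = 64, 16, 2
W = rng.normal(0.0, 0.1, (n_out, n_in))   # feedforward weights
rate = np.zeros(n_out)                    # running firing-rate estimate
target, eta, tau = k / n_out, 0.05, 0.99  # target rate, learning rate, averaging

for _ in range(500):
    x = rng.random(n_in)                       # stand-in for a natural image patch
    drive = W @ x - 5.0 * (rate - target)      # penalise neurons that fire too often
    y = np.zeros(n_out)
    y[np.argsort(drive)[-k:]] = 1.0            # only the k winners fire (binary)
    W += eta * y[:, None] * (x[None, :] - W)   # local update: winners move toward x
    rate = tau * rate + (1 - tau) * y          # update temporal firing rates
```

Because the update moves only the winners' weight vectors toward the current input, it is spatially and temporally local in the same loose sense the abstract emphasises, though the paper's actual rule differs.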

  17. Mirror neurons: functions, mechanisms and models.

    Science.gov (United States)

    Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael A

    2013-04-12

Mirror neurons for manipulation fire both when the animal manipulates an object in a specific way and when it sees another animal (or the experimenter) perform an action that is more or less similar. Such neurons were originally found in macaque monkeys, in the ventral premotor cortex, area F5 and later also in the inferior parietal lobule. Recent neuroimaging data indicate that the adult human brain is endowed with a "mirror neuron system," putatively containing mirror neurons and other neurons, for matching the observation and execution of actions. Mirror neurons may serve action recognition in monkeys as well as humans, whereas their putative role in imitation and language may be realized in humans but not in monkeys. This article shows the important role of computational models in providing sufficient and causal explanations for the observed phenomena involving mirror systems and the learning processes which form them, and underlines the need for additional circuitry to extend the monkey mirror neuron circuit to sustain the posited cognitive functions attributed to the human mirror neuron system. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  18. Parallel Education and Defining the Fourth Sector.

    Science.gov (United States)

    Chessell, Diana

    1996-01-01

    Parallel to the primary, secondary, postsecondary, and adult/community education sectors is education not associated with formal programs--learning in arts and cultural sites. The emergence of cultural and educational tourism is an opportunity for adult/community education to define itself by extending lifelong learning opportunities into parallel…

  19. Democratic population decisions result in robust policy-gradient learning: a parametric study with GPU simulations.

    Directory of Open Access Journals (Sweden)

    Paul Richmond

    2011-05-01

Full Text Available High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task and, moreover, architectural limitations raise the question of whether investing effort in this direction may be worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x to 42x is provided versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search of learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated.
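The "democratic" population vector readout mentioned above can be sketched in a few lines: each neuron votes for its preferred direction, weighted by its firing rate, and the decision is the direction of the vector sum. The neuron count, cosine tuning, and noise model below are illustrative assumptions, not the paper's network.

```python
import numpy as np

# Minimal sketch of a population vector readout: each of n neurons "votes"
# independently for its preferred direction, weighted by its firing rate.
# Tuning curves and noise are assumptions for illustration only.

n = 100
preferred = np.linspace(0, 2 * np.pi, n, endpoint=False)  # preferred directions

def population_vector(rates):
    """Decode a direction as the angle of the rate-weighted vector sum."""
    vx = np.sum(rates * np.cos(preferred))
    vy = np.sum(rates * np.sin(preferred))
    return np.arctan2(vy, vx)

# Rectified cosine-tuned responses to a true direction of 1.0 rad, plus noise.
rng = np.random.default_rng(2)
true_dir = 1.0
rates = np.maximum(0.0, np.cos(preferred - true_dir)) + 0.1 * rng.random(n)
decoded = population_vector(rates)   # close to true_dir
```

Because the noise contributions of many independently tuned neurons largely cancel in the vector sum, the readout is robust, which is consistent with the abstract's finding that the "democratic" mechanism learns more reliably.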

  20. Synchrony detection and amplification by silicon neurons with STDP synapses.

    Science.gov (United States)

    Bofill-i-petit, Adria; Murray, Alan F

    2004-09-01

    Spike-timing dependent synaptic plasticity (STDP) is a form of plasticity driven by precise spike-timing differences between presynaptic and postsynaptic spikes. Thus, the learning rules underlying STDP are suitable for learning neuronal temporal phenomena such as spike-timing synchrony. It is well known that weight-independent STDP creates unstable learning processes resulting in balanced bimodal weight distributions. In this paper, we present a neuromorphic analog very large scale integration (VLSI) circuit that contains a feedforward network of silicon neurons with STDP synapses. The learning rule implemented can be tuned to have a moderate level of weight dependence. This helps stabilise the learning process and still generates binary weight distributions. From on-chip learning experiments we show that the chip can detect and amplify hierarchical spike-timing synchrony structures embedded in noisy spike trains. The weight distributions of the network emerging from learning are bimodal.
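A pair-based STDP update with a tunable degree of weight dependence, of the general kind the abstract describes, can be sketched as follows. The parameter values and the mixing exponent mu are illustrative assumptions (mu = 0 gives weight-independent, additive STDP; mu = 1 gives fully weight-dependent, multiplicative STDP; intermediate values give the "moderate level of weight dependence" mentioned above).

```python
import numpy as np

# Sketch of pair-based STDP with tunable weight dependence. All parameter
# values (amplitudes, time constant, exponent mu) are assumptions.

def stdp_dw(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0, mu=0.3):
    """Weight change for a spike pair separated by dt = t_post - t_pre (ms)."""
    if dt > 0:   # pre before post: potentiation, scaled by remaining headroom
        return a_plus * ((w_max - w) ** mu) * np.exp(-dt / tau)
    else:        # post before pre: depression, scaled by the current weight
        return -a_minus * (w ** mu) * np.exp(dt / tau)

w = 0.5
w += stdp_dw(w, dt=+5.0)   # causal pairing strengthens the synapse
w += stdp_dw(w, dt=-5.0)   # anti-causal pairing weakens it
```

With mu between 0 and 1, potentiation shrinks as a weight approaches its ceiling and depression shrinks as it approaches zero, which is the stabilising effect the abstract attributes to moderate weight dependence.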

  1. Predictive models of glucose control: roles for glucose-sensing neurones

    Science.gov (United States)

    Kosse, C.; Gonzalez, A.; Burdakov, D.

    2018-01-01

    The brain can be viewed as a sophisticated control module for stabilizing blood glucose. A review of classical behavioural evidence indicates that central circuits add predictive (feedforward/anticipatory) control to the reactive (feedback/compensatory) control by peripheral organs. The brain/cephalic control is constructed and engaged, via associative learning, by sensory cues predicting energy intake or expenditure (e.g. sight, smell, taste, sound). This allows rapidly measurable sensory information (rather than slowly generated internal feedback signals, e.g. digested nutrients) to control food selection, glucose supply for fight-or-flight responses or preparedness for digestion/absorption. Predictive control is therefore useful for preventing large glucose fluctuations. We review emerging roles in predictive control of two classes of widely projecting hypothalamic neurones, orexin/hypocretin (ORX) and melanin-concentrating hormone (MCH) cells. Evidence is cited that ORX neurones (i) are activated by sensory cues (e.g. taste, sound), (ii) drive hepatic production, and muscle uptake, of glucose, via sympathetic nerves, (iii) stimulate wakefulness and exploration via global brain projections and (iv) are glucose-inhibited. MCH neurones are (i) glucose-excited, (ii) innervate learning and reward centres to promote synaptic plasticity, learning and memory and (iii) are critical for learning associations useful for predictive control (e.g. using taste to predict nutrient value of food). This evidence is unified into a model for predictive glucose control. During associative learning, inputs from some glucose-excited neurones may promote connections between the ‘fast’ senses and reward circuits, constructing neural shortcuts for efficient action selection. In turn, glucose-inhibited neurones may engage locomotion/exploration and coordinate the required fuel supply. Feedback inhibition of the latter neurones by glucose would ensure that glucose fluxes they

  2. Predictive models of glucose control: roles for glucose-sensing neurones.

    Science.gov (United States)

    Kosse, C; Gonzalez, A; Burdakov, D

    2015-01-01

    The brain can be viewed as a sophisticated control module for stabilizing blood glucose. A review of classical behavioural evidence indicates that central circuits add predictive (feedforward/anticipatory) control to the reactive (feedback/compensatory) control by peripheral organs. The brain/cephalic control is constructed and engaged, via associative learning, by sensory cues predicting energy intake or expenditure (e.g. sight, smell, taste, sound). This allows rapidly measurable sensory information (rather than slowly generated internal feedback signals, e.g. digested nutrients) to control food selection, glucose supply for fight-or-flight responses or preparedness for digestion/absorption. Predictive control is therefore useful for preventing large glucose fluctuations. We review emerging roles in predictive control of two classes of widely projecting hypothalamic neurones, orexin/hypocretin (ORX) and melanin-concentrating hormone (MCH) cells. Evidence is cited that ORX neurones (i) are activated by sensory cues (e.g. taste, sound), (ii) drive hepatic production, and muscle uptake, of glucose, via sympathetic nerves, (iii) stimulate wakefulness and exploration via global brain projections and (iv) are glucose-inhibited. MCH neurones are (i) glucose-excited, (ii) innervate learning and reward centres to promote synaptic plasticity, learning and memory and (iii) are critical for learning associations useful for predictive control (e.g. using taste to predict nutrient value of food). This evidence is unified into a model for predictive glucose control. During associative learning, inputs from some glucose-excited neurones may promote connections between the 'fast' senses and reward circuits, constructing neural shortcuts for efficient action selection. In turn, glucose-inhibited neurones may engage locomotion/exploration and coordinate the required fuel supply. Feedback inhibition of the latter neurones by glucose would ensure that glucose fluxes they stimulate

  3. [Development of intellect, emotion, and intentions, and their neuronal systems].

    Science.gov (United States)

    Segawa, Masaya

    2008-09-01

Intellect, emotion and intentions, the major components of the human mentality, are neurologically correlated to memory and sensorimotor integration, the neuronal system consisting of the amygdala and hypothalamus, and motivation and learning, respectively. Development of these neuronal processes was evaluated by correlating the pathophysiologies of idiopathic developmental neuropsychiatric disorders with the developmental courses of sleep parameters, sleep-wake rhythm (SWR), and locomotion. The memory system and sensory pathways develop by the 9th gestational month. Habituation, or dorsal bundle extinction (DBE), develops after the 34th gestational week. In the first 4 months after birth, DBE is consolidated, and fine tuning of the primary sensory cortex and its neuronal connection to the unimodal sensory association area, along with functional lateralization of the cortex, are accomplished. After 4 months, restriction of atonia in the REM stage enables the integrative function of the brain and induces synaptogenesis of the cortex around 6 months, and locomotion in late infancy, by activating dopaminergic (DA) neurons, induces synaptogenesis of the frontal cortex. Locomotion in early infancy involves functional specialization of the cortex and, in childhood, with the development of a biphasic SWR, activation of areas of the prefrontal cortex. Development of emotions is reflected in the development of personal communication and the arousal function of the hypothalamus. The former is shown in the mother-child relationship in the first 4 months, in communication with adults and playmates in late infancy to early childhood, and in development of social relationships with sympathy by the early school age with functional maturation of the orbitofrontal cortex. The latter is demonstrated in the secretion of melatonin during night time by 4 months, in the circadian rhythm of body temperature by 8 months, and in the secretion of the growth hormone by 4-5 years with synchronization to the

  4. Towards a general theory of neural computation based on prediction by single neurons.

    Directory of Open Access Journals (Sweden)

    Christopher D Fiorillo

Full Text Available Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of
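The abstract's central idea, that membrane voltage signals the difference between current and prior information and that learning drives this prediction error toward zero for predictable input, can be sketched with a toy linear model. The fixed "current" weights, the fully predictable input, and all rates below are assumptions, not the theory's biophysical formulation.

```python
import numpy as np

# Toy sketch: a neuron's "voltage" is the current excitatory drive minus a
# learned prediction from prior inputs. Adapting the predictive weights to
# cancel the voltage (an anti-Hebbian effect, since they enter with a minus
# sign) makes the neuron respond only to surprise. All values are assumptions.

rng = np.random.default_rng(3)
n = 8
w_cur = rng.random(n)      # fixed weights on "current information" inputs
w_pri = np.zeros(n)        # predictive weights on "prior information" inputs
eta = 0.05

for _ in range(2000):
    x = rng.random(n)                 # input; here fully predictable from priors
    v = w_cur @ x - w_pri @ x         # membrane "voltage" = prediction error
    w_pri += eta * v * x              # drives future v toward zero

# After learning, the prediction cancels the drive and v stays near zero:
# the neuron is silent for predictable input and would fire only to novelty.
```

This is the least-mean-squares cancellation at the heart of predictive-coding toy models: the predictive pathway converges until only unpredicted components of the input reach the "voltage".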

  5. The origin and function of mirror neurons: the missing link.

    Science.gov (United States)

    Lingnau, Angelika; Caramazza, Alfonso

    2014-04-01

    We argue, by analogy to the neural organization of the object recognition system, that demonstration of modulation of mirror neurons by associative learning does not imply absence of genetic adaptation. Innate connectivity defines the types of processes mirror neurons can participate in while allowing for extensive local plasticity. However, the proper function of these neurons remains to be worked out.

  6. Learning-induced Dependence of Neuronal Activity in Primary Motor Cortex on Motor Task Condition.

    Science.gov (United States)

    Cai, X; Shimansky, Y; He, Jiping

    2005-01-01

A brain-computer interface (BCI) system, such as a cortically controlled robotic arm, must have the capacity to adjust its function to a specific environmental condition. We studied this capacity in non-human primates based on chronic multi-electrode recording from the primary motor cortex of a monkey during the animal's performance of a center-out 3D reaching task and adaptation to external force perturbations. The main condition-related feature of motor cortical activity observed before the onset of force perturbation was a phasic rise of activity immediately before the perturbation onset. This feature was observed during a series of perturbation trials but was absent under no perturbations. After adaptation was complete, the subject usually needed only one trial to recognize a change in the condition and switch its neuronal activity accordingly. These condition-dependent features of neuronal activity can be used by a BCI for recognizing a change in the environmental condition and making corresponding adjustments, which requires that the BCI-based control system possess such advanced properties of the neural motor control system as the capacity to learn and adapt.

  7. Carotid chemoreceptors tune breathing via multipath routing: reticular chain and loop operations supported by parallel spike train correlations.

    Science.gov (United States)

    Morris, Kendall F; Nuding, Sarah C; Segers, Lauren S; Iceman, Kimberly E; O'Connor, Russell; Dean, Jay B; Ott, Mackenzie M; Alencar, Pierina A; Shuman, Dale; Horton, Kofi-Kermit; Taylor-Clark, Thomas E; Bolser, Donald C; Lindsey, Bruce G

    2018-02-01

    We tested the hypothesis that carotid chemoreceptors tune breathing through parallel circuit paths that target distinct elements of an inspiratory neuron chain in the ventral respiratory column (VRC). Microelectrode arrays were used to monitor neuronal spike trains simultaneously in the VRC, peri-nucleus tractus solitarius (p-NTS)-medial medulla, the dorsal parafacial region of the lateral tegmental field (FTL-pF), and medullary raphe nuclei together with phrenic nerve activity during selective stimulation of carotid chemoreceptors or transient hypoxia in 19 decerebrate, neuromuscularly blocked, and artificially ventilated cats. Of 994 neurons tested, 56% had a significant change in firing rate. A total of 33,422 cell pairs were evaluated for signs of functional interaction; 63% of chemoresponsive neurons were elements of at least one pair with correlational signatures indicative of paucisynaptic relationships. We detected evidence for postinspiratory neuron inhibition of rostral VRC I-Driver (pre-Bötzinger) neurons, an interaction predicted to modulate breathing frequency, and for reciprocal excitation between chemoresponsive p-NTS neurons and more downstream VRC inspiratory neurons for control of breathing depth. Chemoresponsive pericolumnar tonic expiratory neurons, proposed to amplify inspiratory drive by disinhibition, were correlationally linked to afferent and efferent "chains" of chemoresponsive neurons extending to all monitored regions. The chains included coordinated clusters of chemoresponsive FTL-pF neurons with functional links to widespread medullary sites involved in the control of breathing. The results support long-standing concepts on brain stem network architecture and a circuit model for peripheral chemoreceptor modulation of breathing with multiple circuit loops and chains tuned by tegmental field neurons with quasi-periodic discharge patterns. NEW & NOTEWORTHY We tested the long-standing hypothesis that carotid chemoreceptors tune the
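The "correlational signatures" used above to infer paucisynaptic relationships are typically read off a cross-correlation histogram of two spike trains, where a short-latency peak or trough offset from zero suggests an excitatory or inhibitory functional interaction. The following sketch of such a histogram uses synthetic spike trains; the bin width, window, and trains are illustrative assumptions, not the study's data or analysis pipeline.

```python
import numpy as np

# Sketch of a cross-correlation histogram (CCH) between two spike trains.
# A narrow peak at a small positive lag is the kind of correlational
# signature suggestive of a paucisynaptic excitatory interaction.

def cross_correlogram(ref, target, window=50.0, bin_ms=1.0):
    """Histogram of target-spike lags relative to each reference spike (ms)."""
    edges = np.arange(-window, window + bin_ms, bin_ms)
    counts = np.zeros(len(edges) - 1)
    for t in ref:
        lags = target - t
        lags = lags[(lags >= -window) & (lags < window)]
        counts += np.histogram(lags, bins=edges)[0]
    return edges[:-1] + bin_ms / 2, counts

# Two synthetic trains: the second tends to fire ~3 ms after the first.
rng = np.random.default_rng(4)
ref = np.sort(rng.uniform(0, 10_000, 200))   # reference spike times (ms)
target = np.sort(np.concatenate([
    ref + 3.0 + rng.normal(0, 0.5, ref.size),  # driven spikes, ~3 ms latency
    rng.uniform(0, 10_000, 100),               # uncorrelated background spikes
]))
centers, counts = cross_correlogram(ref, target)
peak_lag = centers[np.argmax(counts)]          # near +3 ms
```

In practice such histograms are corrected for firing-rate covariations and tested for significance before being taken as evidence of a functional interaction; the sketch shows only the raw construction.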

  8. Synaptic neurotransmission depression in ventral tegmental dopamine neurons and cannabinoid-associated addictive learning.

    Science.gov (United States)

    Liu, Zhiqiang; Han, Jing; Jia, Lintao; Maillet, Jean-Christian; Bai, Guang; Xu, Lin; Jia, Zhengping; Zheng, Qiaohua; Zhang, Wandong; Monette, Robert; Merali, Zul; Zhu, Zhou; Wang, Wei; Ren, Wei; Zhang, Xia

    2010-12-20

Drug addiction is an association of compulsive drug use with long-term associative learning/memory. Multiple forms of learning/memory are primarily subserved by activity- or experience-dependent synaptic long-term potentiation (LTP) and long-term depression (LTD). Recent studies suggest LTP expression in locally activated glutamate synapses onto dopamine neurons (local Glu-DA synapses) of the midbrain ventral tegmental area (VTA) following a single or chronic exposure to many drugs of abuse, whereas a single exposure to cannabinoid did not significantly affect synaptic plasticity at these synapses. It is unknown whether chronic exposure to cannabis (marijuana or cannabinoids), the most commonly used illicit drug worldwide, induces LTP or LTD at these synapses. More importantly, whether such alterations in VTA synaptic plasticity causatively contribute to drug addictive behavior has not previously been addressed. Here we show in rats that chronic cannabinoid exposure activates VTA cannabinoid CB1 receptors to induce transient neurotransmission depression at VTA local Glu-DA synapses through activation of NMDA receptors and subsequent endocytosis of AMPA receptor GluR2 subunits. A GluR2-derived peptide blocks cannabinoid-induced VTA synaptic depression and conditioned place preference, i.e., learning to associate drug exposure with environmental cues. These data not only provide the first evidence, to our knowledge, that NMDA receptor-dependent synaptic depression at VTA dopamine circuitry requires GluR2 endocytosis, but also suggest an essential contribution of such synaptic depression to cannabinoid-associated addictive learning, in addition to pointing to novel pharmacological strategies for the treatment of cannabis addiction.

  9. Synaptic neurotransmission depression in ventral tegmental dopamine neurons and cannabinoid-associated addictive learning.

    Directory of Open Access Journals (Sweden)

    Zhiqiang Liu

    2010-12-01

Full Text Available Drug addiction is an association of compulsive drug use with long-term associative learning/memory. Multiple forms of learning/memory are primarily subserved by activity- or experience-dependent synaptic long-term potentiation (LTP) and long-term depression (LTD). Recent studies suggest LTP expression in locally activated glutamate synapses onto dopamine neurons (local Glu-DA synapses) of the midbrain ventral tegmental area (VTA) following a single or chronic exposure to many drugs of abuse, whereas a single exposure to cannabinoid did not significantly affect synaptic plasticity at these synapses. It is unknown whether chronic exposure to cannabis (marijuana or cannabinoids), the most commonly used illicit drug worldwide, induces LTP or LTD at these synapses. More importantly, whether such alterations in VTA synaptic plasticity causatively contribute to drug addictive behavior has not previously been addressed. Here we show in rats that chronic cannabinoid exposure activates VTA cannabinoid CB1 receptors to induce transient neurotransmission depression at VTA local Glu-DA synapses through activation of NMDA receptors and subsequent endocytosis of AMPA receptor GluR2 subunits. A GluR2-derived peptide blocks cannabinoid-induced VTA synaptic depression and conditioned place preference, i.e., learning to associate drug exposure with environmental cues. These data not only provide the first evidence, to our knowledge, that NMDA receptor-dependent synaptic depression at VTA dopamine circuitry requires GluR2 endocytosis, but also suggest an essential contribution of such synaptic depression to cannabinoid-associated addictive learning, in addition to pointing to novel pharmacological strategies for the treatment of cannabis addiction.

  10. Synaptic Neurotransmission Depression in Ventral Tegmental Dopamine Neurons and Cannabinoid-Associated Addictive Learning

    Science.gov (United States)

    Liu, Zhiqiang; Han, Jing; Jia, Lintao; Maillet, Jean-Christian; Bai, Guang; Xu, Lin; Jia, Zhengping; Zheng, Qiaohua; Zhang, Wandong; Monette, Robert; Merali, Zul; Zhu, Zhou; Wang, Wei; Ren, Wei; Zhang, Xia

    2010-01-01

Drug addiction is an association of compulsive drug use with long-term associative learning/memory. Multiple forms of learning/memory are primarily subserved by activity- or experience-dependent synaptic long-term potentiation (LTP) and long-term depression (LTD). Recent studies suggest LTP expression in locally activated glutamate synapses onto dopamine neurons (local Glu-DA synapses) of the midbrain ventral tegmental area (VTA) following a single or chronic exposure to many drugs of abuse, whereas a single exposure to cannabinoid did not significantly affect synaptic plasticity at these synapses. It is unknown whether chronic exposure to cannabis (marijuana or cannabinoids), the most commonly used illicit drug worldwide, induces LTP or LTD at these synapses. More importantly, whether such alterations in VTA synaptic plasticity causatively contribute to drug addictive behavior has not previously been addressed. Here we show in rats that chronic cannabinoid exposure activates VTA cannabinoid CB1 receptors to induce transient neurotransmission depression at VTA local Glu-DA synapses through activation of NMDA receptors and subsequent endocytosis of AMPA receptor GluR2 subunits. A GluR2-derived peptide blocks cannabinoid-induced VTA synaptic depression and conditioned place preference, i.e., learning to associate drug exposure with environmental cues. These data not only provide the first evidence, to our knowledge, that NMDA receptor-dependent synaptic depression at VTA dopamine circuitry requires GluR2 endocytosis, but also suggest an essential contribution of such synaptic depression to cannabinoid-associated addictive learning, in addition to pointing to novel pharmacological strategies for the treatment of cannabis addiction. PMID:21187978

  11. The effect of a selective neuronal nitric oxide synthase inhibitor 3-bromo 7-nitroindazole on spatial learning and memory in rats.

    Science.gov (United States)

    Gocmez, Semil Selcen; Yazir, Yusufhan; Sahin, Deniz; Karadenizli, Sabriye; Utkan, Tijen

    2015-04-01

Since the discovery of nitric oxide (NO) as a neuronal messenger, the ways in which it modulates learning and memory functions have been the subject of intense research. NO is an intercellular messenger in the central nervous system and is formed on demand through the conversion of L-arginine to L-citrulline via the enzyme nitric oxide synthase (NOS). The neuronal form of nitric oxide synthase may play an important role in a wide range of physiological and pathological conditions. Therefore, the aim of this study was to investigate the effects of chronic administration of 3-bromo 7-nitroindazole (3-Br 7-NI), a specific neuronal nitric oxide synthase (nNOS) inhibitor, on spatial learning and memory performance in rats using the Morris water maze (MWM) paradigm. Male rats received either 3-Br 7-NI (20 mg/kg/day) or saline via intraperitoneal injection for 5 days. Daily administration of the specific nNOS inhibitor 3-Br 7-NI impaired the acquisition of the MWM task. 3-Br 7-NI also impaired performance in the probe trial. The MWM training was associated with a significant increase in brain-derived neurotrophic factor (BDNF) mRNA expression in the hippocampus. BDNF mRNA expression in the hippocampus did not change after 3-Br 7-NI treatment. L-arginine significantly reversed the behavioural parameters, and the effect of 3-Br 7-NI was found to be NO-dependent. There were no differences in locomotor activity or blood pressure in 3-Br 7-NI treated rats. Our results may suggest that nNOS plays a key role in spatial memory formation in rats. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Relationship between neuronal network architecture and naming performance in temporal lobe epilepsy: A connectome based approach using machine learning.

    Science.gov (United States)

    Munsell, B C; Wu, G; Fridriksson, J; Thayer, K; Mofrad, N; Desisto, N; Shen, D; Bonilha, L

    2017-09-09

    Impaired confrontation naming is a common symptom of temporal lobe epilepsy (TLE). The neurobiological mechanisms underlying this impairment are poorly understood but may indicate a structural disorganization of broadly distributed neuronal networks that support naming ability. Importantly, naming is frequently impaired in other neurological disorders, and by contrasting the neuronal structures supporting naming in TLE with other diseases, it will become possible to elucidate the common systems supporting naming. We aimed to evaluate the neuronal networks that support naming in TLE by using a machine learning algorithm intended to predict naming performance in subjects with medication refractory TLE using only the structural brain connectome reconstructed from diffusion tensor imaging. A connectome-based prediction framework was developed using network properties from anatomically defined brain regions across the entire brain, which were used in a multi-task machine learning algorithm followed by support vector regression. Nodal eigenvector centrality, a measure of regional network integration, predicted approximately 60% of the variance in naming. The nodes with the highest regression weight were bilaterally distributed among perilimbic sub-networks involving mainly the medial and lateral temporal lobe regions. In the context of emerging evidence regarding the role of large structural networks that support language processing, our results suggest intact naming relies on the integration of sub-networks, as opposed to being dependent on isolated brain areas. In the case of TLE, these sub-networks may be disproportionately indicative of naming processes that depend on semantic integration from memory and lexical retrieval, as opposed to multi-modal perception or motor speech production. Copyright © 2017. Published by Elsevier Inc.
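
The key predictor named above, nodal eigenvector centrality, can be computed from a structural connectivity matrix by power iteration. The sketch below is a minimal NumPy illustration of that one network measure, not the authors' full multi-task learning and support vector regression pipeline; the toy adjacency matrix is purely hypothetical.

```python
import numpy as np

def eigenvector_centrality(A, tol=1e-9, max_iter=1000):
    """Eigenvector centrality of an undirected graph via power iteration.

    A: symmetric, nonnegative (n, n) adjacency/connectivity matrix.
    Returns the leading eigenvector of A, normalized to unit length.
    """
    # Shifting by the identity breaks period-2 oscillation on bipartite
    # graphs without changing the eigenvectors.
    M = A + np.eye(A.shape[0])
    c = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(max_iter):
        c_new = M @ c
        c_new /= np.linalg.norm(c_new)
        if np.linalg.norm(c_new - c) < tol:
            break
        c = c_new
    return c_new

# Toy "connectome": node 0 is a hub connected to three peripheral nodes,
# so it receives the highest centrality score.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
c = eigenvector_centrality(A)
```

In a prediction framework like the one described, one such centrality value per anatomical region would form the feature vector fed to the regression stage.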

  13. The Widrow-Hoff algorithm for McCulloch-Pitts type neurons.

    Science.gov (United States)

    Hui, S; Zak, S H

    1994-01-01

    We analyze the convergence properties of the Widrow-Hoff delta rule applied to McCulloch-Pitts type neurons. We give sufficient conditions under which the learning parameters converge and conditions under which they diverge. In particular, we analyze how the learning rate affects the convergence of the learning parameters.
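
The delta rule analyzed in this paper is compact enough to sketch directly: each weight update moves against the gradient of the squared output error for one sample. Below is a minimal NumPy illustration (the AND task, bipolar encoding, learning rate, and epoch count are illustrative choices, not taken from the paper):

```python
import numpy as np

def widrow_hoff(X, d, eta=0.05, epochs=50):
    """Train a single linear unit with the Widrow-Hoff (LMS/delta) rule.

    X: (n_samples, n_features) inputs; d: (n_samples,) desired outputs.
    Per-sample update: w <- w + eta * (d_i - w . x_i) * x_i.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, d_i in zip(X, d):
            error = d_i - w @ x_i      # linear output during learning
            w = w + eta * error * x_i
    return w

# Learn the AND function on bipolar inputs; the first column is a bias term.
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], dtype=float)
d = np.array([-1.0, -1.0, -1.0, 1.0])
w = widrow_hoff(X, d)
outputs = np.sign(X @ w)   # threshold only when reading out the decision
```

With a sufficiently small learning rate the weights settle near the least-squares solution; with too large a rate they diverge, which is the regime boundary the paper characterizes.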

  14. The HOX genes are expressed, in vivo, in human tooth germs: in vitro cAMP exposure of dental pulp cells results in parallel HOX network activation and neuronal differentiation.

    Science.gov (United States)

    D'Antò, Vincenzo; Cantile, Monica; D'Armiento, Maria; Schiavo, Giulia; Spagnuolo, Gianrico; Terracciano, Luigi; Vecchione, Raffaela; Cillo, Clemente

    2006-03-01

    Homeobox-containing genes play a crucial role in odontogenesis. After the detection of Dlx and Msx genes in overlapping domains along maxillary and mandibular processes, a homeobox odontogenic code has been proposed to explain the interaction between different homeobox genes during dental lamina patterning. No role has so far been assigned to the Hox gene network in the homeobox odontogenic code due to studies on specific Hox genes and evolutionary considerations. Despite its involvement in early patterning during embryonal development, the HOX gene network, located in the most repeat-poor regions of the human genome, controls the phenotype identity of adult eukaryotic cells. Here, according to our results, the HOX gene network appears to be active in human tooth germs between 18 and 24 weeks of development. The immunohistochemical localization of specific HOX proteins mostly concerns the epithelial tooth germ compartment. Furthermore, only a few genes of the network are active in embryonal retromolar tissues, as well as in ectomesenchymal dental pulp cells (DPCs) grown in vitro from adult human molars. Exposure of DPCs to cAMP induces the expression of three to nine HOX genes of the network, in parallel with phenotype modifications bearing traits of neuronal differentiation. Our observations suggest that: (i) by combining its component genes, the HOX gene network determines the phenotype identity of epithelial and ectomesenchymal cells interacting in the generation of the human tooth germ; (ii) cAMP treatment activates the HOX network and induces, in parallel, a neuronal-like phenotype in human primary ectomesenchymal dental pulp cells. 2005 Wiley-Liss, Inc.

  15. Massively parallel evolutionary computation on GPGPUs

    CERN Document Server

    Tsutsui, Shigeyoshi

    2013-01-01

    Evolutionary algorithms (EAs) are metaheuristics that learn from natural collective behavior and are applied to solve optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources. Therefore, there have been many attempts to develop platforms for running parallel EAs using multicore machines, massively parallel cluster machines, or grid computing environments. Recent advances in general-purpose computing on graphics processing units (GPGPU) have opened u

  16. The effects of short-term and long-term learning on the responses of lateral intraparietal neurons to visually presented objects.

    Science.gov (United States)

    Sigurdardottir, Heida M; Sheinberg, David L

    2015-07-01

    The lateral intraparietal area (LIP) is thought to play an important role in the guidance of where to look and pay attention. LIP can also respond selectively to differently shaped objects. We sought to understand to what extent short-term and long-term experience with visual orienting determines the responses of LIP to objects of different shapes. We taught monkeys to arbitrarily associate centrally presented objects of various shapes with orienting either toward or away from a preferred spatial location of a neuron. The training could last for less than a single day or for several months. We found that neural responses to objects are affected by such experience, but that the length of the learning period determines how this neural plasticity manifests. Short-term learning affects neural responses to objects, but these effects are only seen relatively late after visual onset; at this time, the responses to newly learned objects resemble those of familiar objects that share their meaning or arbitrary association. Long-term learning affects the earliest bottom-up responses to visual objects. These responses tend to be greater for objects that have been associated with looking toward, rather than away from, LIP neurons' preferred spatial locations. Responses to objects can nonetheless be distinct, although they have been similarly acted on in the past and will lead to the same orienting behavior in the future. Our results therefore indicate that a complete experience-driven override of LIP object responses may be difficult or impossible. We relate these results to behavioral work on visual attention.

  17. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Full Text Available Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
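
The core idea, searching for model parameters whose simulated spike train best matches a recording, can be illustrated without the GPU machinery. The sketch below fits a single leaky integrate-and-fire threshold to synthetic "recorded" spikes by brute-force search; all constants are illustrative, and the actual toolbox optimizes several parameters against spike-timing criteria rather than a bare spike count.

```python
import numpy as np

def lif_spike_times(I, dt=1e-4, tau=0.02, R=1e8,
                    v_th=-0.050, v_reset=-0.070, v_rest=-0.070):
    """Euler-integrated leaky integrate-and-fire neuron driven by current I (A)."""
    v, spikes = v_rest, []
    for i, I_t in enumerate(I):
        v += dt / tau * (v_rest - v + R * I_t)
        if v >= v_th:            # threshold crossing: record spike, reset
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)

dt = 1e-4
I = np.full(10000, 3.0e-10)      # 1 s of constant 0.3 nA input
target = lif_spike_times(I, dt, v_th=-0.052)   # synthetic "recording"

# Fit the threshold by minimizing the spike-count mismatch over a grid.
candidates = np.linspace(-0.060, -0.045, 31)
errors = [abs(len(lif_spike_times(I, dt, v_th=th)) - len(target))
          for th in candidates]
best_th = candidates[int(np.argmin(errors))]   # recovers the generating threshold
```

The toolbox described above generalizes exactly this loop: many candidate parameter sets are simulated in parallel on the GPU and scored against the recorded spike train.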

  18. Pattern formation and firing synchronization in networks of map neurons

    International Nuclear Information System (INIS)

    Wang Qingyun; Duan Zhisheng; Huang Lin; Chen Guanrong; Lu Qishao

    2007-01-01

    Patterns and collective phenomena such as firing synchronization are studied in networks of nonhomogeneous oscillatory neurons and mixtures of oscillatory and excitable neurons, with the dynamics of each neuron described by a two-dimensional (2D) Rulkov map neuron. It is shown that as the coupling strength is increased, typical patterns emerge spatially, which propagate through the networks in the form of beautiful target waves or parallel ones depending on the size of networks. Furthermore, we investigate the transitions of firing synchronization, characterized by the rate of firing, as the coupling strength is increased. It is found that there exists an intermediate coupling strength at which firing synchronization is minimal, irrespective of the size of networks. For further increases in the coupling strength, synchronization is enhanced. Since noise is inevitable in real neurons, we also investigate the effects of white noise on firing synchronization for different networks. For the networks of oscillatory neurons, it is shown that firing synchronization decreases when the noise level increases. For the mixed networks, firing synchronization is robust under the noise conditions considered in this paper. Results presented in this paper should prove to be valuable for understanding the properties of collective dynamics in real neuronal networks.
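
The two-dimensional Rulkov map used above takes only a few lines to iterate. A minimal single-neuron sketch (the parameter values are illustrative choices within the spiking-bursting regime, not those of the paper):

```python
import numpy as np

def rulkov_map(alpha=4.1, mu=0.001, sigma=-1.0, n_steps=5000):
    """Iterate the 2D Rulkov map neuron:
        x[n+1] = alpha / (1 + x[n]**2) + y[n]   (fast, membrane-like variable)
        y[n+1] = y[n] - mu * (x[n] - sigma)     (slow modulation variable)
    Returns the trajectory of the fast variable x.
    """
    x, y = -1.0, -3.0
    xs = np.empty(n_steps)
    for n in range(n_steps):
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
        xs[n] = x
    return xs

xs = rulkov_map()
# For alpha > 4 the trajectory alternates between a quiescent phase
# (x resting near -2.5) and chaotic spiking bursts with x rising above 0.
```

In network studies such as this one, many such maps are coupled through their fast variables, and the coupling strength is the control parameter swept to probe synchronization.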

  19. Engineering Computer Games: A Parallel Learning Opportunity for Undergraduate Engineering and Primary (K-5) Students

    Directory of Open Access Journals (Sweden)

    Mark Michael Budnik

    2011-04-01

    Full Text Available In this paper, we present how our College of Engineering is developing a growing portfolio of engineering computer games as a parallel learning opportunity for undergraduate engineering and primary (grade K-5) students. Around the world, many schools provide secondary students (grade 6-12) with opportunities to pursue pre-engineering classes. However, by the time students reach this age, many of them have already determined their educational goals and preferred careers. Our College of Engineering is developing resources to provide primary students, still in their educational formative years, with opportunities to learn more about engineering. One of these resources is a library of engineering games targeted to the primary student population. The games are designed by sophomore students in our College of Engineering. During their Introduction to Computational Techniques course, the students use the LabVIEW environment to develop the games. This software provides a wealth of design resources for the novice programmer; using it to develop the games strengthens the undergraduates

  20. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    Science.gov (United States)

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
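
The MEE cost can be stated concisely: it maximizes the "information potential," a Gaussian-kernel sum over all pairs of error samples, which is where the O(N^2) complexity the paper parallelizes comes from. A small batch-gradient sketch in NumPy (illustrative data, kernel width, and step size; the FPGA mapping itself is not shown):

```python
import numpy as np

def information_potential(e, sigma=1.0):
    """V(e) = mean Gaussian kernel over all pairs of error differences.
    Maximizing V minimizes Renyi's quadratic entropy of the errors."""
    diff = e[:, None] - e[None, :]
    return np.exp(-diff ** 2 / (2 * sigma ** 2)).mean()

def mee_train(X, d, sigma=1.0, eta=0.2, epochs=300):
    """Adapt linear filter weights by gradient ascent on the information
    potential of the errors e = d - X @ w (minimum error entropy)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(epochs):
        e = d - X @ w
        diff = e[:, None] - e[None, :]
        G = np.exp(-diff ** 2 / (2 * sigma ** 2))
        # dV/dw = (2 / (sigma^2 n^2)) * sum_i [sum_j G_ij (e_i - e_j)] x_i
        grad = 2.0 * ((G * diff).sum(axis=1) @ X) / (sigma ** 2 * n ** 2)
        w += eta * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))        # stand-in for binned neural spike counts
d = X @ np.array([1.0, -2.0])       # noiseless desired movement signal
w = mee_train(X, d)                 # errors collapse, error entropy drops
```

Note that every epoch touches all N^2 error pairs; these pairwise kernel evaluations are the independent, pipelinable computations the reconfigurable architecture exploits.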

  1. Maggot Instructor: Semi-Automated Analysis of Learning and Memory in Drosophila Larvae

    Directory of Open Access Journals (Sweden)

    Urte Tomasiunaite

    2018-06-01

    Full Text Available For several decades, Drosophila has been widely used as a suitable model organism to study the fundamental processes of associative olfactory learning and memory. More recently, this has also become true for the Drosophila larva, which has become a focus for learning and memory studies based on a number of technical advances in the field of anatomical, molecular, and neuronal analyses. Worth mentioning are the ongoing efforts to reconstruct the complete connectome of the larval brain, featuring a total of about 10,000 neurons, and the development of neurogenic tools that allow individual manipulation of each neuron. By contrast, the standardized behavioral assays that are commonly used to analyze learning and memory in Drosophila larvae exhibit no such technical development. Most commonly, a simple assay with Petri dishes and odor containers is used; in this method, the animals must be manually transferred in several steps. The behavioral approach is therefore labor-intensive and limits the capacity to conduct large-scale genetic screenings in small laboratories. To circumvent these limitations, we introduce a training device called the Maggot Instructor. This device allows automatic training of up to 10 groups of larvae in parallel. To achieve this goal, we used fully automated, computer-controlled optogenetic activation of single olfactory neurons in combination with the application of electric shocks. We showed that Drosophila larvae trained with the Maggot Instructor establish an odor-specific memory, which is independent of handling and non-associative effects. The Maggot Instructor will allow the investigation of large collections of genetically modified larvae in a short period and with minimal human resources. Therefore, the Maggot Instructor should help extensive behavioral experiments in Drosophila larvae to keep up with the current technical advancements.
In the longer term, this condition will lead to a better understanding of

  2. Neurons other than motor neurons in motor neuron disease.

    Science.gov (United States)

    Ruffoli, Riccardo; Biagioni, Francesca; Busceti, Carla L; Gaglione, Anderson; Ryskalin, Larisa; Gambardella, Stefano; Frati, Alessandro; Fornai, Francesco

    2017-11-01

    Amyotrophic lateral sclerosis (ALS) is typically defined by a loss of motor neurons in the central nervous system. Accordingly, morphological analysis for decades considered motor neurons (in the cortex, brainstem and spinal cord) as the neuronal population selectively involved in ALS. Similarly, this was considered the pathological marker to score disease severity ex vivo both in patients and experimental models. However, the concept of non-autonomous motor neuron death was used recently to indicate the need for additional cell types to produce motor neuron death in ALS. This means that motor neuron loss occurs only when they are connected with other cell types. This concept originally emphasized the need for resident glia as well as non-resident inflammatory cells. Nowadays, the additional role of neurons other than motor neurons has emerged in the scenario to induce non-autonomous motor neuron death. In fact, in ALS neurons diverse from motor neurons are involved. These cells play multiple roles in ALS: (i) they participate in the chain of events to produce motor neuron loss; (ii) they may even degenerate more than, and before, motor neurons. In the present manuscript, evidence about multi-neuronal involvement in ALS patients and experimental models is discussed. Specific sub-classes of neurons in the whole spinal cord are reported either to degenerate or to trigger neuronal degeneration, thus portraying ALS as a whole spinal cord disorder rather than a disease affecting motor neurons solely. This is associated with a novel concept in motor neuron disease which recruits abnormal mechanisms of cell-to-cell communication.

  3. The cerebellum: a neuronal learning machine?

    Science.gov (United States)

    Raymond, J. L.; Lisberger, S. G.; Mauk, M. D.

    1996-01-01

    Comparison of two seemingly quite different behaviors yields a surprisingly consistent picture of the role of the cerebellum in motor learning. Behavioral and physiological data about classical conditioning of the eyelid response and motor learning in the vestibulo-ocular reflex suggests that (i) plasticity is distributed between the cerebellar cortex and the deep cerebellar nuclei; (ii) the cerebellar cortex plays a special role in learning the timing of movement; and (iii) the cerebellar cortex guides learning in the deep nuclei, which may allow learning to be transferred from the cortex to the deep nuclei. Because many of the similarities in the data from the two systems typify general features of cerebellar organization, the cerebellar mechanisms of learning in these two systems may represent principles that apply to many motor systems.

  4. Central auditory neurons have composite receptive fields.

    Science.gov (United States)

    Kozlov, Andrei S; Gentner, Timothy Q

    2016-02-02

    High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in the central olfactory neurons in mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.
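
The two ingredients of the learning rule named above have standard closed forms: divisive normalization scales each response by pooled population activity, and sparseness can be quantified with the common Vinje-Gallant lifetime sparseness index. A small sketch of both (conventional textbook definitions, not necessarily the exact rule used in the paper's network):

```python
import numpy as np

def divisive_normalization(r, sigma=0.1):
    """Divide each unit's response by the pooled population activity;
    sigma is a small semi-saturation constant."""
    return r / (sigma + r.sum())

def lifetime_sparseness(r):
    """Vinje-Gallant sparseness index: 0 for a uniform response vector,
    1 for a one-hot (maximally sparse) vector."""
    n = r.size
    return (1 - r.mean() ** 2 / (r ** 2).mean()) / (1 - 1 / n)

responses = np.array([4.0, 1.0, 0.5, 0.5])
normalized = divisive_normalization(responses)
# Normalization preserves relative tuning while bounding the total output.
```

During unsupervised training of the kind described, a sparseness term pushes each unit toward selective (high-sparseness) responses while normalization keeps population activity bounded.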

  5. Repeated Stimulation of Cultured Networks of Rat Cortical Neurons Induces Parallel Memory Traces

    Science.gov (United States)

    le Feber, Joost; Witteveen, Tim; van Veenendaal, Tamar M.; Dijkstra, Jelle

    2015-01-01

    During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and…

  6. Homemade Buckeye-Pi: A Learning Many-Node Platform for High-Performance Parallel Computing

    Science.gov (United States)

    Amooie, M. A.; Moortgat, J.

    2017-12-01

    We report on the "Buckeye-Pi" cluster, the supercomputer developed in The Ohio State University School of Earth Sciences from 128 inexpensive Raspberry Pi (RPi) 3 Model B single-board computers. Each RPi is equipped with a fast quad-core 1.2 GHz ARMv8 64-bit processor, 1 GB of RAM, and a 32 GB microSD card for local storage. Therefore, the cluster has a total RAM of 128 GB distributed over the individual nodes and a flash capacity of 4 TB with 512 processors, while benefiting from low power consumption, easy portability, and low total cost. The cluster uses the Message Passing Interface protocol to manage the communications between nodes. These features render our platform the most powerful RPi supercomputer to date and suitable for educational applications in high-performance computing (HPC) and the handling of large datasets. In particular, we use the Buckeye-Pi to implement optimized parallel codes in our in-house simulator for subsurface media flows with the goal of achieving a massively parallelized scalable code. We present benchmarking results for the computational performance across various numbers of RPi nodes. We believe our project could inspire scientists and students to consider the proposed unconventional cluster architecture as a mainstream and feasible learning platform for challenging engineering and scientific problems.

  7. Hydrocephalus compacted cortex and hippocampus and altered their output neurons in association with spatial learning and memory deficits in rats.

    Science.gov (United States)

    Chen, Li-Jin; Wang, Yueh-Jan; Chen, Jeng-Rung; Tseng, Guo-Fang

    2017-07-01

    Hydrocephalus is a common neurological disorder in children characterized by abnormal dilation of the cerebral ventricles as a result of the impairment of cerebrospinal fluid flow or absorption. The clinical presentation of hydrocephalus varies with chronicity and often shows cognitive dysfunction. Here we used a kaolin-induction method in rats and studied the effects of hydrocephalus on the cerebral cortex and hippocampus, the two regions highly related to cognition. Hydrocephalus impaired rats' performance in the Morris water maze task. Serial three-dimensional reconstruction from sections of the whole brain, freshly frozen in situ within the skull, showed that the volumes of both structures were reduced. Morphologically, pyramidal neurons of the somatosensory cortex and hippocampus appeared distorted. Intracellular dye injection and subsequent three-dimensional reconstruction and analyses revealed that the dendritic arbors of layer III and V cortical pyramidal neurons were reduced. The total dendritic length of CA1, but not CA3, pyramidal neurons was also reduced. Dendritic spine densities on both cortical and hippocampal pyramidal neurons were decreased, consistent with our concomitant findings that the expression of both synaptophysin and postsynaptic density protein 95 was reduced. These cortical and hippocampal changes suggest reductions in excitatory connectivity, which could underlie the learning and memory deficits in hydrocephalus. © 2016 International Society of Neuropathology.

  8. Attenuated Response to Methamphetamine Sensitization and Deficits in Motor Learning and Memory after Selective Deletion of [beta]-Catenin in Dopamine Neurons

    Science.gov (United States)

    Diaz-Ruiz, Oscar; Zhang, YaJun; Shan, Lufei; Malik, Nasir; Hoffman, Alexander F.; Ladenheim, Bruce; Cadet, Jean Lud; Lupica, Carl R.; Tagliaferro, Adriana; Brusco, Alicia; Backman, Cristina M.

    2012-01-01

    In the present study, we analyzed mice with a targeted deletion of [beta]-catenin in DA neurons (DA-[beta]cat KO mice) to address the functional significance of this molecule in the shaping of synaptic responses associated with motor learning and following exposure to drugs of abuse. Relative to controls, DA-[beta]cat KO mice showed significant…

  9. Reward inference by primate prefrontal and striatal neurons.

    Science.gov (United States)

    Pan, Xiaochuan; Fan, Hongwei; Sawa, Kosuke; Tsuda, Ichiro; Tsukada, Minoru; Sakagami, Masamichi

    2014-01-22

    The brain contains multiple yet distinct systems involved in reward prediction. To understand the nature of these processes, we recorded single-unit activity from the lateral prefrontal cortex (LPFC) and the striatum in monkeys performing a reward inference task using an asymmetric reward schedule. We found that neurons both in the LPFC and in the striatum predicted reward values for stimuli that had been previously well experienced with set reward quantities in the asymmetric reward task. Importantly, these LPFC neurons could predict the reward value of a stimulus using transitive inference even when the monkeys had not yet learned the stimulus-reward association directly; whereas these striatal neurons did not show such an ability. Nevertheless, because there were two set amounts of reward (large and small), the selected striatal neurons were able to exclusively infer the reward value (e.g., large) of one novel stimulus from a pair after directly experiencing the alternative stimulus with the other reward value (e.g., small). Our results suggest that although neurons that predict reward value for old stimuli in the LPFC could also do so for new stimuli via transitive inference, those in the striatum could only predict reward for new stimuli via exclusive inference. Moreover, the striatum showed more complex functions than was surmised previously for model-free learning.

  10. Parallel changes in cortical neuron biochemistry and motor function in protein-energy malnourished adult rats.

    Science.gov (United States)

    Alaverdashvili, Mariam; Hackett, Mark J; Caine, Sally; Paterson, Phyllis G

    2017-04-01

    While protein-energy malnutrition (PEM) in the adult has been reported to induce motor abnormalities and exaggerate motor deficits caused by stroke, it is not known if alterations in mature cortical neurons contribute to the functional deficits. Therefore, we explored whether PEM in adult rats provoked changes in the biochemical profile of neurons in the forelimb and hindlimb regions of the motor cortex. Fourier transform infrared spectroscopic imaging using a synchrotron-generated light source revealed for the first time altered lipid composition in neurons and subcellular domains (cytosol and nuclei) in a cortical layer- and region-specific manner. This change, measured by the area under the curve of the δ(CH2) band, may indicate modifications in membrane fluidity. These PEM-induced biochemical changes were associated with the development of abnormalities in forelimb use and posture. The findings of this study provide a mechanism by which PEM, if not treated, could exacerbate the course of various neurological disorders and diminish treatment efficacy. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Energy-efficient neuron, synapse and STDP integrated circuits.

    Science.gov (United States)

    Cruz-Albrecht, Jose M; Yung, Michael W; Srinivasa, Narayan

    2012-06-01

    Ultra-low energy biologically-inspired neuron and synapse integrated circuits are presented. The synapse includes a spike timing dependent plasticity (STDP) learning rule circuit. These circuits have been designed, fabricated and tested using a 90 nm CMOS process. Experimental measurements demonstrate proper operation. The neuron and the synapse with STDP circuits have an energy consumption of around 0.4 pJ per spike and synaptic operation respectively.

  12. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator.

    Science.gov (United States)

    Wang, Runchun M; Thakur, Chetan S; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
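
The per-timestep work such a simulator distributes, leak integration plus fan-out of spikes through the connectivity, can be written compactly for a whole population. A NumPy sketch (a dense toy weight matrix stands in for the minicolumn/hypercolumn connectivity scheme; all constants are illustrative):

```python
import numpy as np

def lif_population_step(v, i_ext, prev_spikes, W, dt=1e-3, tau=0.02,
                        v_rest=-0.070, v_th=-0.050, v_reset=-0.070):
    """One Euler step for a population of leaky integrate-and-fire neurons.

    v: membrane potentials (n,); i_ext: external drive in V/s (n,);
    prev_spikes: boolean spike mask from the previous step; W: (n, n) weights.
    W @ prev_spikes is the fan-out operation a neuromorphic design parallelizes.
    """
    drive = i_ext + W @ prev_spikes.astype(float)
    v = v + dt * ((v_rest - v) / tau + drive)
    spikes = v >= v_th
    return np.where(spikes, v_reset, v), spikes

# Drive one neuron of a small all-to-all excitatory population.
n = 5
W = 0.4 * (np.ones((n, n)) - np.eye(n))   # illustrative coupling (V/s per spike)
v = np.full(n, -0.070)
spikes = np.zeros(n, dtype=bool)
i_ext = np.zeros(n)
i_ext[0] = 2.0                            # constant suprathreshold drive
counts = np.zeros(n)
for _ in range(2000):                     # 2 s of simulated time
    v, spikes = lif_population_step(v, i_ext, spikes, W)
    counts += spikes
```

Scaling this loop to billions of neurons is exactly what requires replacing the dense `W` with structured on-chip connectivity, as the architecture above does.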

  14. Learning tasks as a possible treatment for DNA lesions induced by oxidative stress in hippocampal neurons

    Institute of Scientific and Technical Information of China (English)

    Dragoş Crneci; Radu Silaghi-Dumitrescu

    2013-01-01

    Reactive oxygen species have been implicated in conditions ranging from cardiovascular dysfunction, arthritis, and cancer, to aging and age-related disorders. The organism developed several pathways to counteract these effects, with base excision repair being responsible for repairing one of the major base lesions (8-oxoG) in all organisms. Epidemiological evidence suggests that cognitive stimulation makes the brain more resilient to damage or degeneration. Recent studies have linked enriched environment to reduction of oxidative stress in neurons of mice with Alzheimer's disease-like disease, but given its complexity it is not clear what specific aspect of enriched environment has therapeutic effects. Studies from molecular biology have shown that the protein p300, which is a transcription co-activator required for consolidation of memories during specific learning tasks, is at the same time involved in DNA replication and repair, playing a central role in the long-patch pathway of base excision repair. Based on this evidence, we propose that learning tasks such as novel object recognition could be tested as possible methods of base excision repair facilitation, hence inducing DNA repair in hippocampal neurons. If this method proves to be effective, it could be the start for designing similar tasks for humans, as a behavioral therapeutic complement to classical drug-based therapy in treating neurodegenerative disorders. This review presents the current status of therapeutic methods used in treating neurodegenerative diseases induced by reactive oxygen species and proposes a new approach based on existing data.

  15. A cerebellar learning model of vestibulo-ocular reflex adaptation in wild-type and mutant mice.

    Science.gov (United States)

    Clopath, Claudia; Badura, Aleksandra; De Zeeuw, Chris I; Brunel, Nicolas

    2014-05-21

    Mechanisms of cerebellar motor learning are still poorly understood. The standard Marr-Albus-Ito theory posits that learning involves plasticity at the parallel fiber to Purkinje cell synapses under control of the climbing fiber input, which provides an error signal as in classical supervised learning paradigms. However, a growing body of evidence challenges this theory, in that additional sites of plasticity appear to contribute to motor adaptation. Here, we consider phase-reversal training of the vestibulo-ocular reflex (VOR), a simple form of motor learning for which a large body of experimental data is available in wild-type and mutant mice, in which the excitability of granule cells or inhibition of Purkinje cells was affected in a cell-specific fashion. We present novel electrophysiological recordings of Purkinje cell activity measured in naive wild-type mice subjected to this VOR adaptation task. We then introduce a minimal model that consists of learning at the parallel fibers to Purkinje cells with the help of the climbing fibers. Although the minimal model reproduces the behavior of the wild-type animals and is analytically tractable, it fails to reproduce the behavior of mutant mice and the electrophysiology data. Therefore, we build a detailed model involving plasticity at the parallel fiber to Purkinje cell synapses guided by climbing fibers, feedforward inhibition of Purkinje cells, and plasticity at the mossy fiber to vestibular nuclei neuron synapses. The detailed model reproduces both the behavioral and electrophysiological data of both the wild-type and mutant mice and allows for experimentally testable predictions. Copyright © 2014 the authors 0270-6474/14/347203-13$15.00/0.
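
    The climbing-fiber error signal at the heart of the Marr-Albus-Ito scheme behaves like a supervised delta rule; a toy sketch (illustrative parameters and random target, not the authors' model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: the Purkinje-cell output is a weighted sum of
    # parallel-fibre activity; the climbing fibre carries the mismatch
    # with a desired response, which drives synaptic change.
    n_fibres, lr = 50, 0.05
    w = rng.normal(scale=0.1, size=n_fibres)
    target_w = rng.normal(size=n_fibres)       # defines the desired mapping

    errors = []
    for _ in range(500):
        pf = rng.random(n_fibres)              # parallel-fibre activity pattern
        target = pf @ target_w                 # desired Purkinje response
        output = pf @ w
        cf_error = target - output             # climbing-fibre "teaching" signal
        w += lr * cf_error * pf                # supervised (delta-rule) plasticity
        errors.append(abs(cf_error))
    ```

    The error shrinks as the parallel-fibre weights adapt; the detailed model in the paper adds further plastic sites downstream of this basic loop.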

  16. New Reflections on Mirror Neuron Research, the Tower of Babel, and Intercultural Education

    Science.gov (United States)

    Westbrook, Timothy Paul

    2015-01-01

    Studies of the human mirror neuron system demonstrate how mental mimicking of one's social environment affects learning. The mirror neuron system also has implications for intercultural encounters. This article explores the common ground between the mirror neuron system and theological principles from the Tower of Babel narrative and applies them…

  17. Cytokines and cytokine networks target neurons to modulate long-term potentiation.

    Science.gov (United States)

    Prieto, G Aleph; Cotman, Carl W

    2017-04-01

    Cytokines play crucial roles in the communication between brain cells, including neurons and glia, as well as in brain-periphery interactions. In the brain, cytokines modulate long-term potentiation (LTP), a cellular correlate of memory. Whether cytokines regulate LTP by direct effects on neurons or by indirect mechanisms mediated by non-neuronal cells is poorly understood. Elucidating neuron-specific effects of cytokines has been challenging because most brain cells express cytokine receptors. Moreover, cytokines commonly increase the expression of multiple cytokines in their target cells, thus increasing the complexity of brain cytokine networks even after single-cytokine challenges. Here, we review evidence on both direct and indirectly mediated modulation of LTP by cytokines. We also describe novel approaches based on neuron- and synaptosome-enriched systems to identify cytokines able to directly modulate LTP by targeting neurons and synapses. These approaches can test multiple samples in parallel, thus allowing the study of multiple cytokines simultaneously. Hence, a cytokine-network perspective coupled with neuron-specific analysis may help delineate maps of the modulation of LTP by cytokines. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Molecular fingerprint of neuropeptide S-producing neurons in the mouse brain

    DEFF Research Database (Denmark)

    Liu, Xiaobin; Zeng, Joanne; Zhou, Anni

    2011-01-01

    Neuropeptide S (NPS) has been associated with a number of complex brain functions, including anxiety-like behaviors, arousal, sleep-wakefulness regulation, drug-seeking behaviors, and learning and memory. In order to better understand how NPS influences these functions in a neuronal network context...... of incoming neurotransmission, controlling neuronal activity of NPS-producing neurons. Stress-induced functional activation of NPS-producing neurons was detected by staining for the immediate-early gene c-fos, thus supporting earlier findings that NPS might be part of the brain stress response network....

  19. Selective elimination of intracortically projecting neurons of the rat neocortex by prenatal x-irradiation

    International Nuclear Information System (INIS)

    Jensen, K.F.

    1981-01-01

    The development of new tracing methods has suggested that there are species differences in the extent of the contribution of the different layers of the neocortex to the callosal projection. The present investigation utilized prenatal x-irradiation to selectively eliminate the late-forming neurons of the supragranular layers of the rat neocortex. The reduction in the neuronal population of the supragranular layers closely parallels the reduction in the corpus callosum. These results indicate that the primary source of callosal projection neurons is the late-forming neurons of the supragranular layers. Thus, the current results suggest that low-dose prenatal x-irradiation may be used to evaluate important developmental events in the formation of neocortical circuitry.

  20. Neuron-macrophage crosstalk in the intestine: a ‘microglia’ perspective

    Directory of Open Access Journals (Sweden)

    Simon eVerheijden

    2015-10-01

    Intestinal macrophages are strategically located in different layers of the intestine, including the mucosa, submucosa and muscularis externa, where they perform complex tasks to maintain intestinal homeostasis. As the gastrointestinal tract is continuously challenged by foreign antigens, macrophage activation should be tightly controlled to prevent chronic inflammation and tissue damage. Unraveling the precise cellular and molecular mechanisms underlying the tissue-specific control of macrophage activation is crucial to gain more insight into intestinal immune regulation. Two recent reports provide unanticipated evidence that the enteric nervous system acts as a critical regulator of macrophage function in the myenteric plexus. Both studies clearly illustrate that enteric neurons reciprocally interact with intestinal macrophages and are actively involved in shaping their phenotype. This concept has striking parallels with the central nervous system (CNS), where neuronal signals maintain microglia, the resident macrophages of the CNS, in a quiescent, anti-inflammatory state. This inevitably evokes the perception that the ENS and CNS share mechanisms of neuroimmune interaction. In line with this, intestinal macrophages, both in the muscularis externa and (sub)mucosa, express high levels of CX3CR1, a feature that was once believed to be unique to microglia. CX3CR1 is the sole receptor of fractalkine (CX3CL1), a factor mainly produced by neurons in the CNS to facilitate neuron-microglia communication. The striking parallels between resident macrophages of the brain and intestine might provide a promising new line of thought to gain more insight into the cellular and molecular mechanisms controlling macrophage activation in the gut.

  1. Sweet Taste and Nutrient Value Subdivide Rewarding Dopaminergic Neurons in Drosophila

    OpenAIRE

    Huetteroth, Wolf; Perisse, Emmanuel; Lin, Suewei; Klappenbach, Martín; Burke, Christopher; Waddell, Scott

    2015-01-01

    Dopaminergic neurons provide reward learning signals in mammals and insects. Recent work in Drosophila has demonstrated that water-reinforcing dopaminergic neurons are different to those for nutritious sugars. Here, we tested whether the sweet taste and nutrient properties of sugar reinforcement further subdivide the fly reward system. We found that dopaminergic neurons expressing the OAMB octopamine receptor specifically convey the short-term reinforcing effects of sweet taste. These dopamin...

  2. Confounding the origin and function of mirror neurons.

    Science.gov (United States)

    Rizzolatti, Giacomo

    2014-04-01

    Cook et al. argue that mirror neurons originate in sensorimotor associative learning and that their function is determined by their origin. Both these claims are hard to accept. It is here suggested that a major role in the origin of the mirror mechanism is played by top-down connections rather than by associative learning.

  3. Shifts in sensory neuron identity parallel differences in pheromone preference in the European corn borer

    Directory of Open Access Journals (Sweden)

    Fotini A Koutroumpa

    2014-10-01

    Pheromone communication relies on highly specific signals sent and received between members of the same species. However, how pheromone specificity is determined in moth olfactory circuits remains unknown. Here we provide the first glimpse into the mechanism that generates this specificity in Ostrinia nubilalis, in which a single locus causes strain-specific, diametrically opposed preferences for a 2-component pheromone blend. Previously we found pheromone preference to be correlated with the strain- and hybrid-specific relative antennal response to both pheromone components. This led to the current study, in which we detail the underlying mechanism of this differential response through chemotopic mapping of the pheromone detection circuit in the antenna. We determined that both strains and their hybrids have swapped the neuronal identity of the pheromone-sensitive neurons co-housed within a single sensillum. Furthermore, neurons that mediate behavioral antagonism surprisingly co-express up to five pheromone receptors, mirroring their concordantly broad tuning to heterospecific pheromones. This appears to be a possible evolutionary adaptation that could prevent cross-attraction to a range of heterospecific signals, while keeping the pheromone detection system to its simplest tripartite setup.

  4. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of new results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.
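
    The reduction to multiplication by a group-equivariant matrix has a familiar special case: for the cyclic group, equivariant matrices are exactly the circulant matrices, which the discrete Fourier transform diagonalizes (a small NumPy illustration of the principle, not the thesis code):

    ```python
    import numpy as np

    def circulant_matvec(first_col, x):
        """Multiply a circulant matrix (cyclic-group-equivariant case) by x via
        the FFT: pointwise multiplication in the Fourier domain replaces the
        dense O(n^2) product with an O(n log n) one."""
        return np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))

    c = np.array([1.0, 2.0, 0.0, 3.0])   # first column of the circulant matrix
    x = np.array([4.0, 1.0, 0.0, 2.0])
    y = circulant_matvec(c, x)
    ```

    For richer groups (dihedral, symmetric), the same idea generalizes to Fourier transforms over the group, which is what makes the factorization approach fast.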

  5. Rapid Integration of Artificial Sensory Feedback during Operant Conditioning of Motor Cortex Neurons.

    Science.gov (United States)

    Prsa, Mario; Galiñanes, Gregorio L; Huber, Daniel

    2017-02-22

    Neuronal motor commands, whether generating real or neuroprosthetic movements, are shaped by ongoing sensory feedback from the displacement being produced. Here we asked if cortical stimulation could provide artificial feedback during operant conditioning of cortical neurons. Simultaneous two-photon imaging and real-time optogenetic stimulation were used to train mice to activate a single neuron in motor cortex (M1), while continuous feedback of its activity level was provided by proportionally stimulating somatosensory cortex. This artificial signal was necessary to rapidly learn to increase the conditioned activity, detect correct performance, and maintain the learned behavior. Population imaging in M1 revealed that learning-related activity changes are observed in the conditioned cell only, which highlights the functional potential of individual neurons in the neocortex. Our findings demonstrate the capacity of animals to use an artificially induced cortical channel in a behaviorally relevant way and reveal the remarkable speed and specificity at which this can occur. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Spiking Neurons for Analysis of Patterns

    Science.gov (United States)

    Huntsberger, Terrance

    2008-01-01

    Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological
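
    The timing-based computation that gives spiking neurons their efficiency can be illustrated with a simple latency code, in which stronger inputs fire earlier (an illustrative encoding scheme, not the SVM itself):

    ```python
    import numpy as np

    def latency_encode(intensities, t_max=0.1):
        """Map input intensities in (0, 1] to spike times in seconds:
        the strongest input fires first, weaker inputs fire later."""
        return t_max * (1.0 - np.asarray(intensities, dtype=float))

    # Three sensor channels of decreasing strength produce an ordered spike train.
    times = latency_encode([1.0, 0.5, 0.1])
    ```

    Downstream neurons can then read the rank order or relative delays of spikes, rather than averaging activity over time as traditional artificial neurons do.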

  7. Operant conditioning of synaptic and spiking activity patterns in single hippocampal neurons.

    Science.gov (United States)

    Ishikawa, Daisuke; Matsumoto, Nobuyoshi; Sakaguchi, Tetsuya; Matsuki, Norio; Ikegaya, Yuji

    2014-04-02

    Learning is a process of plastic adaptation through which a neural circuit generates a more preferable outcome; however, at a microscopic level, little is known about how synaptic activity is patterned into a desired configuration. Here, we report that animals can generate a specific form of synaptic activity in a given neuron in the hippocampus. In awake, head-restrained mice, we applied electrical stimulation to the lateral hypothalamus, a reward-associated brain region, when whole-cell patch-clamped CA1 neurons exhibited spontaneous synaptic activity that met preset criteria. Within 15 min, the mice learned to frequently generate the excitatory synaptic input pattern that satisfied the criteria. This reinforcement learning of synaptic activity was not observed for inhibitory input patterns. When a burst unit activity pattern was conditioned in paired and nonpaired paradigms, the frequency of burst-spiking events increased and decreased, respectively. The burst reinforcement occurred in the conditioned neuron but not in other adjacent neurons; however, ripple field oscillations were concomitantly reinforced. Neural conditioning depended on activation of NMDA receptors and dopamine D1 receptors. Acutely stressed mice and depression model mice that were subjected to forced swimming failed to exhibit the neural conditioning. This learning deficit was rescued by repetitive treatment with fluoxetine, an antidepressant. Therefore, internally motivated animals are capable of routing an ongoing action potential series into a specific neural pathway of the hippocampal network.

  8. ASSET: Analysis of Sequences of Synchronous Events in Massively Parallel Spike Trains

    Science.gov (United States)

    Canova, Carlos; Denker, Michael; Gerstein, George; Helias, Moritz

    2016-01-01

    With the ability to observe the activity from large numbers of neurons simultaneously using modern recording technologies, the chance to identify sub-networks involved in coordinated processing increases. Sequences of synchronous spike events (SSEs) constitute one type of such coordinated spiking that propagates activity in a temporally precise manner. The synfire chain was proposed as one potential model for such network processing. Previous work introduced a method for visualization of SSEs in massively parallel spike trains, based on an intersection matrix that contains in each entry the degree of overlap of active neurons in two corresponding time bins. Repeated SSEs are reflected in the matrix as diagonal structures of high overlap values. The method as such, however, leaves the task of identifying these diagonal structures to visual inspection rather than to a quantitative analysis. Here we present ASSET (Analysis of Sequences of Synchronous EvenTs), an improved, fully automated method which determines diagonal structures in the intersection matrix by a robust mathematical procedure. The method consists of a sequence of steps that i) assess which entries in the matrix potentially belong to a diagonal structure, ii) cluster these entries into individual diagonal structures and iii) determine the neurons composing the associated SSEs. We employ parallel point processes generated by stochastic simulations as test data to demonstrate the performance of the method under a wide range of realistic scenarios, including different types of non-stationarity of the spiking activity and different correlation structures. Finally, the ability of the method to discover SSEs is demonstrated on complex data from large network simulations with embedded synfire chains. Thus, ASSET represents an effective and efficient tool to analyze massively parallel spike data for temporal sequences of synchronous activity. PMID:27420734
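
    The intersection matrix at the core of the method is a one-line computation once the spike trains are binned; a hedged NumPy sketch on toy data (not the ASSET implementation):

    ```python
    import numpy as np

    def intersection_matrix(binned):
        """binned: boolean array (n_bins, n_neurons). Entry [i, j] counts the
        neurons active in both time bin i and time bin j."""
        b = np.asarray(binned, dtype=int)
        return b @ b.T

    # Toy data: a three-step sequence of cell assemblies, repeated once.
    binned = np.zeros((6, 9), dtype=bool)
    for start_bin in (0, 3):
        for step in range(3):
            binned[start_bin + step, 3 * step:3 * step + 3] = True

    m = intersection_matrix(binned)
    # The repeated sequence appears as an off-diagonal band of high overlap
    # at entries m[0, 3], m[1, 4], m[2, 5] -- the structure ASSET detects.
    ```

    ASSET's contribution is to replace visual inspection of this matrix with a statistical procedure that extracts such diagonal bands automatically.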

  9. The timing of differentiation of adult hippocampal neurons is crucial for spatial memory.

    Directory of Open Access Journals (Sweden)

    Stefano Farioli-Vecchioli

    2008-10-01

    Adult neurogenesis in the dentate gyrus plays a critical role in hippocampus-dependent spatial learning. It remains unknown, however, how new neurons become functionally integrated into spatial circuits and contribute to hippocampus-mediated forms of learning and memory. To investigate these issues, we used a mouse model in which the differentiation of adult-generated dentate gyrus neurons can be advanced by conditionally expressing the pro-differentiative gene PC3 (Tis21/BTG2) in nestin-positive progenitor cells. In contrast to previous studies that affected the number of newly generated neurons, this strategy selectively changes their timing of differentiation. New, adult-generated dentate gyrus progenitors in which the PC3 transgene was expressed showed accelerated differentiation and significantly reduced dendritic arborization and spine density. Functionally, this genetic manipulation specifically affected different hippocampus-dependent learning and memory tasks, including contextual fear conditioning, and selectively reduced synaptic plasticity in the dentate gyrus. Morphological and functional analyses of hippocampal neurons at different stages of differentiation, following transgene activation within defined time-windows, revealed that the new, adult-generated neurons up to 3-4 weeks of age are required not only to acquire new spatial information but also to use previously consolidated memories. Thus, the correct unwinding of these key memory functions, which can be an expression of the ability of adult-generated neurons to link subsequent events in memory circuits, is critically dependent on the correct timing of the initial stages of neuron maturation and connection to existing circuits.

  10. Memory formation orchestrates the wiring of adult-born hippocampal neurons into brain circuits.

    Science.gov (United States)

    Petsophonsakul, Petnoi; Richetin, Kevin; Andraini, Trinovita; Roybon, Laurent; Rampon, Claire

    2017-08-01

    During memory formation, structural rearrangements of dendritic spines provide a means to durably modulate synaptic connectivity within neuronal networks. New neurons generated throughout adult life in the dentate gyrus of the hippocampus contribute to learning and memory. As these neurons become incorporated into the network, they generate huge numbers of new connections that modify hippocampal circuitry and functioning. However, it is still unclear how the dynamic process of memory formation influences their synaptic integration into neuronal circuits. New memories are established according to a multistep process during which new information is first acquired and then consolidated to form a stable memory trace. Upon recall, memory is transiently destabilized and vulnerable to modification. Using contextual fear conditioning, we found that learning was associated with an acceleration of dendritic spine formation in adult-born neurons, and that spine connectivity becomes strengthened after memory consolidation. Moreover, we observed that afferent connectivity onto adult-born neurons is enhanced after memory retrieval, while extinction training induces a change of spine shapes. Together, these findings reveal that the neuronal activity supporting memory processes strongly influences the structural dendritic integration of adult-born neurons into pre-existing neuronal circuits. Such change of afferent connectivity is likely to impact the overall wiring of the hippocampal network and, consequently, to regulate hippocampal function.

  11. Social interaction with a tutor modulates responsiveness of specific auditory neurons in juvenile zebra finches.

    Science.gov (United States)

    Yanagihara, Shin; Yazaki-Sugiyama, Yoko

    2018-04-12

    Behavioral states of animals, such as observing the behavior of a conspecific, modify signal perception and/or sensations that influence state-dependent higher cognitive behavior, such as learning. Recent studies have shown that neuronal responsiveness to sensory signals is modified when animals are engaged in social interactions with others or in locomotor activities. However, how these changes produce state-dependent differences in higher cognitive function is still largely unknown. Zebra finches, which have served as the premier songbird model, learn to sing from early auditory experiences with tutors. They can also learn from playback of recorded songs; however, learning can be greatly improved when song models are provided through social communication with tutors (Eales, 1989; Chen et al., 2016). Recently we found a subset of neurons in the higher-level auditory cortex of juvenile zebra finches that exhibit highly selective auditory responses to the tutor song after song learning, suggesting an auditory memory trace of the tutor song (Yanagihara and Yazaki-Sugiyama, 2016). Here we show that auditory responses of these selective neurons became greater when juveniles were paired with their tutors, while responses of non-selective neurons did not change. These results suggest that social interaction modulates cortical activity and might function in state-dependent song learning. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Value learning through reinforcement : The basics of dopamine and reinforcement learning

    NARCIS (Netherlands)

    Daw, N.D.; Tobler, P.N.; Glimcher, P.W.; Fehr, E.

    2013-01-01

    This chapter provides an overview of reinforcement learning and temporal difference learning and relates these topics to the firing properties of midbrain dopamine neurons. First, we review the Rescorla-Wagner learning rule and basic learning phenomena, such as blocking, which the rule explains. Then
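
    The Rescorla-Wagner rule, and the blocking effect it explains, can be reproduced in a few lines (illustrative learning rate and trial counts):

    ```python
    import numpy as np

    def rescorla_wagner(trials, n_cues, lr=0.2):
        """Associative strengths V updated by a prediction error that is
        shared across all cues active on a trial."""
        v = np.zeros(n_cues)
        for cues, reward in trials:
            x = np.zeros(n_cues)
            x[list(cues)] = 1.0
            delta = reward - v @ x      # common prediction error
            v += lr * delta * x         # only active cues are updated
        return v

    # Blocking: cue 0 is pretrained alone, then cues 0+1 are paired with the
    # same reward. Because cue 0 already predicts the reward, the shared
    # prediction error is ~0 and cue 1 acquires almost no strength.
    v = rescorla_wagner([({0}, 1.0)] * 50 + [({0, 1}, 1.0)] * 50, n_cues=2)
    ```

    The prediction error `delta` in this rule is the quantity whose temporal-difference generalization is thought to be signaled by midbrain dopamine neurons.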

  13. BAF53b, a Neuron-Specific Nucleosome Remodeling Factor, Is Induced after Learning and Facilitates Long-Term Memory Consolidation.

    Science.gov (United States)

    Yoo, Miran; Choi, Kwang-Yeon; Kim, Jieun; Kim, Mujun; Shim, Jaehoon; Choi, Jun-Hyeok; Cho, Hye-Yeon; Oh, Jung-Pyo; Kim, Hyung-Su; Kaang, Bong-Kiun; Han, Jin-Hee

    2017-03-29

    Although epigenetic mechanisms of gene expression regulation have recently been implicated in memory consolidation and persistence, the role of nucleosome remodeling is largely unexplored. Recent studies show that the functional loss of BAF53b, a postmitotic neuron-specific subunit of the BAF nucleosome-remodeling complex, results in deficits in the consolidation of hippocampus-dependent memory and cocaine-associated memory in the rodent brain. However, it is unclear whether BAF53b expression is regulated during memory formation and how BAF53b regulates fear memory in the amygdala, a key brain site for fear memory encoding and storage. To address these questions, we used viral vector approaches to either decrease or increase BAF53b function specifically in the lateral amygdala of adult mice in an auditory fear conditioning paradigm. Knockdown of Baf53b before training disrupted long-term memory formation with no effect on short-term memory, basal synaptic transmission, and spine structures. We observed in our qPCR analysis that BAF53b was induced in the lateral amygdala neurons at the late consolidation phase after fear conditioning. Moreover, transient BAF53b overexpression led to persistently enhanced memory formation, which was accompanied by an increase in thin-type spine density. Together, our results provide evidence that BAF53b is induced after learning, and show that this increase in BAF53b level facilitates memory consolidation, likely by regulating learning-related spine structural plasticity. SIGNIFICANCE STATEMENT Recent work in the rodent brain has begun to link nucleosome-remodeling-dependent epigenetic mechanisms to memory consolidation. Here we show that BAF53b, an epigenetic factor involved in nucleosome remodeling, is induced in the lateral amygdala neurons at the late phase of consolidation after fear conditioning. Using specific gene knockdown or overexpression approaches, we identify the critical role of BAF53b in the lateral amygdala neurons for

  14. Facilitation of the main generator source of earthworm muscle contraction by a peripheral neuron

    Directory of Open Access Journals (Sweden)

    Chang Y.C.

    1998-01-01

    A constant facilitation of responses evoked in the earthworm muscle contraction generator neurons by responses evoked in the neurons of its peripheral nervous system was demonstrated. It is based on the proposal that these two responses are bifurcations of an afferent response evoked by the same peripheral mechanical stimulus but converging again on this central neuron. A single-peaked generator response without facilitation was demonstrated by sectioning the afferent route of the peripheral facilitatory modulatory response, or conditioning response (CR). The multi-peaked response could be restored by restimulating the sectioned modulatory neuron with an intracellular substitutive conditioning stimulus (SCS). These multi-peaked responses were proposed to be the result of reverberation of the original single-peaked unconditioned response (UR) through a parallel (P) neuronal circuit which receives the facilitation of the peripheral modulatory neuron. This peripheral modulatory neuron was named the "Peri-Kästchen" (PK) neuron because it has about 20 peripheral processes distributed on the surface of a Kästchen of longitudinal muscle cells on the body wall of this preparation, as revealed by the Lucifer Yellow-CH filling method.

  15. Neuronal regulation of homeostasis by nutrient sensing.

    Science.gov (United States)

    Lam, Tony K T

    2010-04-01

    In type 2 diabetes and obesity, the homeostatic control of glucose and energy balance is impaired, leading to hyperglycemia and hyperphagia. Recent studies indicate that nutrient-sensing mechanisms in the body activate negative-feedback systems to regulate energy and glucose homeostasis through a neuronal network. Direct metabolic signaling within the intestine activates gut-brain and gut-brain-liver axes to regulate energy and glucose homeostasis, respectively. In parallel, direct metabolism of nutrients within the hypothalamus regulates food intake and blood glucose levels. These findings highlight the importance of the central nervous system in mediating the ability of nutrient sensing to maintain homeostasis. Furthermore, they provide a physiological and neuronal framework by which enhancing or restoring nutrient sensing in the intestine and the brain could normalize energy and glucose homeostasis in diabetes and obesity.

  16. The Age of Enlightenment: Evolving Opportunities in Brain Research Through Optical Manipulation of Neuronal Activity

    OpenAIRE

    Jerome, Jason; Heck, Detlef H.

    2011-01-01

    Optical manipulation of neuronal activity has rapidly developed into the most powerful and widely used approach to study mechanisms related to neuronal connectivity over a range of scales. Since the early use of single site uncaging to map network connectivity, rapid technological development of light modulation techniques has added important new options, such as fast scanning photostimulation, massively parallel control of light stimuli, holographic uncaging, and two-photon stimulation techn...

  17. Mathematical Abstraction: Constructing Concept of Parallel Coordinates

    Science.gov (United States)

    Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.

    2017-09-01

    Mathematical abstraction is an important process in teaching and learning mathematics, so pre-service mathematics teachers need to understand and experience this process. One of the theoretical-methodological frameworks for studying this process is Abstraction in Context (AiC). Based on this framework, the abstraction process comprises the observable epistemic actions Recognition, Building-With, Construction, and Consolidation, called the RBC + C model. This study investigates and analyzes how pre-service mathematics teachers constructed and consolidated the concept of Parallel Coordinates in a group discussion. It uses the AiC framework to analyze the mathematical abstraction of a group of four pre-service teachers learning Parallel Coordinates concepts. The data were collected through video recording, students' worksheets, a test, and field notes. The results show that the students' prior knowledge of the Cartesian coordinate system played a significant role in the process of constructing the Parallel Coordinates concept as new knowledge. The consolidation process was influenced by the social interaction between group members. The abstraction process that took place in this group was dominated by empirical abstraction, which emphasizes identifying characteristics of manipulated or imagined objects during the processes of recognizing and building-with.
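
    The construction at the heart of the task, mapping a Cartesian point to a polyline across parallel vertical axes, can be expressed directly (a minimal sketch; the axis ranges are assumed for illustration):

    ```python
    def to_parallel_coordinates(point, mins, maxs):
        """Map an n-dimensional point to the vertices of its polyline:
        one vertex per vertical axis (axis index on x), with the coordinate
        value normalized into [0, 1] on y."""
        return [
            (axis, (value - lo) / (hi - lo))
            for axis, (value, lo, hi) in enumerate(zip(point, mins, maxs))
        ]

    # The Cartesian point (2, 5, 1) becomes a three-vertex polyline.
    vertices = to_parallel_coordinates((2.0, 5.0, 1.0), mins=(0, 0, 0), maxs=(4, 10, 2))
    ```

    Connecting consecutive vertices with line segments yields the Parallel Coordinates representation of the point, which is exactly the link to prior Cartesian knowledge the study highlights.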

  18. Effect of Cistanche Desertica Polysaccharides on Learning and Memory Functions and Ultrastructure of Cerebral Neurons in Experimental Aging Mice

    Institute of Scientific and Technical Information of China (English)

    孙云; 邓杨梅; 王德俊; 沈春锋; 刘晓梅; 张洪泉

    2001-01-01

    Objective: To observe the effects of Cistanche desertica polysaccharides (CDP) on the learning and memory functions and cerebral ultrastructure in experimental aging mice. Methods: CDP was administered intragastrically at 50 or 100 mg/kg per day for 64 successive days to experimental aging model mice induced by D-galactose. The learning and memory functions of the mice were then estimated by the step-down test and Y-maze test; organelles of brain tissue and cerebral ultrastructure were observed by transmission electron microscopy, and physical strength was determined by a swimming test. Results: CDP obviously enhanced the learning and memory functions (P<0.01), prolonged the swimming time (P<0.05), decreased the amount of lipofuscin and slowed the degeneration of mitochondria in neurons (P<0.05), and ameliorated the degeneration of the cerebral ultrastructure in aging mice. Conclusion: CDP could improve the impaired physiological function and alleviate cerebral morphological change in experimental aging mice.

  19. Distributed Cerebellar Motor Learning; a Spike-Timing-Dependent Plasticity Model

    Directory of Open Access Journals (Sweden)

    Niceto Rafael Luque

    2016-03-01

    Deep cerebellar nuclei neurons receive both inhibitory (GABAergic) synaptic currents from Purkinje cells (within the cerebellar cortex) and excitatory (glutamatergic) synaptic currents from mossy fibres. These two deep cerebellar nucleus inputs are thought to be adaptive as well, embedding interesting properties in the framework of accurate movements. We show that distributed spike-timing-dependent plasticity (STDP) mechanisms located at different cerebellar sites (parallel fibres to Purkinje cells, mossy fibres to deep cerebellar nucleus cells, and Purkinje cells to deep cerebellar nucleus cells) in closed-loop simulations provide an explanation for the complex learning properties of the cerebellum in motor learning. Concretely, we propose a new mechanistic cerebellar spiking model. In this model, the deep cerebellar nuclei embed a dual functionality: they act as a gain adaptation mechanism and as a facilitator of slow memory consolidation at mossy fibre to deep cerebellar nucleus synapses. Equipping the cerebellum with excitatory (e-STDP) and inhibitory (i-STDP) mechanisms at deep cerebellar nuclei afferents allows the accommodation of synaptic memories that were formed at parallel fibre to Purkinje cell synapses and then transferred to mossy fibre to deep cerebellar nucleus synapses. These adaptive mechanisms also contribute to modulating the deep-cerebellar-nucleus output firing rate (output gain modulation) towards optimising its working range.
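    The distributed STDP mechanism described above can be illustrated with a minimal pair-based weight update. This is a generic sketch of spike-timing-dependent plasticity, not the cerebellar model itself; the amplitudes and time constants below are illustrative assumptions:

    ```python
    import math

    def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        # Pair-based STDP weight change for a spike-time difference
        # dt = t_post - t_pre (milliseconds); parameters are illustrative.
        if dt >= 0:
            return a_plus * math.exp(-dt / tau_plus)   # pre before post: potentiation
        return -a_minus * math.exp(dt / tau_minus)     # post before pre: depression

    # Update one synaptic weight for a single pre/post spike pair, clipped to [0, 1]
    w = 0.5
    t_pre, t_post = 10.0, 15.0
    w = min(1.0, max(0.0, w + stdp_dw(t_post - t_pre)))
    ```

    With separate excitatory and inhibitory variants of such a rule at each afferent, weights at different sites can adapt in opposite directions for the same spike timing, which is the flavour of the e-STDP/i-STDP pairing the abstract describes.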

  20. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of snow, multicore, parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offload work to multiple machines to address R's memory barrier.
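    The chunk-and-combine pattern these R packages use (split the data, summarise each piece on a separate worker, merge the partial results) can be sketched with Python's standard multiprocessing module; the four-worker split and the mean computation are illustrative assumptions, not code from the book:

    ```python
    from multiprocessing import Pool

    def chunk_mean(chunk):
        # Per-worker partial summary: (sum, count) combines exactly across workers,
        # whereas averaging per-chunk means would break on unequal chunk sizes.
        return (sum(chunk), len(chunk))

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]  # four interleaved, equal-sized slices
        with Pool(4) as pool:
            partials = pool.map(chunk_mean, chunks)
        total = sum(s for s, _ in partials)
        count = sum(c for _, c in partials)
        print(total / count)  # 499999.5, the mean of 0..999999
    ```

    Returning combinable partials rather than final answers is the same design choice that makes snow's `clusterApply`-style workflows and Hadoop's map/reduce split cleanly across machines.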

  1. Hemispheric dominance underlying the neural substrate for learned vocalizations develops with experience.

    Science.gov (United States)

    Chirathivat, Napim; Raja, Sahitya C; Gobes, Sharon M H

    2015-06-22

    Many aspects of song learning in songbirds resemble characteristics of speech acquisition in humans. Genetic, anatomical and behavioural parallels have most recently been extended with demonstrated similarities in hemispheric dominance between humans and songbirds: the avian higher order auditory cortex is left-lateralized for processing song memories in juvenile zebra finches that already have formed a memory of their fathers' song, just like Wernicke's area in the left hemisphere of the human brain is dominant for speech perception. However, it is unclear if hemispheric specialization is due to pre-existing functional asymmetry or the result of learning itself. Here we show that in juvenile male and female zebra finches that had never heard an adult song before, neuronal activation after initial exposure to a conspecific song is bilateral. Thus, like in humans, hemispheric dominance develops with vocal proficiency. A left-lateralized functional system that develops through auditory-vocal learning may be an evolutionary adaptation that could increase the efficiency of transferring information within one hemisphere, benefiting the production and perception of learned communication signals.

  2. Failure of Neuronal Maturation in Alzheimer Disease Dentate Gyrus

    Science.gov (United States)

    Li, Bin; Yamamori, Hidenaga; Tatebayashi, Yoshitaka; Shafit-Zagardo, Bridget; Tanimukai, Hitoshi; Chen, She; Iqbal, Khalid; Grundke-Iqbal, Inge

    2011-01-01

    The dentate gyrus, an important anatomic structure of the hippocampal formation, is one of the major areas in which neurogenesis takes place in the adult mammalian brain. Neurogenesis in the dentate gyrus is thought to play an important role in hippocampus-dependent learning and memory. Neurogenesis has been reported to be increased in the dentate gyrus of patients with Alzheimer disease, but it is not known whether the newly generated neurons differentiate into mature neurons. In this study, the expression of the mature neuronal marker high molecular weight microtubule-associated protein (MAP) isoforms MAP2a and b was found to be dramatically decreased in Alzheimer disease dentate gyrus, as determined by immunohistochemistry and in situ hybridization. The total MAP2, including expression of the immature neuronal marker, the MAP2c isoform, was less affected. These findings suggest that newly generated neurons in Alzheimer disease dentate gyrus do not become mature neurons, although neuroproliferation is increased. PMID:18091557

  3. The endogenous alkaloid harmane: acidifying and activity-reducing effects on hippocampal neurons in vitro.

    Science.gov (United States)

    Bonnet, Udo; Scherbaum, Norbert; Wiemann, Martin

    2008-02-15

    The endogenous alkaloid harmane is enriched in plasma of patients with neurodegenerative or addictive disorders. As harmane affects neuronal activity and viability and because both parameters are strongly influenced by intracellular pH (pH(i)), we tested whether effects of harmane are correlated with altered pH(i) regulation. Pyramidal neurons in the CA3 field of hippocampal slices were investigated under bicarbonate-buffered conditions. Harmane (50 and 100 microM) reversibly decreased spontaneous firing of action potentials and caffeine-induced bursting of CA3 neurons. In parallel experiments, 50 and 100 microM harmane evoked a neuronal acidification of 0.12+/-0.08 and 0.18+/-0.07 pH units, respectively. Recovery from intracellular acidification subsequent to an ammonium prepulse was also impaired, suggesting an inhibition of transmembrane acid extrusion by harmane. Harmane may modulate neuronal functions via altered pH(i)-regulation. Implications of these findings for neuronal survival are discussed.

  4. Learning to understand others' actions.

    Science.gov (United States)

    Press, Clare; Heyes, Cecilia; Kilner, James M

    2011-06-23

    Despite nearly two decades of research on mirror neurons, there is still much debate about what they do. The most enduring hypothesis is that they enable 'action understanding'. However, recent critical reviews have failed to find compelling evidence in favour of this view. Instead, these authors argue that mirror neurons are produced by associative learning and therefore that they cannot contribute to action understanding. The present opinion piece suggests that this argument is flawed. We argue that mirror neurons may both develop through associative learning and contribute to inferences about the actions of others.

  5. Neuronal plasticity and multisensory integration in filial imprinting.

    Science.gov (United States)

    Town, Stephen Michael; McCabe, Brian John

    2011-03-10

    Many organisms sample their environment through multiple sensory systems and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus and novel object and their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus.

  6. Neuronal Plasticity and Multisensory Integration in Filial Imprinting

    Science.gov (United States)

    Town, Stephen Michael; McCabe, Brian John

    2011-01-01

    Many organisms sample their environment through multiple sensory systems and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus and novel object and their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus. PMID:21423770

  7. Quantum learning algorithms for quantum measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bisio, Alessandro, E-mail: alessandro.bisio@unipv.it [QUIT Group, Dipartimento di Fisica ' A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); D' Ariano, Giacomo Mauro, E-mail: dariano@unipv.it [QUIT Group, Dipartimento di Fisica ' A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Perinotti, Paolo, E-mail: paolo.perinotti@unipv.it [QUIT Group, Dipartimento di Fisica ' A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Sedlak, Michal, E-mail: michal.sedlak@unipv.it [QUIT Group, Dipartimento di Fisica ' A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia)

    2011-09-12

    We study quantum learning algorithms for quantum measurements. The optimal learning algorithm is derived for arbitrary von Neumann measurements in the case of training with one or two examples. The analysis of the case of three examples reveals that, differently from the learning of unitary gates, the optimal algorithm for learning of quantum measurements cannot be parallelized, and requires quantum memories for the storage of information. -- Highlights: → Optimal learning algorithm for von Neumann measurements. → From 2 copies to 1 copy: the optimal strategy is parallel. → From 3 copies to 1 copy: the optimal strategy must be non-parallel.

  8. Quantum learning algorithms for quantum measurements

    International Nuclear Information System (INIS)

    Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo; Sedlak, Michal

    2011-01-01

    We study quantum learning algorithms for quantum measurements. The optimal learning algorithm is derived for arbitrary von Neumann measurements in the case of training with one or two examples. The analysis of the case of three examples reveals that, differently from the learning of unitary gates, the optimal algorithm for learning of quantum measurements cannot be parallelized, and requires quantum memories for the storage of information. -- Highlights: → Optimal learning algorithm for von Neumann measurements. → From 2 copies to 1 copy: the optimal strategy is parallel. → From 3 copies to 1 copy: the optimal strategy must be non-parallel.

  9. Toxoplasma gondii Actively Inhibits Neuronal Function in Chronically Infected Mice

    Science.gov (United States)

    Haroon, Fahad; Händel, Ulrike; Angenstein, Frank; Goldschmidt, Jürgen; Kreutzmann, Peter; Lison, Holger; Fischer, Klaus-Dieter; Scheich, Henning; Wetzel, Wolfram; Schlüter, Dirk; Budinger, Eike

    2012-01-01

    Upon infection with the obligate intracellular parasite Toxoplasma gondii, fast-replicating tachyzoites infect a broad spectrum of host cells including neurons. Under the pressure of the immune response, tachyzoites convert into slow-replicating bradyzoites, which persist as cysts in neurons. Currently, it is unclear whether T. gondii alters the functional activity of neurons, which may contribute to the altered behaviour of T. gondii–infected mice and men. In the present study we demonstrate that, upon oral infection with T. gondii cysts, chronically infected BALB/c mice lost their natural fear of cat urine over time, which was paralleled by the persistence of the parasite in brain regions affecting behaviour and odor perception. Detailed immunohistochemistry showed that in infected neurons not only parasitic cysts but also the host cell cytoplasm and some axons stained positive for Toxoplasma antigen, suggesting that parasitic proteins might directly interfere with neuronal function. In fact, in vitro live cell calcium (Ca2+) imaging studies revealed that tachyzoites actively manipulated Ca2+ signalling upon glutamate stimulation, leading to either hyper- or hypo-responsive neurons. Experiments with thapsigargin, an inhibitor of endoplasmic reticulum Ca2+ uptake, indicate that tachyzoites deplete Ca2+ stores in the endoplasmic reticulum. Furthermore, in vivo studies revealed that the activity-dependent uptake of the potassium analogue thallium was reduced in cyst-harbouring neurons, indicating their functional impairment. The percentage of non-functional neurons increased over time. In conclusion, both bradyzoites and tachyzoites functionally silence infected neurons, which may significantly contribute to the altered behaviour of the host. PMID:22530040

  10. Toxoplasma gondii actively inhibits neuronal function in chronically infected mice.

    Directory of Open Access Journals (Sweden)

    Fahad Haroon

    Upon infection with the obligate intracellular parasite Toxoplasma gondii, fast-replicating tachyzoites infect a broad spectrum of host cells including neurons. Under the pressure of the immune response, tachyzoites convert into slow-replicating bradyzoites, which persist as cysts in neurons. Currently, it is unclear whether T. gondii alters the functional activity of neurons, which may contribute to the altered behaviour of T. gondii-infected mice and men. In the present study we demonstrate that, upon oral infection with T. gondii cysts, chronically infected BALB/c mice lost their natural fear of cat urine over time, which was paralleled by the persistence of the parasite in brain regions affecting behaviour and odor perception. Detailed immunohistochemistry showed that in infected neurons not only parasitic cysts but also the host cell cytoplasm and some axons stained positive for Toxoplasma antigen, suggesting that parasitic proteins might directly interfere with neuronal function. In fact, in vitro live cell calcium (Ca2+) imaging studies revealed that tachyzoites actively manipulated Ca2+ signalling upon glutamate stimulation, leading to either hyper- or hypo-responsive neurons. Experiments with thapsigargin, an inhibitor of endoplasmic reticulum Ca2+ uptake, indicate that tachyzoites deplete Ca2+ stores in the endoplasmic reticulum. Furthermore, in vivo studies revealed that the activity-dependent uptake of the potassium analogue thallium was reduced in cyst-harbouring neurons, indicating their functional impairment. The percentage of non-functional neurons increased over time. In conclusion, both bradyzoites and tachyzoites functionally silence infected neurons, which may significantly contribute to the altered behaviour of the host.

  11. Modulation of neuronal network activity with ghrelin

    NARCIS (Netherlands)

    Stoyanova, Irina; Rutten, Wim; le Feber, Jakob

    2012-01-01

    Ghrelin is a neuropeptide regulating multiple physiological processes, including high brain functions such as learning and memory formation. However, the effect of ghrelin on network activity patterns and developments has not been studied yet. Therefore, we used dissociated cortical neurons plated

  12. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan; Naous, Rawan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.

    2015-01-01

    . Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards

  13. Associative learning is necessary but not sufficient for mirror neuron development

    OpenAIRE

    Bonaiuto, James

    2014-01-01

    Existing computational models of the mirror system demonstrate the additional circuitry needed for mirror neurons to display the range of properties that they exhibit. Such models emphasize the need for existing connectivity to form visuomotor associations, processing to reduce the space of possible inputs, and demonstrate the role neurons with mirror properties might play in monitoring one's own actions.

  14. Associative learning is necessary but not sufficient for mirror neuron development.

    Science.gov (United States)

    Bonaiuto, James

    2014-04-01

    Existing computational models of the mirror system demonstrate the additional circuitry needed for mirror neurons to display the range of properties that they exhibit. Such models emphasize the need for existing connectivity to form visuomotor associations, processing to reduce the space of possible inputs, and demonstrate the role neurons with mirror properties might play in monitoring one's own actions.

  15. Military Curricula for Vocational & Technical Education. Basic Electricity and Electronics Individualized Learning System. CANTRAC A-100-0010. Module Six: Parallel Circuits. Study Booklet.

    Science.gov (United States)

    Chief of Naval Education and Training Support, Pensacola, FL.

    This individualized learning module on parallel circuits is one in a series of modules for a course in basic electricity and electronics. The course is one of a number of military-developed curriculum packages selected for adaptation to vocational instructional and curriculum development in a civilian setting. Four lessons are included in the…

  16. Repeated Blockade of NMDA Receptors during Adolescence Impairs Reversal Learning and Disrupts GABAergic Interneurons in Rat Medial Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Jitao eLi

    2016-03-01

    Adolescence is of particular significance to schizophrenia, since psychosis onset typically occurs in this critical period. Based on the N-methyl-D-aspartate (NMDA) receptor hypofunction hypothesis of schizophrenia, in this study we investigated whether and how repeated NMDA receptor blockade during adolescence would affect GABAergic interneurons in rat medial prefrontal cortex (mPFC) and mPFC-mediated cognitive functions. Specifically, adolescent rats were subjected to intraperitoneal administration of MK-801 (0.1, 0.2, or 0.4 mg/kg), a non-competitive NMDA receptor antagonist, for 14 days and then tested for reference memory and reversal learning in the water maze. The density of parvalbumin (PV)-, calbindin (CB)- and calretinin (CR)-positive neurons in mPFC was analyzed either 24 hours or 7 days after drug cessation. We found that MK-801 treatment delayed reversal learning in the water maze without affecting initial acquisition. Strikingly, MK-801 treatment also significantly reduced the density of PV+ and CB+ neurons, and this effect persisted for 7 days after drug cessation at the dose of 0.2 mg/kg. We further demonstrated that the reduction in PV+ and CB+ neuron densities was ascribed to a downregulation of the expression levels of PV and CB, not to neuronal death. These results parallel the behavioral and neuropathological changes of schizophrenia and provide evidence that adolescent NMDA receptor antagonism offers a useful tool for unraveling the etiology of the disease.

  17. Design of time-pulse coded optoelectronic neuronal elements for nonlinear transformation and integration

    Science.gov (United States)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.

    2008-03-01

    The paper demonstrates the relevance of neurophysiologically motivated neuron arrays with flexibly programmable functions and operations, with the possibility of selecting the required accuracy and type of nonlinear transformation and learning. We consider neuron designs and simulation results for multichannel spatio-temporal algebraic accumulation-integration of optical signals, and show the advantages for nonlinear transformation and summation-integration. The proposed circuits are simple and can have intellectual properties such as learning and adaptation. The integrator-neuron is based on CMOS current mirrors and comparators. Performance: power consumption 100...500 μW; signal period 0.1...1 ms; input optical signal power 0.2...20 μW; time delays less than 1 μs; number of optical signals 2...10; integration time 10...100 signal periods; integration error about 1%. Various modifications of the neuron-integrators with improved performance and for different applications are considered in the paper.
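    The accumulation-integration operation such an integrator-neuron performs can be sketched numerically as a discrete-time sum over several input channels. This is a behavioural sketch only, not the CMOS circuit; the leak term, time step, and sample values are illustrative assumptions:

    ```python
    def integrate_channels(signals, dt=1e-4, leak=0.0):
        # Discrete-time accumulation of several input channels:
        #   y[t] = y[t-1] * (1 - leak) + sum_i x_i[t] * dt
        # With leak=0 this is plain algebraic integration of the channel sum.
        y, trace = 0.0, []
        for samples in zip(*signals):
            y = y * (1.0 - leak) + sum(samples) * dt
            trace.append(y)
        return trace

    # Two constant channels: with no leak, the accumulated value grows linearly
    trace = integrate_channels([[1.0] * 10, [0.5] * 10], dt=0.1)
    ```

    A nonzero `leak` bounds the accumulated value, which is one way a physical integrator's finite working range can be modelled.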

  18. Neurons for hunger and thirst transmit a negative-valence teaching signal

    Science.gov (United States)

    Gong, Rong; Magnus, Christopher J.; Yu, Yang; Sternson, Scott M.

    2015-01-01

    Homeostasis is a biological principle for regulation of essential physiological parameters within a set range. Behavioural responses due to deviation from homeostasis are critical for survival, but motivational processes engaged by physiological need states are incompletely understood. We examined motivational characteristics and dynamics of two separate neuron populations that regulate energy and fluid homeostasis by using cell type-specific activity manipulations in mice. We found that starvation-sensitive AGRP neurons exhibit properties consistent with a negative-valence teaching signal. Mice avoided activation of AGRP neurons, indicating that AGRP neuron activity has negative valence. AGRP neuron inhibition conditioned preference for flavours and places. Correspondingly, deep-brain calcium imaging revealed that AGRP neuron activity rapidly reduced in response to food-related cues. Complementary experiments activating thirst-promoting neurons also conditioned avoidance. Therefore, these need-sensing neurons condition preference for environmental cues associated with nutrient or water ingestion, which is learned through reduction of negative-valence signals during restoration of homeostasis. PMID:25915020

  19. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan

    2015-04-01

    Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.
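    The stochastic firing that the memristor is said to emulate can be sketched as an escape-rate model, in which the probability of spiking grows sigmoidally with the membrane potential; the threshold and slope parameters below are illustrative assumptions, not those of the SRM fit or the device:

    ```python
    import math
    import random

    def srm_fire_prob(u, theta=1.0, beta=4.0):
        # Escape-rate firing probability: sigmoidal in membrane potential u,
        # centred on an assumed threshold theta with slope beta.
        return 1.0 / (1.0 + math.exp(-beta * (u - theta)))

    def step(u, rng):
        # One stochastic update: the neuron fires with probability srm_fire_prob(u)
        return rng.random() < srm_fire_prob(u)

    rng = random.Random(0)
    # Empirical firing rates at three membrane potentials
    rates = {u: sum(step(u, rng) for _ in range(1000)) / 1000 for u in (0.5, 1.0, 1.5)}
    ```

    Because firing is probabilistic rather than deterministic, repeated runs at a fixed potential yield a firing rate rather than a fixed spike train, which is the property that spike-based sampling algorithms exploit.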

  20. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    International Nuclear Information System (INIS)

    De Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-01-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and NVIDIA graphical processing units respectively. A global spiking list that represents the state of the neural network at each instant is described. This list indexes each neuron that fires during the current simulation time step, so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the way towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel computer with 64 nodes (128 cores).
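    The global spiking list idea (index every neuron that fires in the current step, then apply all of those spikes to the whole network at once) can be sketched in NumPy, where the batched weight-column sum plays the role of the simultaneous processing across computing units; the network size, weights, and leak dynamics below are toy assumptions, not the ODLM:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    w = rng.uniform(0.0, 0.02, size=(n, n))  # toy all-to-all synaptic weights
    v = np.zeros(n)                          # membrane potentials
    threshold, decay = 1.0, 0.95

    for t in range(100):
        v = decay * v + rng.uniform(0.0, 0.06, size=n)  # leak plus background drive
        spike_list = np.flatnonzero(v >= threshold)     # global spiking list this step
        if spike_list.size:
            v += w[:, spike_list].sum(axis=1)           # apply all spikes' influence at once
            v[spike_list] = 0.0                         # reset the neurons that fired
    ```

    Keeping only the indices of firing neurons, rather than a dense activity vector, is what makes the update cheap when spikes are sparse and easy to broadcast to every worker in an MPI or CUDA implementation.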

  1. Endoplasmic reticulum stress in wake-active neurons progresses with aging.

    Science.gov (United States)

    Naidoo, Nirinjini; Zhu, Jingxu; Zhu, Yan; Fenik, Polina; Lian, Jie; Galante, Ray; Veasey, Sigrid

    2011-08-01

    Fragmentation of wakefulness and sleep are expected outcomes of advanced aging. We hypothesize that wake neurons develop endoplasmic reticulum dyshomeostasis with aging, in parallel with impaired wakefulness. In this series of experiments, we sought to more fully characterize age-related changes in wakefulness and then, in relevant wake neuronal populations, explore functionality and endoplasmic reticulum homeostasis. We report that old mice show more frequent sleep/wake transitions in the active period, with markedly shortened wake periods, shortened latencies to sleep, and less wake time in the subjective day in response to a novel social encounter. Consistent with sleep/wake instability and reduced social-encounter wakefulness, orexinergic and noradrenergic wake neurons in aged mice show a reduced c-fos response to wakefulness and endoplasmic reticulum dyshomeostasis with increased nuclear translocation of CHOP and GADD34. We have identified an age-related unfolded protein response injury to, and dysfunction of, wake neurons. It is anticipated that these changes contribute to sleep/wake fragmentation and cognitive impairment in aging. © 2011 The Authors. Aging Cell © 2011 Blackwell Publishing Ltd/Anatomical Society of Great Britain and Ireland.

  2. Fear conditioning leads to alteration in specific genes expression in cortical and thalamic neurons that project to the lateral amygdala.

    Science.gov (United States)

    Katz, Ira K; Lamprecht, Raphael

    2015-02-01

    RNA transcription is needed for memory formation. However, the ability to identify genes whose expression is altered by learning is greatly impaired because of methodological difficulties in profiling gene expression in specific neurons involved in memory formation. Here, we report a novel approach to monitor the expression of genes after learning in neurons in specific brain pathways needed for memory formation. In this study, we aimed to monitor gene expression after fear learning. We retrogradely labeled discrete thalamic neurons that project to the lateral amygdala (LA) of rats. The labeled neurons were dissected, using laser microdissection microscopy, after fear conditioning learning or unpaired training. The RNAs from the dissected neurons were subjected to microarray analysis. The levels of selected RNAs detected by the microarray analysis to be altered by fear conditioning were also assessed by nanostring analysis. We observed that the expression of genes involved in the regulation of translation, maturation and degradation of proteins was increased 6 h after fear conditioning compared to unpaired or naïve trained rats. These genes were not expressed 24 h after training or in cortical neurons that project to the LA. The expression of genes involved in transcription regulation and neuronal development was altered after fear conditioning learning in the cortical-LA pathway. The present study provides key information on the identity of genes expressed in discrete thalamic and cortical neurons that project to the LA after fear conditioning. Such an approach could also serve to identify gene products as targets for the development of a new generation of therapeutic agents that could be aimed to functionally identified brain circuits to treat memory-related disorders. © 2014 International Society for Neurochemistry.

  3. Functional imaging of stimulus convergence in amygdalar neurons during Pavlovian fear conditioning.

    Directory of Open Access Journals (Sweden)

    Sabiha K Barot

    2009-07-01

    Associative conditioning is a ubiquitous form of learning throughout the animal kingdom, and fear conditioning is one of the most widely researched models for studying its neurobiological basis. Fear conditioning is also considered a model system for understanding phobias and anxiety disorders. A fundamental issue in fear conditioning concerns the existence and location of neurons in the brain that receive convergent information about the conditioned stimulus (CS) and unconditioned stimulus (US) during the acquisition of conditioned fear memory. Convergent activation of neurons is generally viewed as a key event for fear learning, yet there has been almost no direct evidence of this critical event in the mammalian brain. Here, we used Arc cellular compartmental analysis of temporal gene transcription by fluorescence in situ hybridization (catFISH) to identify neurons activated during single-trial contextual fear conditioning in rats. To conform to the temporal requirements of catFISH analysis we used a novel delayed contextual fear conditioning protocol, which yields significant single-trial fear conditioning with temporal parameters amenable to catFISH analysis. The analysis yielded clear evidence that a population of BLA neurons receives convergent CS and US information at the time of learning, that this occurs only when the CS-US arrangement is supportive of learning, and that this process requires N-methyl-D-aspartate receptor activation. In contrast, CS-US convergence was not observed in dorsal hippocampus. Based on the pattern of Arc activation seen in conditioning and control groups, we propose that a key requirement for CS-US convergence onto BLA neurons is the potentiation of US responding by prior exposure to a novel CS. Our results also support the view that contextual fear memories are encoded in the amygdala and that the role of dorsal hippocampus is to process and transmit contextual CS information.

  4. Circadian and dark-pulse activation of orexin/hypocretin neurons

    Directory of Open Access Journals (Sweden)

    Marston Oliver J

    2008-12-01

    Full Text Available Temporal control of brain and behavioral states emerges as a consequence of the interaction between circadian and homeostatic neural circuits. This interaction permits the daily rhythm of sleep and wake, regulated in parallel by circadian cues originating from the suprachiasmatic nuclei (SCN) and arousal-promoting signals arising from the orexin-containing neurons in the tuberal hypothalamus (TH). Intriguingly, the SCN circadian clock can be reset by arousal-promoting stimuli, while activation of orexin/hypocretin neurons is believed to be under circadian control, suggesting the existence of a reciprocal relationship. Unfortunately, since orexin neurons are themselves activated by locomotor-promoting cues, it is unclear how these two systems interact to regulate behavioral rhythms. Here, mice were placed in conditions of constant light, which suppressed locomotor activity but also revealed a highly pronounced circadian pattern in orexin neuronal activation. Significantly, activation of orexin neurons in the medial and lateral TH occurred prior to the onset of sustained wheel-running activity. Moreover, exposure to a 6 h dark pulse during the subjective day, a stimulus that promotes arousal and phase advances behavioral rhythms, activated neurons in the medial and lateral TH including those containing orexin. Concurrently, this stimulus suppressed SCN activity while activating cells in the median raphe. In contrast, dark pulse exposure during the subjective night did not reset SCN-controlled behavioral rhythms and caused a transient suppression of neuronal activation in the TH. Collectively these results demonstrate, for the first time, pronounced circadian control of orexin neuron activation and implicate recruitment of orexin cells in dark pulse resetting of the SCN circadian clock.

  5. Competitive STDP Learning of Overlapping Spatial Patterns.

    Science.gov (United States)

    Krunglevicius, Dalius

    2015-08-01

    Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited for learning spatiotemporal patterns. When multiple neurons are organized in a simple competitive spiking neural network, this network is capable of learning multiple distinct patterns. If patterns overlap significantly (i.e., patterns are mutually inclusive), however, competition does not preclude a trained neuron from responding to a new pattern and adjusting its synaptic weights accordingly. This letter presents a simple neural network that combines vertical inhibition with a Euclidean distance-dependent synaptic strength factor. This approach helps to solve the problem of pattern-size-dependent parameter optimality and significantly reduces the probability of a neuron's forgetting an already learned pattern. For demonstration purposes, the network was trained on the first ten letters of the Braille alphabet.
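The letter's full competitive network is not reproduced in this record; as a minimal sketch of the pair-based STDP rule it builds on (parameter values here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Pair-based STDP weight update; dt = t_post - t_pre in ms.

    Pre-before-post pairings (dt > 0) potentiate the synapse,
    post-before-pre pairings depress it, both decaying
    exponentially in the timing difference. Parameter values
    are illustrative, not the letter's.
    """
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau_plus)
    else:
        dw = -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, 0.0, w_max))

w = 0.5
w_pot = stdp_update(w, dt=5.0)    # causal pairing: weight increases
w_dep = stdp_update(w, dt=-5.0)   # anti-causal pairing: weight decreases
```

In the competitive setting described above, updates of this kind would run on every afferent of every neuron, with inhibition deciding which neuron fires and hence which neuron learns.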

  6. Characterization of energy and neurotransmitter metabolism in cortical glutamatergic neurons derived from human induced pluripotent stem cells: A novel approach to study metabolism in human neurons.

    Science.gov (United States)

    Aldana, Blanca I; Zhang, Yu; Lihme, Maria Fog; Bak, Lasse K; Nielsen, Jørgen E; Holst, Bjørn; Hyttel, Poul; Freude, Kristine K; Waagepetersen, Helle S

    2017-06-01

    Alterations in the cellular metabolic machinery of the brain are associated with neurodegenerative disorders such as Alzheimer's disease. Novel human cellular disease models are essential in order to study underlying disease mechanisms. In the present study, we characterized major metabolic pathways in neurons derived from human induced pluripotent stem cells (hiPSC). With this aim, cultures of hiPSC-derived neurons were incubated with [U-13C]glucose, [U-13C]glutamate or [U-13C]glutamine. Isotopic labeling in metabolites was determined using gas chromatography coupled to mass spectrometry, and cellular amino acid content was quantified by high-performance liquid chromatography. Additionally, we evaluated mitochondrial function using real-time assessment of oxygen consumption via the Seahorse XFe96 Analyzer. Moreover, in order to validate the hiPSC-derived neurons as a model system, metabolic profiling was performed in parallel in primary neuronal cultures of mouse cerebral cortex and cerebellum. These serve as well-established models of GABAergic and glutamatergic neurons, respectively. The hiPSC-derived neurons were previously characterized as being forebrain-specific cortical glutamatergic neurons. However, a comparable preparation of predominantly mouse cortical glutamatergic neurons is not available. We found a higher glycolytic capacity in hiPSC-derived neurons compared to mouse neurons and a substantial oxidative metabolism through the mitochondrial tricarboxylic acid (TCA) cycle. This finding is supported by the extracellular acidification and oxygen consumption rates measured in the cultured human neurons. [U-13C]Glutamate and [U-13C]glutamine were found to be efficient energy substrates for the neuronal cultures originating from both mice and humans. Interestingly, isotopic labeling in metabolites from [U-13C]glutamate was higher than that from [U-13C]glutamine. Although the metabolic profile of hiPSC-derived neurons in vitro was

  7. Human temporal cortical single neuron activity during working memory maintenance.

    Science.gov (United States)

    Zamora, Leona; Corina, David; Ojemann, George

    2016-06-01

    The Working Memory model of human memory, first introduced by Baddeley and Hitch (1974), has been one of the most influential psychological constructs in cognitive psychology and human neuroscience. However, the neuronal correlates of core components of this model have yet to be fully elucidated. Here we present data from two studies in which human temporal cortical single neuron activity was recorded during tasks differentially affecting the maintenance component of verbal working memory. In Study One we vary the presence or absence of distracting items for the entire period of memory storage. In Study Two we vary the duration of storage so that distractors filled all, or only one-third, of the time the memory was stored. Extracellular single neuron recordings were obtained from 36 subjects undergoing awake temporal lobe resections for epilepsy: 25 in Study One, 11 in Study Two. Recordings were obtained from a total of 166 lateral temporal cortex neurons during performance of one of these two tasks: 86 in Study One, 80 in Study Two. Significant changes in activity with distractor manipulation were present in 74 of these neurons (45%): 38 in Study One, 36 in Study Two. In 48 (65%) of those there was increased activity during the period when distracting items were absent: 26 in Study One, 22 in Study Two. The magnitude of this increase was greater for Study One (47.6%) than Study Two (8.1%), paralleling the reduction in memory errors in the absence of distractors: 70.3% for Study One, 26.3% for Study Two. These findings establish that human lateral temporal cortex is part of the neural system for working memory, with activity during maintenance of that memory that parallels performance, suggesting it represents active rehearsal. In 31 of these neurons (65%) this activity was an extension of that during working memory encoding that differed significantly from the neural processes recorded during overt and silent language tasks without a recent memory component: 17 in Study One, 14 in Study Two.

  8. Human Temporal Cortical Single Neuron Activity During Working Memory Maintenance

    Science.gov (United States)

    Zamora, Leona; Corina, David; Ojemann, George

    2016-01-01

    The Working Memory model of human memory, first introduced by Baddeley and Hitch (1974), has been one of the most influential psychological constructs in cognitive psychology and human neuroscience. However, the neuronal correlates of core components of this model have yet to be fully elucidated. Here we present data from two studies in which human temporal cortical single neuron activity was recorded during tasks differentially affecting the maintenance component of verbal working memory. In Study One we vary the presence or absence of distracting items for the entire period of memory storage. In Study Two we vary the duration of storage so that distractors filled all, or only one-third, of the time the memory was stored. Extracellular single neuron recordings were obtained from 36 subjects undergoing awake temporal lobe resections for epilepsy: 25 in Study One, 11 in Study Two. Recordings were obtained from a total of 166 lateral temporal cortex neurons during performance of one of these two tasks: 86 in Study One, 80 in Study Two. Significant changes in activity with distractor manipulation were present in 74 of these neurons (45%): 38 in Study One, 36 in Study Two. In 48 (65%) of those there was increased activity during the period when distracting items were absent: 26 in Study One, 22 in Study Two. The magnitude of this increase was greater for Study One (47.6%) than Study Two (8.1%), paralleling the reduction in memory errors in the absence of distractors: 70.3% for Study One, 26.3% for Study Two. These findings establish that human lateral temporal cortex is part of the neural system for working memory, with activity during maintenance of that memory that parallels performance, suggesting it represents active rehearsal. In 31 of these neurons (65%) this activity was an extension of that during working memory encoding that differed significantly from the neural processes recorded during overt and silent language tasks without a recent memory component: 17 in Study One, 14 in Study Two.

  9. CNF1 improves astrocytic ability to support neuronal growth and differentiation in vitro.

    Directory of Open Access Journals (Sweden)

    Fiorella Malchiodi-Albedi

    Full Text Available Modulation of cerebral Rho GTPases activity in mouse brain by intracerebral administration of Cytotoxic Necrotizing Factor 1 (CNF1) leads to enhanced neurotransmission and synaptic plasticity and improves learning and memory. To gain more insight into the interactions between CNF1 and neuronal cells, we used primary neuronal and astrocytic cultures from rat embryonic brain to study CNF1 effects on neuronal differentiation, focusing on dendritic tree growth and synapse formation, which are strictly modulated by Rho GTPases. CNF1 profoundly remodeled the cytoskeleton of hippocampal and cortical neurons, which showed filopodia-like, actin-positive projections, thickened and poorly branched dendrites, and a decrease in synapse number. CNF1 removal, however, restored dendritic tree development and synapse formation, suggesting that the toxin can reversibly block neuronal differentiation. On differentiated neurons, CNF1 had a similar effacing effect on synapses. Therefore, a direct interaction with CNF1 is apparently deleterious for neurons. Since astrocytes play a pivotal role in neuronal differentiation and synaptic regulation, we wondered if the beneficial in vivo effect could be mediated by astrocytes. Primary astrocytes from embryonic cortex were treated with CNF1 for 48 hours and used as a substrate for growing hippocampal neurons. Such neurons showed an increased development of neurites, with respect to age-matched controls, with a wider dendritic tree and a richer content in synapses. In CNF1-exposed astrocytes, the production of interleukin 1β, known to reduce dendrite development and complexity in neuronal cultures, was decreased. These results demonstrate that astrocytes, under the influence of CNF1, increase their supporting activity on neuronal growth and differentiation, possibly related to the diminished levels of interleukin 1β. These observations suggest that the enhanced synaptic plasticity and improved learning and memory described

  10. CNF1 Improves Astrocytic Ability to Support Neuronal Growth and Differentiation In vitro

    Science.gov (United States)

    Malchiodi-Albedi, Fiorella; Paradisi, Silvia; Di Nottia, Michela; Simone, Daiana; Travaglione, Sara; Falzano, Loredana; Guidotti, Marco; Frank, Claudio; Cutarelli, Alessandro; Fabbri, Alessia; Fiorentini, Carla

    2012-01-01

    Modulation of cerebral Rho GTPases activity in mouse brain by intracerebral administration of Cytotoxic Necrotizing Factor 1 (CNF1) leads to enhanced neurotransmission and synaptic plasticity and improves learning and memory. To gain more insight into the interactions between CNF1 and neuronal cells, we used primary neuronal and astrocytic cultures from rat embryonic brain to study CNF1 effects on neuronal differentiation, focusing on dendritic tree growth and synapse formation, which are strictly modulated by Rho GTPases. CNF1 profoundly remodeled the cytoskeleton of hippocampal and cortical neurons, which showed filopodia-like, actin-positive projections, thickened and poorly branched dendrites, and a decrease in synapse number. CNF1 removal, however, restored dendritic tree development and synapse formation, suggesting that the toxin can reversibly block neuronal differentiation. On differentiated neurons, CNF1 had a similar effacing effect on synapses. Therefore, a direct interaction with CNF1 is apparently deleterious for neurons. Since astrocytes play a pivotal role in neuronal differentiation and synaptic regulation, we wondered if the beneficial in vivo effect could be mediated by astrocytes. Primary astrocytes from embryonic cortex were treated with CNF1 for 48 hours and used as a substrate for growing hippocampal neurons. Such neurons showed an increased development of neurites, with respect to age-matched controls, with a wider dendritic tree and a richer content in synapses. In CNF1-exposed astrocytes, the production of interleukin 1β, known to reduce dendrite development and complexity in neuronal cultures, was decreased. These results demonstrate that astrocytes, under the influence of CNF1, increase their supporting activity on neuronal growth and differentiation, possibly related to the diminished levels of interleukin 1β. These observations suggest that the enhanced synaptic plasticity and improved learning and memory described in CNF1-injected

  11. Direct effects of endogenous pyrogen on medullary temperature-responsive neurons in rabbits.

    Science.gov (United States)

    Sakata, Y; Morimoto, A; Takase, Y; Murakami, N

    1981-01-01

    The effects of endogenous pyrogen (E.P.), injected directly into the tissue near the recording site, on the activities of medullary temperature-responsive (TR) neurons were examined in rabbits anesthetized with urethane. Endogenous pyrogen prepared from rabbit whole blood was administered through a fine glass cannula (100-200 micrometers in diameter) in a fluid volume of 1 to 4 microliters. The cannula was fixed to the manipulator in parallel with a microelectrode, and their tips were less than 0.05 mm apart. In rabbits with the intact preoptic/anterior hypothalamic (PO/AH) region, 4 warm-responsive neurons out of 7 were inhibited and 6 cold-responsive neurons out of 7 were excited by the direct administration of the E.P. In rabbits with lesions of the PO/AH, 5 warm-responsive neurons out of 9 were inhibited and 6 cold-responsive neurons out of 8 were facilitated by E.P. Antipyretics administered locally after the E.P. antagonized the pyretic effect, causing a return of the discharge of TR neurons to the control rate within 2.4 +/- 1.2 (mean +/- S.D.) min. Thus, medullary TR neurons themselves have the ability to respond to the E.P. and contribute to the development of fever.

  12. Cell-Specific Cholinergic Modulation of Excitability of Layer 5B Principal Neurons in Mouse Auditory Cortex

    Science.gov (United States)

    Joshi, Ankur; Kalappa, Bopanna I.; Anderson, Charles T.

    2016-01-01

    The neuromodulator acetylcholine (ACh) is crucial for several cognitive functions, such as perception, attention, and learning and memory. Although in most cases the cellular circuits or the specific neurons via which ACh exerts its cognitive effects remain unknown, it is known that auditory cortex (AC) neurons projecting from layer 5B (L5B) to the inferior colliculus, corticocollicular neurons, are required for cholinergic-mediated relearning of sound localization after occlusion of one ear. Therefore, elucidation of the effects of ACh on the excitability of corticocollicular neurons will bridge the cell-specific and cognitive properties of ACh. Because AC L5B also contains corticocallosal neurons, which project to the contralateral cortex, we investigated the effects of ACh release on both L5B corticocallosal and corticocollicular neurons to identify the cell-specific mechanisms that enable corticocollicular neurons to participate in sound localization relearning. Using in vitro electrophysiology and optogenetics in mouse brain slices, we found that ACh generated nicotinic ACh receptor (nAChR)-mediated depolarizing potentials and muscarinic ACh receptor (mAChR)-mediated hyperpolarizing potentials in AC L5B corticocallosal neurons. In corticocollicular neurons, ACh release also generated nAChR-mediated depolarizing potentials. However, in contrast to the mAChR-mediated hyperpolarizing potentials in corticocallosal neurons, ACh generated prolonged mAChR-mediated depolarizing potentials in corticocollicular neurons. These prolonged depolarizing potentials generated persistent firing in corticocollicular neurons, whereas corticocallosal neurons lacking mAChR-mediated depolarizing potentials did not show persistent firing. We propose that ACh-mediated persistent firing in corticocollicular neurons may represent a critical mechanism required for learning-induced plasticity in AC.
SIGNIFICANCE STATEMENT Acetylcholine (ACh) is crucial for cognitive

  13. Self-other relations in social development and autism: multiple roles for mirror neurons and other brain bases.

    Science.gov (United States)

    Williams, Justin H G

    2008-04-01

    Mirror neuron system dysfunction may underlie a self-other matching impairment, which has previously been suggested to account for autism. Embodied Cognition Theory, which proposes that action provides a foundation for cognition, has lent further credence to these ideas. The hypotheses of a self-other matching deficit and impaired mirror neuron function in autism have now been well supported by studies employing a range of methodologies. However, underlying mechanisms require further exploration to explain how mirror neurons may be involved in attentional and mentalizing processes. Impairments in self-other matching and mirror neuron function are not necessarily inextricably linked, and it seems possible that different sub-populations of mirror neurons, located in several regions, contribute differentially to social cognitive functions. It is hypothesized that mirror neuron coding for action-direction may be required for developing attentional sensitivity to self-directed actions, and consequently for person-oriented, stimulus-driven attention. Mirror neuron networks may vary for different types of social learning such as "automatic" imitation and imitation learning. Imitation learning may be more reliant on self-other comparison processes (based on mirror neurons) that identify differences as well as similarities between actions. Differential connectivity with the amygdala-orbitofrontal system may also be important. This could have implications for developing "theory of mind," with intentional self-other comparison being relevant to meta-representational abilities, and "automatic" imitation being more relevant to empathy. While it seems clear that autism is associated with impaired development of embodied aspects of cognition, the ways that mirror neurons contribute to these brain-behavior links are likely to be complex.

  14. [The mirror neuron system in motor and sensory rehabilitation].

    Science.gov (United States)

    Oouchida, Yutaka; Izumi, Shinichi

    2014-06-01

    The discovery of the mirror neuron system has dramatically changed the study of motor control in neuroscience. The mirror neuron system provides a conceptual framework covering the motor as well as the sensory aspects of motor control. Previous studies of motor control can be classified as studies of either motor or sensory functions, and these two classes of studies appear to have advanced independently. In rehabilitation requiring motor learning, however, such as relearning movement after limb paresis, both the motor command and sensory feedback on motor output are essential. In rehabilitation from chronic pain, motor exercise is one of the most effective treatments for pain caused by dysfunction in the sensory system. In rehabilitation where a total intervention unifying the motor and sensory aspects of motor control is important, learning through imitation, which is associated with the mirror neuron system, can be effective and suitable. In this paper, we introduce clinical applications of imitated movement in rehabilitation from motor impairment after brain damage and from phantom limb pain after limb amputation.

  15. Growth of large patterned arrays of neurons using plasma methods

    International Nuclear Information System (INIS)

    Brown, I G; Bjornstad, K A; Blakely, E A; Galvin, J E; Monteiro, O R; Sangyuenyongpipat, S

    2003-01-01

    To understand how large systems of neurons communicate, we need to develop, among other things, methods for growing patterned networks of large numbers of neurons. Success with this challenge will be important to our understanding of how the brain works, as well as to the development of novel kinds of computer architecture that may parallel the organization of the brain. We have investigated the use of metal ion implantation with a vacuum-arc ion source, and plasma deposition with a filtered vacuum-arc system, as means of forming regions of selective neuronal attachment on surfaces. Lithographic patterns created by treating the surface with ion species that enhance or inhibit neuronal cell attachment allow subsequent proliferation and/or differentiation of the neurons to form desired patterned neural arrays. In the work described here, we used glass microscope slides as substrates, and some of the experiments made use of simple masks to form patterns of ion-beam- or plasma-deposition-treated regions. PC-12 rat neurons were then cultured on the treated substrates coated with Type I Collagen, and their growth and differentiation were monitored. Particularly good selective growth was obtained using plasma deposition of diamond-like carbon films of about one hundred Angstroms thickness. Neuron proliferation and the elaboration of dendrites and axons after the addition of nerve growth factor both showed excellent contrast, with prolific growth and differentiation on the treated surfaces and very low growth on the untreated surfaces.

  16. Growth of large patterned arrays of neurons using plasma methods

    Energy Technology Data Exchange (ETDEWEB)

    Brown, I G; Bjornstad, K A; Blakely, E A; Galvin, J E; Monteiro, O R; Sangyuenyongpipat, S [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2003-05-01

    To understand how large systems of neurons communicate, we need to develop, among other things, methods for growing patterned networks of large numbers of neurons. Success with this challenge will be important to our understanding of how the brain works, as well as to the development of novel kinds of computer architecture that may parallel the organization of the brain. We have investigated the use of metal ion implantation with a vacuum-arc ion source, and plasma deposition with a filtered vacuum-arc system, as means of forming regions of selective neuronal attachment on surfaces. Lithographic patterns created by treating the surface with ion species that enhance or inhibit neuronal cell attachment allow subsequent proliferation and/or differentiation of the neurons to form desired patterned neural arrays. In the work described here, we used glass microscope slides as substrates, and some of the experiments made use of simple masks to form patterns of ion-beam- or plasma-deposition-treated regions. PC-12 rat neurons were then cultured on the treated substrates coated with Type I Collagen, and their growth and differentiation were monitored. Particularly good selective growth was obtained using plasma deposition of diamond-like carbon films of about one hundred Angstroms thickness. Neuron proliferation and the elaboration of dendrites and axons after the addition of nerve growth factor both showed excellent contrast, with prolific growth and differentiation on the treated surfaces and very low growth on the untreated surfaces.

  17. Mechanisms underlying the social enhancement of vocal learning in songbirds.

    Science.gov (United States)

    Chen, Yining; Matheson, Laura E; Sakata, Jon T

    2016-06-14

    Social processes profoundly influence speech and language acquisition. Despite the importance of social influences, little is known about how social interactions modulate vocal learning. Like humans, songbirds learn their vocalizations during development, and they provide an excellent opportunity to reveal mechanisms of social influences on vocal learning. Using yoked experimental designs, we demonstrate that social interactions with adult tutors for as little as 1 d significantly enhanced vocal learning. Social influences on attention to song seemed central to the social enhancement of learning because socially tutored birds were more attentive to the tutor's songs than passively tutored birds, and because variation in attentiveness and in the social modulation of attention significantly predicted variation in vocal learning. Attention to song was influenced by both the nature and amount of tutor song: Pupils paid more attention to songs that tutors directed at them and to tutors that produced fewer songs. Tutors altered their song structure when directing songs at pupils in a manner that resembled how humans alter their vocalizations when speaking to infants, that was distinct from how tutors changed their songs when singing to females, and that could influence attention and learning. Furthermore, social interactions that rapidly enhanced learning increased the activity of noradrenergic and dopaminergic midbrain neurons. These data highlight striking parallels between humans and songbirds in the social modulation of vocal learning and suggest that social influences on attention and midbrain circuitry could represent shared mechanisms underlying the social modulation of vocal learning.

  18. Controlling Chaos in Neuron Based on Lasalle Invariance Principle

    International Nuclear Information System (INIS)

    Wei Duqu; Qin Yinghua

    2011-01-01

    A new control law is proposed to asymptotically stabilize the chaotic neuron system based on the LaSalle invariance principle. The control technique does not require analytical knowledge of the system dynamics and operates without explicit knowledge of the desired steady-state position. The well-known modified Hodgkin-Huxley (MHH) and Hindmarsh-Rose (HR) model neurons are taken as examples to verify the implementation of our method. Simulation results show that the proposed control law is effective. The outcome of this study is significant, since it helps in understanding how the human brain processes information, stores memories, and produces abnormal neuronal discharges. (general)
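The Hindmarsh-Rose model cited as a test system can be simulated directly; below is a minimal forward-Euler sketch with standard parameter values (the proposed control law itself is not reproduced in this record):

```python
import numpy as np

def hindmarsh_rose(state, I_ext, a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.006, s=4.0, x_rest=-1.6):
    """Right-hand side of the Hindmarsh-Rose neuron model.

    x: membrane potential, y: fast recovery variable,
    z: slow adaptation current. Standard parameter values.
    """
    x, y, z = state
    dx = y - a * x**3 + b * x**2 - z + I_ext
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)
    return np.array([dx, dy, dz])

# Forward-Euler integration; I_ext = 3.0 puts the model in a bursting
# regime, the kind of irregular activity a stabilizing controller targets.
dt, steps = 0.01, 50000
state = np.array([-1.6, 4.0, 2.75])
trace = np.empty(steps)
for i in range(steps):
    state = state + dt * hindmarsh_rose(state, I_ext=3.0)
    trace[i] = state[0]
```

A controller of the kind proposed would add a feedback term to `dx` and drive the trajectory toward a steady state instead of the bursts recorded in `trace`.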

  19. Learning and structure of neuronal networks

    Indian Academy of Sciences (India)

    We study the effect of learning dynamics on network topology. Firstly, a network of discrete dynamical systems is considered for this purpose and the coupling strengths are made to evolve according to a temporal learning rule that is based on the paradigm of spike-time-dependent plasticity (STDP). This incorporates ...
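The record is truncated, but the setup it describes, coupling strengths of a network of discrete dynamical systems evolving under a temporal STDP-like rule, can be illustrated with a toy sketch. The specific update rule below is an assumed analogue for illustration, not the paper's rule:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

# Coupled logistic maps with an evolving coupling matrix g (entry
# g[i, j] is the strength of the link j -> i).
g = rng.uniform(0.0, 0.1, size=(n, n))
np.fill_diagonal(g, 0.0)
x = rng.uniform(size=n)

def step(x, g, mu=3.9, eps=0.2):
    """One iteration: local logistic dynamics plus weighted coupling."""
    f = mu * x * (1.0 - x)
    coupling = g @ f / np.maximum(g.sum(axis=1), 1e-12)
    return (1.0 - eps) * f + eps * coupling

eta = 0.001            # learning rate for the coupling strengths
prev = x.copy()
for _ in range(2000):
    new = step(x, g)
    dpre = x - prev    # earlier activity change ("presynaptic" role)
    dpost = new - x    # later activity change ("postsynaptic" role)
    # STDP-like rule: strengthen j -> i when j's change precedes a
    # like-signed change in i, weaken it otherwise.
    g += eta * np.outer(np.sign(dpost), np.sign(dpre))
    np.clip(g, 0.0, 0.2, out=g)
    np.fill_diagonal(g, 0.0)
    prev, x = x, new

# Thresholding the evolved couplings reads out a network topology.
adj = g > 0.1
```

Thresholding of this kind is one simple way to extract the topology that the learning dynamics has shaped, which is the relationship the paper studies.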

  20. Piriform cortical glutamatergic and GABAergic neurons express coordinated plasticity for whisker-induced odor recall.

    Science.gov (United States)

    Liu, Yahui; Gao, Zilong; Chen, Changfeng; Wen, Bo; Huang, Li; Ge, Rongjing; Zhao, Shidi; Fan, Ruichen; Feng, Jing; Lu, Wei; Wang, Liping; Wang, Jin-Hui

    2017-11-10

    Neural plasticity occurs during learning and memory. How plasticity is coordinated between glutamatergic and GABAergic neurons during memory formation remains elusive; we investigate this in a mouse model of associative learning by cellular imaging and electrophysiology. Paired odor and whisker stimulations lead to a whisker-induced olfactory response. In mice that express this cross-modal memory, neurons in the piriform cortex are recruited to encode the newly acquired whisker signal alongside the innate odor signal, and their response patterns to these associated signals differ. New synaptic innervations from barrel cortical neurons onto piriform cortical neurons emerge in these mice. These results indicate the recruitment of associative memory cells in the piriform cortex after associative memory formation. In terms of the structural and functional plasticity at these associative memory cells in the piriform cortex, glutamatergic neurons and synapses are upregulated, GABAergic neurons and synapses are downregulated, and their mutual innervations are refined in a coordinated manner. Therefore, the associated activations of sensory cortices triggered by their input signals induce the formation of mutual synaptic innervations, the recruitment of associative memory cells, and coordinated plasticity between GABAergic and glutamatergic neurons, which enable associative memory cells to integrate, store, and distinguishably retrieve cross-modal associated signals.

  1. Lactate promotes plasticity gene expression by potentiating NMDA signaling in neurons

    KAUST Repository

    Yang, Jiangyan; Ruchti, Evelyne; Petit, Jean Marie; Jourdain, Pascal; Grenningloh, Gabriele; Allaman, Igor; Magistretti, Pierre J.

    2014-01-01

    L-lactate is a product of aerobic glycolysis that can be used by neurons as an energy substrate. Here we report that in neurons L-lactate stimulates the expression of synaptic plasticity-related genes such as Arc, c-Fos, and Zif268 through a mechanism involving NMDA receptor activity and its downstream signaling cascade Erk1/2. L-lactate potentiates NMDA receptor-mediated currents and the ensuing increase in intracellular calcium. In parallel to this, L-lactate increases intracellular levels of NADH, thereby modulating the redox state of neurons. NADH mimics all of the effects of L-lactate on NMDA signaling, pointing to NADH increase as a primary mediator of L-lactate effects. The induction of plasticity genes is observed both in mouse primary neurons in culture and in vivo in the mouse sensory-motor cortex. These results provide insights for the understanding of the molecular mechanisms underlying the critical role of astrocyte-derived L-lactate in long-term memory and long-term potentiation in vivo. This set of data reveals a previously unidentified action of L-lactate as a signaling molecule for neuronal plasticity.

  2. Lactate promotes plasticity gene expression by potentiating NMDA signaling in neurons

    KAUST Repository

    Yang, Jiangyan

    2014-07-28

    L-lactate is a product of aerobic glycolysis that can be used by neurons as an energy substrate. Here we report that in neurons L-lactate stimulates the expression of synaptic plasticity-related genes such as Arc, c-Fos, and Zif268 through a mechanism involving NMDA receptor activity and its downstream signaling cascade Erk1/2. L-lactate potentiates NMDA receptor-mediated currents and the ensuing increase in intracellular calcium. In parallel to this, L-lactate increases intracellular levels of NADH, thereby modulating the redox state of neurons. NADH mimics all of the effects of L-lactate on NMDA signaling, pointing to NADH increase as a primary mediator of L-lactate effects. The induction of plasticity genes is observed both in mouse primary neurons in culture and in vivo in the mouse sensory-motor cortex. These results provide insights for the understanding of the molecular mechanisms underlying the critical role of astrocyte-derived L-lactate in long-term memory and long-term potentiation in vivo. This set of data reveals a previously unidentified action of L-lactate as a signaling molecule for neuronal plasticity.

  3. Machine Learning Analysis Identifies Drosophila Grunge/Atrophin as an Important Learning and Memory Gene Required for Memory Retention and Social Learning.

    Science.gov (United States)

    Kacsoh, Balint Z; Greene, Casey S; Bosco, Giovanni

    2017-11-06

    High-throughput experiments are becoming increasingly common, and scientists must balance hypothesis-driven experiments with genome-wide data acquisition. We sought to predict novel genes involved in Drosophila learning and long-term memory from existing public high-throughput data. We performed an analysis using PILGRM, which analyzes public gene expression compendia using machine learning. We evaluated the top prediction alongside genes involved in learning and memory in IMP, an interface for functional relationship networks. We identified Grunge/Atrophin (Gug/Atro), a transcriptional repressor and histone deacetylase, as our top candidate. We find, through multiple, distinct assays, that Gug has an active role as a modulator of memory retention in the fly and its function is required in the adult mushroom body. Depletion of Gug specifically in neurons of the adult mushroom body, after cell division and neuronal development is complete, suggests that Gug function is important for memory retention through regulation of neuronal activity, and not by altering neurodevelopment. Our study provides a previously uncharacterized role for Gug as a possible regulator of neuronal plasticity at the interface of memory retention and memory extinction. Copyright © 2017 Kacsoh et al.

  4. Spatial patterns of FUS-immunoreactive neuronal cytoplasmic inclusions (NCI) in neuronal intermediate filament inclusion disease (NIFID).

    Science.gov (United States)

    Armstrong, Richard A; Gearing, Marla; Bigio, Eileen H; Cruz-Sanchez, Felix F; Duyckaerts, Charles; Mackenzie, Ian R A; Perry, Robert H; Skullerud, Kari; Yokoo, Hideaki; Cairns, Nigel J

    2011-11-01

    Neuronal intermediate filament inclusion disease (NIFID), a rare form of frontotemporal lobar degeneration (FTLD), is characterized neuropathologically by focal atrophy of the frontal and temporal lobes, neuronal loss, gliosis, and neuronal cytoplasmic inclusions (NCI) containing epitopes of ubiquitin and neuronal intermediate filament (IF) proteins. Recently, the 'fused in sarcoma' (FUS) protein (encoded by the FUS gene) has been shown to be a component of the inclusions of NIFID. To further characterize FUS proteinopathy in NIFID, we studied the spatial patterns of the FUS-immunoreactive NCI in frontal and temporal cortex of 10 cases. In the cerebral cortex, sectors CA1/2 of the hippocampus, and the dentate gyrus (DG), the FUS-immunoreactive NCI were frequently clustered, and the clusters were regularly distributed parallel to the tissue boundary. In a proportion of cortical gyri, the cluster size of the NCI approximated that of the columns of cells associated with the cortico-cortical projections. There were no significant differences in the frequency of the different types of spatial patterns with disease duration or disease stage. Clusters of NCI in the upper and lower cortex were significantly larger using FUS compared with phosphorylated neurofilament heavy polypeptide (NEFH) or α-internexin (INA) immunohistochemistry (IHC). We concluded: (1) FUS-immunoreactive NCI exhibit spatial patterns similar to those of analogous inclusions in the tauopathies and synucleinopathies; (2) clusters of FUS-immunoreactive NCI are larger than those revealed by NEFH or INA; and (3) the spatial patterns of the FUS-immunoreactive NCI suggest degeneration of the cortico-cortical projections in NIFID.

  5. Glutamate transporter activity promotes enhanced Na+/K+-ATPase -mediated extracellular K+ management during neuronal activity

    DEFF Research Database (Denmark)

    Larsen, Brian R; Holm, Rikke; Vilsen, Bente

    2016-01-01

    ..., in addition, Na+/K+-ATPase-mediated K+ clearance could be governed by astrocytic [Na+]i. During most neuronal activity, glutamate is released in the synaptic cleft and is re-absorbed by astrocytic Na+-coupled glutamate transporters, thereby elevating [Na+]i. It thus remains unresolved whether ... the different Na+/K+-ATPase isoforms are controlled by [K+]o or [Na+]i during neuronal activity. Hippocampal slice recordings of stimulus-induced [K+]o transients with ion-sensitive microelectrodes revealed reduced Na+/K+-ATPase-mediated K+ management upon parallel inhibition of the glutamate transporter ... + affinity to the α1 and α2 isoforms than the β2 isoform. In summary, enhanced astrocytic Na+/K+-ATPase-dependent K+ clearance was obtained with parallel glutamate transport activity. The astrocytic Na+/K+-ATPase isoform constellation α2β1 appeared to be specifically geared to respond to the [Na+]i ...

  6. Computer model of a reverberant and parallel circuit coupling

    Science.gov (United States)

    Kalil, Camila de Andrade; de Castro, Maria Clícia Stelling; Cortez, Célia Martins

    2017-11-01

    The objective of the present study was to deepen knowledge about the functioning of neural circuits by implementing a signal transmission model, based on Graph Theory, in a small network of neurons composed of an interconnected reverberant and parallel circuit, in order to investigate the processing of the signals in each of them and the effects on the output of the network. For this, a program was developed in the C language and simulations were performed using neurophysiological data obtained from the literature.
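The circuit described in this record lends itself to a compact sketch. Below is a hypothetical Python analogue (the original program was written in C with neurophysiological data; the topology, weights, and linear update rule here are illustrative assumptions): three parallel paths fan out from an input neuron and reconverge, while a feedback edge closes the reverberant loop, and the signal is propagated over the weighted graph.

```python
def step(activity, weights, decay=0.5):
    """One synchronous update: each neuron keeps a decayed fraction of its
    activity and adds the weighted input from its presynaptic neighbors."""
    return {post: decay * activity[post]
                  + sum(w * activity[pre]
                        for (pre, p), w in weights.items() if p == post)
            for post in activity}

# Parallel circuit: A fans out to B1..B3, which reconverge on C.
# Reverberant circuit: C feeds back to A, closing the loop.
weights = {
    ("A", "B1"): 0.4, ("A", "B2"): 0.4, ("A", "B3"): 0.4,
    ("B1", "C"): 0.3, ("B2", "C"): 0.3, ("B3", "C"): 0.3,
    ("C", "A"): 0.2,
}
activity = {"A": 1.0, "B1": 0.0, "B2": 0.0, "B3": 0.0, "C": 0.0}

for _ in range(10):   # propagate an initial pulse through the network
    activity = step(activity, weights)
```

After a few steps the output neuron C carries activity sustained partly through the feedback edge, which is the qualitative behavior the graph model is meant to expose.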

  7. Neural mechanisms of vocal imitation: The role of sleep replay in shaping mirror neurons.

    Science.gov (United States)

    Giret, Nicolas; Edeline, Jean-Marc; Del Negro, Catherine

    2017-06-01

    Learning by imitation involves not only perceiving another individual's action to copy it, but also the formation of a memory trace in order to gradually establish a correspondence between the sensory and motor codes, which represent this action through sensorimotor experience. Memory and sensorimotor processes are closely intertwined. Mirror neurons, which fire both when the same action is performed or perceived, have received considerable attention in the context of imitation. An influential view of memory processes considers that the consolidation of newly acquired information or skills involves an active offline reprocessing of memories during sleep within the neuronal networks that were initially used for encoding. Here, we review the recent advances in the field of mirror neurons and offline processes in the songbird. We further propose a theoretical framework that could establish the neurobiological foundations of sensorimotor learning by imitation. We propose that the reactivation of neuronal assemblies during offline periods contributes to the integration of sensory feedback information and the establishment of sensorimotor mirroring activity at the neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. CNF1 Improves Astrocytic Ability to Support Neuronal Growth and Differentiation In vitro

    OpenAIRE

    Malchiodi-Albedi, Fiorella; Paradisi, Silvia; Di Nottia, Michela; Simone, Daiana; Travaglione, Sara; Falzano, Loredana; Guidotti, Marco; Frank, Claudio; Cutarelli, Alessandro; Fabbri, Alessia; Fiorentini, Carla

    2012-01-01

    Modulation of cerebral Rho GTPases activity in mice brain by intracerebral administration of Cytotoxic Necrotizing Factor 1 (CNF1) leads to enhanced neurotransmission and synaptic plasticity and improves learning and memory. To gain more insight into the interactions between CNF1 and neuronal cells, we used primary neuronal and astrocytic cultures from rat embryonic brain to study CNF1 effects on neuronal differentiation, focusing on dendritic tree growth and synapse formation, which are stri...

  9. Roles of aminergic neurons in formation and recall of associative memory in crickets

    Directory of Open Access Journals (Sweden)

    Makoto Mizunami

    2010-11-01

    Full Text Available We review recent progress in the study of the roles of octopaminergic (OA-ergic) and dopaminergic (DA-ergic) signaling in insect classical conditioning, focusing on our studies on crickets. Studies on olfactory learning in honey bees and fruit-flies have suggested that OA-ergic and DA-ergic neurons convey reinforcing signals of the appetitive unconditioned stimulus (US) and aversive US, respectively. Our work suggested that this is applicable to olfactory, visual pattern, and color learning in crickets, indicating that this feature is ubiquitous in learning of various sensory stimuli. We also showed that aversive memory decayed much faster than did appetitive memory, and we proposed that this feature is common in insects and humans. Our study also suggested that activation of OA- or DA-ergic neurons is needed for appetitive or aversive memory recall, respectively. To account for this finding, we proposed a model in which it is assumed that two types of synaptic connections are strengthened by conditioning and are activated during memory recall, one type being connections from neurons representing the conditioned stimulus (CS) to neurons inducing the conditioned response, and the other being connections from neurons representing the CS to OA- or DA-ergic neurons representing the appetitive or aversive US, respectively. The former is called a stimulus-response (S-R) connection and the latter is called a stimulus-stimulus (S-S) connection by theorists studying classical conditioning in vertebrates. Results of our studies using a second-order conditioning procedure supported our model. We propose that insect classical conditioning involves the formation of S-S connections and their activation for memory recall, which are often called cognitive processes.

  10. Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.

    Science.gov (United States)

    Gangopadhyay, Ahana; Chakrabartty, Shantanu

    2017-04-27

    This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general enough, in this paper, we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of ΣΔ modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encodes its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.

  11. Transcriptional profiling at whole population and single cell levels reveals somatosensory neuron molecular diversity

    Science.gov (United States)

    Chiu, Isaac M; Barrett, Lee B; Williams, Erika K; Strochlic, David E; Lee, Seungkyu; Weyer, Andy D; Lou, Shan; Bryman, Gregory S; Roberson, David P; Ghasemlou, Nader; Piccoli, Cara; Ahat, Ezgi; Wang, Victor; Cobos, Enrique J; Stucky, Cheryl L; Ma, Qiufu; Liberles, Stephen D; Woolf, Clifford J

    2014-01-01

    The somatosensory nervous system is critical for the organism's ability to respond to mechanical, thermal, and nociceptive stimuli. Somatosensory neurons are functionally and anatomically diverse but their molecular profiles are not well-defined. Here, we used transcriptional profiling to analyze the detailed molecular signatures of dorsal root ganglion (DRG) sensory neurons. We used two mouse reporter lines and surface IB4 labeling to purify three major non-overlapping classes of neurons: 1) IB4+SNS-Cre/TdTomato+, 2) IB4−SNS-Cre/TdTomato+, and 3) Parv-Cre/TdTomato+ cells, encompassing the majority of nociceptive, pruriceptive, and proprioceptive neurons. These neurons displayed distinct expression patterns of ion channels, transcription factors, and GPCRs. Highly parallel qRT-PCR analysis of 334 single neurons selected by membership of the three populations demonstrated further diversity, with unbiased clustering analysis identifying six distinct subgroups. These data significantly increase our knowledge of the molecular identities of known DRG populations and uncover potentially novel subsets, revealing the complexity and diversity of those neurons underlying somatosensation. DOI: http://dx.doi.org/10.7554/eLife.04660.001 PMID:25525749

  12. How attention can create synaptic tags for the learning of working memories in sequential tasks

    OpenAIRE

    Rombouts, Jaldert O; Bohte, Sander M; Roelfsema, Pieter R

    2015-01-01

    Intelligence is our ability to learn appropriate responses to new stimuli and situations. Neurons in association cortex are thought to be essential for this ability. During learning these neurons become tuned to relevant features and start to represent them with persistent activity during memory delays. This learning process is not well understood. Here we develop a biologically plausible learning scheme that explains how trial-and-error learning induces neuronal selectivity and w...

  13. Astrocytic actions on extrasynaptic neuronal currents

    Directory of Open Access Journals (Sweden)

    Balazs ePal

    2015-12-01

    Full Text Available In the last few decades, knowledge about astrocytic functions has significantly increased. It was demonstrated that astrocytes are not passive elements of the central nervous system, but active partners of neurons. There is a growing body of knowledge about the calcium excitability of astrocytes, the actions of different gliotransmitters and their release mechanisms, as well as the participation of astrocytes in the regulation of synaptic functions and their contribution to synaptic plasticity. However, astrocytic functions are even more complex than being a partner of the 'tripartite synapse', as they can influence extrasynaptic neuronal currents either by releasing substances or regulating ambient neurotransmitter levels. Several types of currents or changes of membrane potential with different kinetics and via different mechanisms can be elicited by astrocytic activity. Astrocyte-dependent phasic or tonic, inward or outward currents were described in several brain areas. Such currents, together with the synaptic actions of astrocytes, can contribute to neuromodulatory mechanisms, neurosensory and –secretory processes, cortical oscillatory activity, memory and learning or overall neuronal excitability. This mini-review is an attempt to give a brief summary of astrocyte-dependent extrasynaptic neuronal currents and their possible functional significance.

  14. Cooperation-Controlled Learning for Explicit Class Structure in Self-Organizing Maps

    Science.gov (United States)

    Kamimura, Ryotaro

    2014-01-01

    We attempt to demonstrate the effectiveness of multiple points of view toward neural networks. By restricting ourselves to two points of view of a neuron, we propose a new type of information-theoretic method called “cooperation-controlled learning.” In this method, individual and collective neurons are distinguished from one another, and we suppose that the characteristics of individual and collective neurons are different. To implement individual and collective neurons, we prepare two networks, namely, cooperative and uncooperative networks. The roles of these networks and the roles of individual and collective neurons are controlled by the cooperation parameter. As the parameter is increased, the role of cooperative networks becomes more important in learning, and the characteristics of collective neurons become more dominant. On the other hand, when the parameter is small, individual neurons play a more important role. We applied the method to the automobile and housing data from the machine learning database and examined whether explicit class boundaries could be obtained. Experimental results showed that cooperation-controlled learning, in particular taking into account information on input units, could be used to produce clearer class structure than conventional self-organizing maps. PMID:25309950
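As a rough illustration of the idea above, here is a toy 1-D self-organizing map in Python in which a single cooperation parameter blends a winner-only ("individual neuron") update with a Gaussian-neighborhood ("collective neuron") update. This is a simplified stand-in for the authors' two-network, information-theoretic method; every parameter and the blending rule are invented for the example.

```python
import math, random

def train_som(data, n_units=5, alpha=0.3, coop=0.5, epochs=50, seed=0):
    """Toy 1-D SOM: `coop` blends a winner-only ("individual neuron")
    update with a Gaussian-neighborhood ("collective neuron") update.
    coop -> 1.0 emphasizes collective behavior, coop -> 0.0 individual."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):
                collective = math.exp(-((i - bmu) ** 2) / 2.0)
                individual = 1.0 if i == bmu else 0.0
                h = coop * collective + (1.0 - coop) * individual
                w[i] += alpha * h * (x - w[i])
    return sorted(w)

data = [0.05, 0.1, 0.15, 0.85, 0.9, 0.95]   # two well-separated clusters
weights = train_som(data)
```

With an intermediate `coop`, the trained weights spread across both clusters, which is the kind of explicit class structure the paper evaluates; sweeping `coop` toward 0 or 1 shifts the map between purely individual and purely collective behavior.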

  15. Cooperation-Controlled Learning for Explicit Class Structure in Self-Organizing Maps

    Directory of Open Access Journals (Sweden)

    Ryotaro Kamimura

    2014-01-01

    Full Text Available We attempt to demonstrate the effectiveness of multiple points of view toward neural networks. By restricting ourselves to two points of view of a neuron, we propose a new type of information-theoretic method called “cooperation-controlled learning.” In this method, individual and collective neurons are distinguished from one another, and we suppose that the characteristics of individual and collective neurons are different. To implement individual and collective neurons, we prepare two networks, namely, cooperative and uncooperative networks. The roles of these networks and the roles of individual and collective neurons are controlled by the cooperation parameter. As the parameter is increased, the role of cooperative networks becomes more important in learning, and the characteristics of collective neurons become more dominant. On the other hand, when the parameter is small, individual neurons play a more important role. We applied the method to the automobile and housing data from the machine learning database and examined whether explicit class boundaries could be obtained. Experimental results showed that cooperation-controlled learning, in particular taking into account information on input units, could be used to produce clearer class structure than conventional self-organizing maps.

  16. Learning-related human brain activations reflecting individual finances.

    Science.gov (United States)

    Tobler, Philippe N; Fletcher, Paul C; Bullmore, Edward T; Schultz, Wolfram

    2007-04-05

    A basic tenet of microeconomics suggests that the subjective value of financial gains decreases with increasing assets of individuals ("marginal utility"). Using concepts from learning theory and microeconomics, we assessed the capacity of financial rewards to elicit behavioral and neuronal changes during reward-predictive learning in participants with different financial backgrounds. Behavioral learning speed during both acquisition and extinction correlated negatively with the assets of the participants, irrespective of education and age. Correspondingly, response changes in midbrain and striatum measured with functional magnetic resonance imaging were slower during both acquisition and extinction with increasing assets and income of the participants. By contrast, asymptotic magnitudes of behavioral and neuronal responses after learning were unrelated to personal finances. The inverse relationship of behavioral and neuronal learning speed with personal finances is compatible with the general concept of decreasing marginal utility with increasing wealth.

  17. Two fast and accurate heuristic RBF learning rules for data classification.

    Science.gov (United States)

    Rouhani, Modjtaba; Javan, Dawood S

    2016-03-01

    This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers, and the number of hidden neurons of the network so that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers some of the training data. The termination condition is either covering all training data or reaching the maximum number of neurons. In each step, the center and spread of the new neuron are selected to maximize its coverage. Maximizing the coverage of the neurons leads to a network with fewer neurons, and hence a lower VC dimension and better generalization. Using the power exponential distribution function as the activation function of the hidden neurons, and in light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden-layer outputs, which implies that there exist linear output-layer weights with zero training error. The proposed methods are applied to some well-known datasets, and the simulation results, compared with SVM and other leading RBF learning methods, show satisfactory and comparable performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
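The greedy growth strategy described in this abstract can be sketched as follows. The Python fragment below is a hypothetical reconstruction, not the authors' algorithm: it adds one RBF neuron at a time, centered on a training point, with its spread capped by the distance to the nearest opposite-class point (so each neuron covers only same-class data), and it stops when every point is covered or the neuron budget is exhausted. 1-D data is used for brevity.

```python
def greedy_rbf_centers(X, y, max_neurons=10):
    """Greedily add RBF neurons until all training points are covered
    or the neuron budget is exhausted (1-D toy version)."""
    uncovered = set(range(len(X)))
    neurons = []  # list of (center, spread, class_label)
    while uncovered and len(neurons) < max_neurons:
        best_neuron, best_covered = None, set()
        for i in uncovered:
            # Spread is capped by the nearest opposite-class point,
            # so each candidate neuron covers only same-class data.
            d_opp = min(abs(X[i] - X[j]) for j in range(len(X)) if y[j] != y[i])
            covered = {j for j in uncovered
                       if y[j] == y[i] and abs(X[i] - X[j]) < d_opp}
            if len(covered) > len(best_covered):
                best_neuron, best_covered = (X[i], d_opp, y[i]), covered
        neurons.append(best_neuron)
        uncovered -= best_covered
    return neurons

X = [0.0, 0.2, 0.4, 1.0, 1.2, 1.4]   # two 1-D clusters
y = [0, 0, 0, 1, 1, 1]
neurons = greedy_rbf_centers(X, y)   # two neurons suffice here
```

Because each added neuron is chosen to maximize coverage, the resulting network stays small, which mirrors the paper's argument for a lower VC dimension and better generalization.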

  18. Role of CB1 cannabinoid receptors on GABAergic neurons in brain aging.

    Science.gov (United States)

    Albayram, Onder; Alferink, Judith; Pitsch, Julika; Piyanova, Anastasia; Neitzert, Kim; Poppensieker, Karola; Mauer, Daniela; Michel, Kerstin; Legler, Anne; Becker, Albert; Monory, Krisztina; Lutz, Beat; Zimmer, Andreas; Bilkei-Gorzo, Andras

    2011-07-05

    Brain aging is associated with cognitive decline that is accompanied by progressive neuroinflammatory changes. The endocannabinoid system (ECS) is involved in the regulation of glial activity and influences the progression of age-related learning and memory deficits. Mice lacking the Cnr1 gene (Cnr1(-/-)), which encodes the cannabinoid receptor 1 (CB1), showed an accelerated age-dependent deficit in spatial learning accompanied by a loss of principal neurons in the hippocampus. The age-dependent decrease in neuronal numbers in Cnr1(-/-) mice was not related to decreased neurogenesis or to epileptic seizures. However, enhanced neuroinflammation characterized by an increased density of astrocytes and activated microglia as well as an enhanced expression of the inflammatory cytokine IL-6 during aging was present in the hippocampus of Cnr1(-/-) mice. The ongoing process of pyramidal cell degeneration and neuroinflammation can exacerbate each other and both contribute to the cognitive deficits. Deletion of CB1 receptors from the forebrain GABAergic, but not from the glutamatergic neurons, led to a similar neuronal loss and increased neuroinflammation in the hippocampus as observed in animals lacking CB1 receptors in all cells. Our results suggest that CB1 receptor activity on hippocampal GABAergic neurons protects against age-dependent cognitive decline by reducing pyramidal cell degeneration and neuroinflammation.

  19. Sleep Dependent Synaptic Down-Selection (II): Single Neuron Level Benefits for Matching, Selectivity, and Specificity

    Directory of Open Access Journals (Sweden)

    Atif eHashmi

    2013-10-01

    Full Text Available In a companion paper (Nere et al., this volume), we used computer simulations to show that a strategy of activity-dependent, on-line net synaptic potentiation during wake, followed by off-line synaptic depression during sleep, can provide a parsimonious account for several memory benefits of sleep at the systems level, including the consolidation of procedural and declarative memories, gist extraction, and integration of new with old memories. In this paper, we consider the theoretical benefits of this two-step process at the single-neuron level and employ the theoretical notion of Matching between brain and environment to measure how this process increases the ability of the neuron to capture regularities in the environment and model them internally. We show that down-selection during sleep is beneficial for increasing or restoring Matching after learning, after integrating new with old memories, and after forgetting irrelevant material. By contrast, alternative schemes, such as additional potentiation in wake, potentiation in sleep, or synaptic renormalization in wake, decrease Matching. We also argue that, by selecting appropriate loops through the brain that tie feedforward synapses with feedback ones in the same dendritic domain, different subsets of neurons can learn to specialize for different contingencies and form sequences of nested perception-action loops. By potentiating such loops when interacting with the environment in wake, and depressing them when disconnected from the environment in sleep, neurons can learn to match the long-term statistical structure of the environment while avoiding spurious modes of functioning and catastrophic interference. Finally, such a two-step process has the additional benefit of desaturating the neuron's ability to learn and of maintaining cellular homeostasis. Thus, sleep-dependent synaptic renormalization offers a parsimonious account for both cellular and systems-level effects of sleep on learning.
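The wake/sleep two-step scheme admits a toy numerical illustration (this is not the authors' simulation; the learning rate, weight budget, and activity pattern are invented): Hebbian-style potentiation during "wake" followed by multiplicative down-selection to a fixed synaptic budget during "sleep" selectively preserves frequently used synapses while unused ones decay.

```python
def wake(weights, activity, lr=0.1):
    """Hebbian-style potentiation: synapses driven by activity get stronger."""
    return [w + lr * a for w, a in zip(weights, activity)]

def sleep(weights, target_total=1.0):
    """Down-selection: scale all weights down to a fixed total budget,
    preserving relative strengths while renormalizing the neuron."""
    total = sum(weights)
    return [w * target_total / total for w in weights]

weights = [0.25, 0.25, 0.25, 0.25]
activity = [1.0, 0.5, 0.0, 0.0]    # synapses 0 and 1 were used in wake

for _ in range(20):                # repeated wake/sleep cycles
    weights = sleep(wake(weights, activity))
```

Over repeated cycles the weight vector converges toward the activity statistics while the total stays fixed, a crude single-neuron analogue of increasing Matching without runaway potentiation.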

  20. Is the Langevin phase equation an efficient model for oscillating neurons?

    Science.gov (United States)

    Ota, Keisuke; Tsunoda, Takamasa; Omori, Toshiaki; Watanabe, Shigeo; Miyakawa, Hiroyoshi; Okada, Masato; Aonishi, Toru

    2009-12-01

    The Langevin phase model is an important canonical model for capturing coherent oscillations of neural populations. However, little attention has been given to verifying its applicability. In this paper, we demonstrate that the Langevin phase equation is an efficient model for neural oscillators by using the machine learning method in two steps: (a) Learning of the Langevin phase model. We estimated the parameters of the Langevin phase equation, i.e., a phase response curve and the intensity of white noise from physiological data measured in the hippocampal CA1 pyramidal neurons. (b) Test of the estimated model. We verified whether a Fokker-Planck equation derived from the Langevin phase equation with the estimated parameters could capture the stochastic oscillatory behavior of the same neurons disturbed by periodic perturbations. The estimated model could predict the neural behavior, so we can say that the Langevin phase equation is an efficient model for oscillating neurons.
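The Langevin phase equation referred to here has the generic form dφ/dt = ω + Z(φ)ξ(t), with Z the phase response curve and ξ white noise. A minimal Euler-Maruyama integration, assuming a sinusoidal Z in place of the curve the authors estimated from CA1 pyramidal-neuron data, might look like:

```python
import math, random

def simulate_phase(omega=2 * math.pi, prc=math.sin, sigma=0.1,
                   dt=1e-3, steps=5000, seed=1):
    """Euler-Maruyama integration of dphi = [omega + Z(phi)*sigma*xi(t)] dt,
    where Z is the phase response curve (assumed sinusoidal here) and xi
    is Gaussian white noise; the noise term scales with sqrt(dt)."""
    rng = random.Random(seed)
    phi = 0.0
    sqrt_dt = math.sqrt(dt)
    for _ in range(steps):
        phi += omega * dt + prc(phi) * sigma * rng.gauss(0.0, 1.0) * sqrt_dt
    return phi % (2 * math.pi)

phase = simulate_phase()   # final phase, wrapped to [0, 2*pi)
```

Running many such trajectories gives the phase distribution whose evolution the corresponding Fokker-Planck equation describes, which is the comparison the authors use to test the estimated model.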

  1. Is the Langevin phase equation an efficient model for oscillating neurons?

    International Nuclear Information System (INIS)

    Ota, Keisuke; Tsunoda, Takamasa; Aonishi, Toru; Omori, Toshiaki; Okada, Masato; Watanabe, Shigeo; Miyakawa, Hiroyoshi

    2009-01-01

    The Langevin phase model is an important canonical model for capturing coherent oscillations of neural populations. However, little attention has been given to verifying its applicability. In this paper, we demonstrate that the Langevin phase equation is an efficient model for neural oscillators by using the machine learning method in two steps: (a) Learning of the Langevin phase model. We estimated the parameters of the Langevin phase equation, i.e., a phase response curve and the intensity of white noise from physiological data measured in the hippocampal CA1 pyramidal neurons. (b) Test of the estimated model. We verified whether a Fokker-Planck equation derived from the Langevin phase equation with the estimated parameters could capture the stochastic oscillatory behavior of the same neurons disturbed by periodic perturbations. The estimated model could predict the neural behavior, so we can say that the Langevin phase equation is an efficient model for oscillating neurons.

  2. Projection specificity in heterogeneous locus coeruleus cell populations: implications for learning and memory

    Science.gov (United States)

    Uematsu, Akira; Tan, Bao Zhen

    2015-01-01

    Noradrenergic neurons in the locus coeruleus (LC) play a critical role in many functions including learning and memory. This relatively small population of cells sends widespread projections throughout the brain including to a number of regions such as the amygdala which is involved in emotional associative learning and the medial prefrontal cortex which is important for facilitating flexibility when learning rules change. LC noradrenergic cells participate in both of these functions, but it is not clear how this small population of neurons modulates these partially distinct processes. Here we review anatomical, behavioral, and electrophysiological studies to assess how LC noradrenergic neurons regulate these different aspects of learning and memory. Previous work has demonstrated that subpopulations of LC noradrenergic cells innervate specific brain regions suggesting heterogeneity of function in LC neurons. Furthermore, noradrenaline in mPFC and amygdala has distinct effects on emotional learning and cognitive flexibility. Finally, neural recording data show that LC neurons respond during associative learning and when previously learned task contingencies change. Together, these studies suggest a working model in which distinct and potentially opposing subsets of LC neurons modulate particular learning functions through restricted efferent connectivity with amygdala or mPFC. This type of model may provide a general framework for understanding other neuromodulatory systems, which also exhibit cell type heterogeneity and projection specificity. PMID:26330494

  3. Voltage imaging to understand connections and functions of neuronal circuits

    Science.gov (United States)

    Antic, Srdjan D.; Empson, Ruth M.

    2016-01-01

    Understanding of the cellular mechanisms underlying brain functions such as cognition and emotions requires monitoring of membrane voltage at the cellular, circuit, and system levels. Seminal voltage-sensitive dye and calcium-sensitive dye imaging studies have demonstrated parallel detection of electrical activity across populations of interconnected neurons in a variety of preparations. A game-changing advance made in recent years has been the conceptualization and development of optogenetic tools, including genetically encoded indicators of voltage (GEVIs) or calcium (GECIs) and genetically encoded light-gated ion channels (actuators, e.g., channelrhodopsin2). Compared with low-molecular-weight calcium and voltage indicators (dyes), the optogenetic imaging approaches are 1) cell type specific, 2) less invasive, 3) able to relate activity and anatomy, and 4) facilitate long-term recordings of individual cells' activities over weeks, thereby allowing direct monitoring of the emergence of learned behaviors and underlying circuit mechanisms. We highlight the potential of novel approaches based on GEVIs and compare those to calcium imaging approaches. We also discuss how novel approaches based on GEVIs (and GECIs) coupled with genetically encoded actuators will promote progress in our knowledge of brain circuits and systems. PMID:27075539

  4. Mediodorsal Thalamic Neurons Mirror the Activity of Medial Prefrontal Neurons Responding to Movement and Reinforcement during a Dynamic DNMTP Task.

    Science.gov (United States)

    Miller, Rikki L A; Francoeur, Miranda J; Gibson, Brett M; Mair, Robert G

    2017-01-01

    The mediodorsal nucleus (MD) interacts with medial prefrontal cortex (mPFC) to support learning and adaptive decision-making. MD receives driver (layer 5) and modulatory (layer 6) projections from PFC and is the main source of driver thalamic projections to middle cortical layers of PFC. Little is known about the activity of MD neurons and their influence on PFC during decision-making. We recorded MD neurons in rats performing a dynamic delayed nonmatching to position (dDNMTP) task and compared results to a previous study of mPFC with the same task (Onos et al., 2016). Criterion event-related responses were observed for 22% (254/1179) of neurons recorded in MD, 237 (93%) of which exhibited activity consistent with mPFC response types. More MD than mPFC neurons exhibited responses related to movement (45% vs. 29%) and reinforcement (51% vs. 27%). MD had few responses related to lever presses, and none related to preparation or memory delay, which constituted 43% of event-related activity in mPFC. Comparison of averaged normalized population activity and population response times confirmed the broad similarity of common response types in MD and mPFC and revealed differences in the onset and offset of some response types. Our results show that MD represents information about actions and outcomes essential for decision-making during dDNMTP, consistent with evidence from lesion studies that MD supports reward-based learning and action-selection. These findings support the hypothesis that MD reinforces task-relevant neural activity in PFC that gives rise to adaptive behavior.

  5. Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle

    Science.gov (United States)

    Isomura, Takuya; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-01-01

    Blind source separation is the computation underlying the cocktail party effect: a partygoer can distinguish a particular talker’s voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception, and numerous mathematical models have been proposed; however, it remains unclear how neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, distinct classes of neurons in the culture learned to respond to distinct sources after repeated training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes’ principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free-energy principle. PMID:26690814
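The separation problem described in this record can be illustrated outside the biological setting. The sketch below is not the authors' neural-culture method and makes no use of the free-energy formulation; it is a minimal, generic independent-component demonstration: mix two non-Gaussian sources, whiten the mixture, then scan for the rotation whose outputs are maximally non-Gaussian (here, largest absolute excess kurtosis). All signals and the mixing matrix are invented for the example.

```python
import numpy as np

# Two independent, non-Gaussian sources (a sine and a square wave), linearly mixed.
t = np.linspace(0, 1, 2000, endpoint=False)
s1 = np.sin(2 * np.pi * 5 * t)
s2 = np.sign(np.sin(2 * np.pi * 3 * t))
S = np.vstack([s1, s2])                  # sources, shape (2, n)
A = np.array([[0.8, 0.3], [0.4, 0.7]])   # "unknown" mixing matrix
X = A @ S                                # observed mixtures

# Whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X

def excess_kurtosis(y):
    # A crude non-Gaussianity measure; zero for Gaussian signals.
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3.0

# After whitening, separation reduces to finding a rotation: keep the angle
# whose outputs are jointly most non-Gaussian.
best_angle, best_score = 0.0, -np.inf
for angle in np.linspace(0, np.pi / 2, 500):
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    Y = R @ Z
    score = abs(excess_kurtosis(Y[0])) + abs(excess_kurtosis(Y[1]))
    if score > best_score:
        best_angle, best_score = angle, score

R = np.array([[np.cos(best_angle), -np.sin(best_angle)],
              [np.sin(best_angle),  np.cos(best_angle)]])
recovered = R @ Z   # each row matches one source, up to sign and order
```

The recovered components match the sources up to the usual permutation and sign ambiguities of blind source separation.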

  6. Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle.

    Directory of Open Access Journals (Sweden)

    Takuya Isomura

    2015-12-01

    Full Text Available Blind source separation is the computation underlying the cocktail party effect: a partygoer can distinguish a particular talker's voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception, and numerous mathematical models have been proposed; however, it remains unclear how neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, distinct classes of neurons in the culture learned to respond to distinct sources after repeated training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes' principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free-energy principle.

  7. Automatically tracking neurons in a moving and deforming brain.

    Directory of Open Access Journals (Sweden)

    Jeffrey P Nguyen

    2017-05-01

    Full Text Available Advances in optical neuroimaging techniques now allow neural activity to be recorded with cellular resolution in awake and behaving animals. Brain motion in these recordings poses a unique challenge: the location of individual neurons must be tracked in 3D over time to accurately extract single-neuron activity traces. Recordings from small invertebrates like C. elegans are especially challenging because they undergo very large brain motion and deformation during animal movement. Here we present an automated computer vision pipeline to reliably track populations of neurons with single-neuron resolution in the brain of a freely moving C. elegans undergoing large motion and deformation. 3D volumetric fluorescent images of the animal's brain are straightened, aligned, and registered, and the locations of neurons in the images are found via segmentation. Each neuron is then assigned an identity using a new time-independent machine-learning approach we call Neuron Registration Vector Encoding. In this approach, non-rigid point-set registration is used to match each segmented neuron in each volume with a set of reference volumes taken from throughout the recording. The way each neuron matches with the references defines a feature vector, which is clustered to assign an identity to each neuron in each volume. Finally, thin-plate spline interpolation is used to correct errors in segmentation and check consistency of assigned identities. The Neuron Registration Vector Encoding approach proposed here is uniquely well suited for tracking neurons in brains undergoing large deformations. When applied to whole-brain calcium imaging recordings in freely moving C. elegans, this analysis pipeline located 156 neurons for the duration of an 8-minute recording and consistently found more neurons more quickly than manual or semi-automated approaches.
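The match-to-references idea behind the feature vectors can be sketched in a few lines. This toy stands in for the pipeline rather than reproducing it: rigid nearest-neighbour matching replaces the paper's non-rigid point-set registration, a simple majority vote replaces clustering, and all positions, counts, and noise levels are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Ground truth: 8 well-separated "neurons" (cube corners, 50 units apart).
true_pos = np.array([[x, y, z] for x in (0., 50.) for y in (0., 50.) for z in (0., 50.)])

# Reference volumes: the same neurons, each volume independently jittered.
references = [true_pos + rng.normal(0, 1.0, true_pos.shape) for _ in range(5)]

def feature_vectors(points, refs):
    """Match each segmented point to its nearest neuron in every reference
    volume; the tuple of matched indices is that point's feature vector."""
    feats = []
    for p in points:
        feats.append([int(np.argmin(np.linalg.norm(ref - p, axis=1))) for ref in refs])
    return np.array(feats)

# A new volume: the same neurons, deformed and listed in a different order.
order = rng.permutation(8)
volume = true_pos[order] + rng.normal(0, 1.0, true_pos.shape)

feats = feature_vectors(volume, references)
# "Clustering" degenerates here to a majority vote: each neuron's identity is
# the reference index it matches most often across the reference volumes.
identities = np.array([np.bincount(f).argmax() for f in feats])
```

Because the way a neuron matches the references is largely invariant to any one volume's deformation, the identities stay consistent even when the volume's point order is scrambled.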

  8. The age of enlightenment: evolving opportunities in brain research through optical manipulation of neuronal activity

    Directory of Open Access Journals (Sweden)

    Jason Jerome

    2011-12-01

    Full Text Available Optical manipulation of neuronal activity has rapidly developed into the most powerful and widely used approach to study mechanisms related to neuronal connectivity over a range of scales. Since the early use of single site uncaging to map network connectivity, rapid technological development of light modulation techniques has added important new options, such as fast scanning photostimulation, massively parallel control of light stimuli, holographic uncaging and 2-photon stimulation techniques. Exciting new developments in optogenetics complement neurotransmitter uncaging techniques by providing cell-type specificity and in vivo usability, providing optical access to the neural substrates of behavior. Here we review the rapid evolution of methods for the optical manipulation of neuronal activity, emphasizing crucial recent developments.

  9. The age of enlightenment: evolving opportunities in brain research through optical manipulation of neuronal activity.

    Science.gov (United States)

    Jerome, Jason; Heck, Detlef H

    2011-01-01

    Optical manipulation of neuronal activity has rapidly developed into the most powerful and widely used approach to study mechanisms related to neuronal connectivity over a range of scales. Since the early use of single site uncaging to map network connectivity, rapid technological development of light modulation techniques has added important new options, such as fast scanning photostimulation, massively parallel control of light stimuli, holographic uncaging, and two-photon stimulation techniques. Exciting new developments in optogenetics complement neurotransmitter uncaging techniques by providing cell-type specificity and in vivo usability, providing optical access to the neural substrates of behavior. Here we review the rapid evolution of methods for the optical manipulation of neuronal activity, emphasizing crucial recent developments.

  10. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to compute and keep in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which performs kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation model works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. A neurogenetic dissociation between punishment-, reward- and relief-learning in Drosophila

    Directory of Open Access Journals (Sweden)

    Ayse Yarali

    2010-12-01

    Full Text Available What is particularly worth remembering about a traumatic experience is what brought it about, and what made it cease. For example, fruit flies avoid an odour which during training had preceded electric shock punishment; on the other hand, if the odour had followed the shock during training, it is later approached as a signal for the relieving end of shock. We provide a neurogenetic analysis of such relief learning. Blocking output from a particular set of dopaminergic neurons defined by the TH-Gal4 driver, using UAS-shibirets1, partially impaired punishment learning but left relief learning intact. Thus, with respect to these particular neurons, relief learning differs from punishment learning. Targeting another set of dopaminergic/serotonergic neurons defined by the DDC-Gal4 driver, on the other hand, affected neither punishment nor relief learning. As for the octopaminergic system, the tbhM18 mutation, compromising octopamine biosynthesis, partially impaired sugar-reward learning, but not relief learning. Thus, with respect to this particular mutation, relief learning and reward learning are dissociated. Finally, blocking output from the set of octopaminergic/tyraminergic neurons defined by the TDC2-Gal4 driver affected neither reward nor relief learning. We conclude that, with respect to the genetic tools used, relief learning is neurogenetically dissociated from both punishment and reward learning.

  12. Combining microfluidics, optogenetics and calcium imaging to study neuronal communication in vitro.

    Science.gov (United States)

    Renault, Renaud; Sukenik, Nirit; Descroix, Stéphanie; Malaquin, Laurent; Viovy, Jean-Louis; Peyrin, Jean-Michel; Bottani, Samuel; Monceau, Pascal; Moses, Elisha; Vignes, Maéva

    2015-01-01

    In this paper we report the combination of microfluidics, optogenetics and calcium imaging as a cheap and convenient platform to study synaptic communication between neuronal populations in vitro. We first show that the Calcium Orange indicator is compatible in vitro with a commonly used Channelrhodopsin-2 (ChR2) variant, as standard calcium imaging conditions did not significantly alter the activity of transduced cultures of rodent primary neurons. A fast, robust and scalable process for micro-chip fabrication was developed in parallel to build micro-compartmented cultures. Coupling optical fibers to each micro-compartment allowed for the independent control of ChR2 activation in the different populations without crosstalk. By analyzing the post-stimulus activity across the different populations, we finally show how this platform can be used to evaluate quantitatively the effective connectivity between connected neuronal populations.

  13. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    Science.gov (United States)

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increasing complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide network models into subnets across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of overall simulation time. NEURON uses the Message Passing Interface (MPI) for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors achieves concurrency and better performance, but it also burdens MPI_Allgather, increasing communication time between processors. This necessitates an improved communication methodology to decrease spike-exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving two-sided communication to one-sided communication; the use of a recursive doubling mechanism achieves efficient communication between the processors in a precise number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulation of large neuronal network models.
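The recursive doubling pattern can be simulated without MPI. The sketch below is illustrative only, with plain Python dictionaries standing in for the one-sided RMA transfers: with P processes, rank r exchanges everything it has gathered so far with partner r XOR 2^k in round k, so every rank holds all P chunks after log2(P) rounds instead of P-1 ring steps.

```python
import math

def allgather_recursive_doubling(local_data):
    """Simulate an allgather via recursive doubling. In round k, rank r
    exchanges its accumulated data with partner r XOR 2**k, so the data held
    per rank doubles each round and log2(P) rounds suffice."""
    P = len(local_data)
    assert P > 0 and P & (P - 1) == 0, "P must be a power of two"
    # gathered[r] maps source rank -> that rank's spike data, as known by r.
    gathered = [{r: local_data[r]} for r in range(P)]
    for k in range(int(math.log2(P))):
        snapshot = [dict(g) for g in gathered]       # pairwise exchange is simultaneous
        for r in range(P):
            partner = r ^ (1 << k)
            gathered[r].update(snapshot[partner])    # one-sided "get" from partner
    return gathered

# Eight simulated ranks, each starting with only its own spike buffer.
result = allgather_recursive_doubling([[f"spike{r}"] for r in range(8)])
```

With 8 ranks the exchange completes in 3 rounds, which is the "precise steps" property the abstract refers to.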

  14. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    Directory of Open Access Journals (Sweden)

    Danish Shehzad

    2016-01-01

    Full Text Available The increasing complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide network models into subnets across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of overall simulation time. NEURON uses the Message Passing Interface (MPI) for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors achieves concurrency and better performance, but it also burdens MPI_Allgather, increasing communication time between processors. This necessitates an improved communication methodology to decrease spike-exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving two-sided communication to one-sided communication; the use of a recursive doubling mechanism achieves efficient communication between the processors in a precise number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulation of large neuronal network models.

  15. Dynamic neuronal ensembles: Issues in representing structure change in object-oriented, biologically-based brain models

    Energy Technology Data Exchange (ETDEWEB)

    Vahie, S.; Zeigler, B.P.; Cho, H. [Univ. of Arizona, Tucson, AZ (United States)

    1996-12-31

    This paper describes the structure of dynamic neuronal ensembles (DNEs). DNEs represent a new paradigm for learning, based on biological neural networks that use variable structures. We present a computational neural element that demonstrates biological neuron functionality such as neurotransmitter feedback, an absolute refractory period, and multiple output potentials. More specifically, we develop a network of neural elements that have the ability to dynamically strengthen, weaken, add and remove interconnections. We demonstrate that the DNE is capable of performing dynamic modifications to neuron connections and exhibiting biological neuron functionality. In addition to its applications for learning, DNEs provide an excellent environment for testing and analysis of biological neural systems. An example of habituation and hyper-sensitization in biological systems, using a neural circuit from a snail, is presented and discussed. This paper provides an insight into the DNE paradigm using models developed and simulated in DEVS.

  16. Mushroom body efferent neurons responsible for aversive olfactory memory retrieval in Drosophila.

    Science.gov (United States)

    Séjourné, Julien; Plaçais, Pierre-Yves; Aso, Yoshinori; Siwanowicz, Igor; Trannoy, Séverine; Thoma, Vladimiros; Tedjakumala, Stevanus R; Rubin, Gerald M; Tchénio, Paul; Ito, Kei; Isabel, Guillaume; Tanimoto, Hiromu; Preat, Thomas

    2011-06-19

    Aversive olfactory memory is formed in the mushroom bodies in Drosophila melanogaster. Memory retrieval requires mushroom body output, but the manner in which a memory trace in the mushroom body drives conditioned avoidance of a learned odor remains unknown. To identify neurons that are involved in olfactory memory retrieval, we performed an anatomical and functional screen of defined sets of mushroom body output neurons. We found that MB-V2 neurons were essential for retrieval of both short- and long-lasting memory, but not for memory formation or memory consolidation. MB-V2 neurons are cholinergic efferent neurons that project from the mushroom body vertical lobes to the middle superior medial protocerebrum and the lateral horn. Notably, the odor response of MB-V2 neurons was modified after conditioning. As the lateral horn has been implicated in innate responses to repellent odorants, we propose that MB-V2 neurons recruit the olfactory pathway involved in innate odor avoidance during memory retrieval.

  17. Effects of Chinese herbal medicine Yinsiwei compound on spatial learning and memory ability and the ultrastructure of hippocampal neurons in a rat model of sporadic Alzheimer disease.

    Science.gov (United States)

    Diwu, Yong-chang; Tian, Jin-zhou; Shi, Jing

    2011-02-01

    To study the effects of Chinese herbal medicine Yinsiwei compound (YSW) on spatial learning and memory ability in rats with sporadic Alzheimer disease (SAD) and the ultrastructural basis of the hippocampal neurons. A rat model of SAD was established by intracerebroventricular injection of streptozotocin. The rats were divided into six groups: sham-operation group, model group, donepezil control group, and YSW low, medium and high dose groups. Drug interventions were started on the 21st day after modeling, and each treatment group was given the corresponding drugs by gavage for two months. Meanwhile, the model group and the sham-operation group were given the same volume of distilled water by gavage once a day for two months. The Morris water maze was adopted to test the spatial learning and memory ability of the rats. The place navigation test and the spatial probe test were conducted. The escape latency, total swimming distance and swimming time in the target quadrant of the rats were recorded. Also, the hippocampus tissues of the rats were taken out and the ultrastructure of the hippocampal neurons was observed by electron microscopy. In the place navigation test, compared with the model group, the mean escape latency and the total swimming distance of the donepezil group and the YSW low, medium and high dose groups were significantly shortened; observation under the electron microscope also confirmed the efficacy of the drug treatment. Chinese herbal medicine YSW compound can improve spatial learning and memory impairment in rats with SAD. The ultrastructural basis may be that it protects the microtubule structures of hippocampal neurons and prevents nerve axons from being damaged.

  18. Signals and Circuits in the Purkinje Neuron

    Directory of Open Access Journals (Sweden)

    Ze'ev R Abrams

    2011-09-01

    Full Text Available Purkinje neurons in the cerebellum have over 100,000 inputs organized in an orthogonal geometry, and a single output channel. As the sole output of the cerebellar cortex layer, their complex firing pattern has been associated with motor control and learning. As such they have been extensively modeled and measured using tools ranging from electrophysiology and neuroanatomy, to dynamic systems and artificial intelligence methods. However, there is an alternative approach to analyze and describe the neuronal output of these cells using concepts from Electrical Engineering, particularly signal processing and digital/analog circuits. By viewing the Purkinje neuron as an unknown circuit to be reverse-engineered, we can use the tools that provide the foundations of today’s integrated circuits and communication systems to analyze the Purkinje system at the circuit level. We use Fourier transforms to analyze and isolate the inherent frequency modes in the Purkinje neuron and define 3 unique frequency ranges associated with the cells’ output. Comparing the Purkinje neuron to a signal generator that can be externally modulated adds an entire level of complexity to the functional role of these neurons both in terms of data analysis and information processing, relying on Fourier analysis methods in place of statistical ones. We also re-describe some of the recent literature in the field, using the nomenclature of signal processing. Furthermore, by comparing the experimental data of the past decade with basic electronic circuitry, we can resolve the outstanding controversy in the field, by recognizing that the Purkinje neuron can act as a multivibrator circuit.
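The Fourier-based view described in this record is easy to reproduce on synthetic data. The sketch below uses no Purkinje recordings; the three frequencies are arbitrary stand-ins for the three ranges the article defines, chosen to land exactly on FFT bins of a 10 s trace so the peak-picking is unambiguous.

```python
import numpy as np

fs = 1000.0                     # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)    # 10 s trace -> 0.1 Hz frequency resolution
modes = [0.5, 8.0, 60.0]        # illustrative frequencies, not measured values
rng = np.random.default_rng(1)
signal = sum(np.sin(2 * np.pi * f * t) for f in modes) + 0.1 * rng.normal(size=t.size)

# Fourier analysis: locate the dominant spectral peaks in the output signal.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
top = freqs[np.argsort(spectrum)[-3:]]       # the three strongest bins
detected = sorted(round(f, 1) for f in top)
```

Treating the cell as a signal generator then amounts to asking how the power in each of these bands is modulated over time, rather than computing spike-train statistics.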

  19. Two bridges between biology and learning

    Directory of Open Access Journals (Sweden)

    Jorun Nyléhn

    2016-04-01

    Full Text Available Human biology, in terms of the organization of our brains and our evolutionary past, constrains and enables learning. Two examples where neurobiology and evolution influence learning are given and discussed in relation to education: mirror neurons and adaptive memory. Mirror neurons serve imitation and the understanding of other people's intentions. Adaptive memory implies that our memory is an adaptation shaped by our evolutionary past, enabling us to solve problems in the present and in the future. Additionally, the aim is to contribute to bridges between the natural and social sciences in an attempt to achieve an improved understanding of learning. The relevance of perspectives on learning founded in biology is discussed, and the article argues for including biological perspectives in discussions of education and learning processes.

  20. Learning-dependent neurogenesis in the olfactory bulb determines long-term olfactory memory.

    Science.gov (United States)

    Sultan, S; Mandairon, N; Kermen, F; Garcia, S; Sacquet, J; Didier, A

    2010-07-01

    Inhibitory interneurons of the olfactory bulb are subjected to permanent adult neurogenesis. Their number is modulated by learning, suggesting that they could play a role in plastic changes of the bulbar network associated with olfactory memory. Adult male C57BL/6 mice were trained in an associative olfactory task, and we analyzed long-term retention of the task 5, 30, and 90 d post-training. In parallel, we assessed the fate of these newborn cells, mapped their distribution in the olfactory bulb and measured their functional implication using the immediate early gene Zif268. In a second set of experiments, we pharmacologically modulated glutamatergic transmission and, using the same behavioral task, assessed the consequences on memory retention and neurogenesis. Finally, by local infusion of an antimitotic drug, we selectively blocked neurogenesis during acquisition of the task and looked at the effects on memory retention. First we demonstrated that retrieval of an associative olfactory task recruits the newborn neurons in odor-specific areas of the olfactory bulb selected to survive during acquisition of the task and that it does this in a manner that depends on the strength of learning. We then demonstrated that acquisition is not dependent on neurogenesis, whereas long-term retention of the task is abolished by blocking neurogenesis. Adult-born neurons are thus involved in changes in the neural representation of an odor; this underlies long-term olfactory memory as the strength of learning is linked to the duration of this memory. Neurogenesis thus plays a crucial role in long-term olfactory memory.

  1. NeuronBank: a tool for cataloging neuronal circuitry

    Directory of Open Access Journals (Sweden)

    Paul S Katz

    2010-04-01

    Full Text Available The basic unit of any nervous system is the neuron. Therefore, understanding the operation of nervous systems ultimately requires an inventory of their constituent neurons and synaptic connectivity, which form neural circuits. The presence of uniquely identifiable neurons or classes of neurons in many invertebrates has facilitated the construction of cellular-level connectivity diagrams that can be generalized across individuals within a species. Homologous neurons can also be recognized across species. Here we describe NeuronBank.org, a web-based tool that we are developing for cataloging, searching, and analyzing neuronal circuitry within and across species. Information from a single species is represented in an individual branch of NeuronBank. Users can search within a branch or perform queries across branches to look for similarities in neuronal circuits across species. The branches allow for an extensible ontology so that additional characteristics can be added as knowledge grows. Each entry in NeuronBank generates a unique accession ID, allowing it to be easily cited. There is also an automatic link to a Wiki page allowing an encyclopedic explanation of the entry. All of the 44 previously published neurons plus one previously unpublished neuron from the mollusc, Tritonia diomedea, have been entered into a branch of NeuronBank as have 4 previously published neurons from the mollusc, Melibe leonina. The ability to organize information about neuronal circuits will make this information more accessible, ultimately aiding research on these important models.

  2. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi Layer Perceptrons via the Back Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different numbers of processors, for two network examples. Sample source code is given.
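For context, the back-propagation algorithm the library parallelizes fits in a few lines of serial code. The sketch below is generic and not taken from the report: a minimal multilayer perceptron trained on XOR (a standard toy task, not one of the report's two network examples), with the layer sizes, learning rate, and epoch count invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training set for a 2-4-1 multilayer perceptron.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: mean-squared-error gradients, chain rule through sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

loss = float(np.mean((out - y) ** 2))
```

On a SIMD machine like Quadrics, the matrix products in the forward and backward passes are what gets distributed across processors.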

  3. Proinflammatory Factors Mediate Paclitaxel-Induced Impairment of Learning and Memory

    Directory of Open Access Journals (Sweden)

    Zhao Li

    2018-01-01

    Full Text Available The chemotherapeutic agent paclitaxel is widely used for cancer treatment. Paclitaxel treatment impairs learning and memory function, a side effect that reduces the quality of life of cancer survivors. However, the neural mechanisms underlying paclitaxel-induced impairment of learning and memory remain unclear. Paclitaxel treatment leads to proinflammatory factor release and neuronal apoptosis. Thus, we hypothesized that paclitaxel impairs learning and memory function through proinflammatory factor-induced neuronal apoptosis. Neuronal apoptosis was assessed by TUNEL assay in the hippocampus. Protein expression levels of tumor necrosis factor-α (TNF-α) and interleukin-1β (IL-1β) in the hippocampus tissue were analyzed by Western blot assay. Spatial learning and memory function were determined by using the Morris water maze (MWM) test. Paclitaxel treatment significantly increased the escape latencies and decreased the number of crossings in the MWM test. Furthermore, paclitaxel significantly increased the number of TUNEL-positive neurons in the hippocampus. Also, paclitaxel treatment increased the expression levels of TNF-α and IL-1β in the hippocampus tissue. In addition, the TNF-α synthesis inhibitor thalidomide significantly attenuated the number of paclitaxel-induced TUNEL-positive neurons in the hippocampus and restored the impaired spatial learning and memory function in paclitaxel-treated rats. These data suggest that TNF-α is critically involved in the paclitaxel-induced impairment of learning and memory function.

  4. Central serotonergic neurons activate and recruit thermogenic brown and beige fat and regulate glucose and lipid homeostasis

    DEFF Research Database (Denmark)

    McGlashon, Jacob M; Gorecki, Michelle C; Kozlowski, Amanda E

    2015-01-01

    Thermogenic brown and beige adipocytes convert chemical energy to heat by metabolizing glucose and lipids. Serotonin (5-HT) neurons in the CNS are essential for thermoregulation and accordingly may control metabolic activity of thermogenic fat. To test this, we generated mice in which the human...... adipose tissue (WAT). In parallel, blood glucose increased 3.5-fold, free fatty acids 13.4-fold, and triglycerides 6.5-fold. Similar BAT and beige fat defects occurred in Lmx1b(f/f)ePet1(Cre) mice in which 5-HT neurons fail to develop in utero. We conclude 5-HT neurons play a major role in regulating...

  5. Single neurons in prefrontal cortex encode abstract rules.

    Science.gov (United States)

    Wallis, J D; Anderson, K C; Miller, E K

    2001-06-21

    The ability to abstract principles or rules from direct experience allows behaviour to extend beyond specific circumstances to general situations. For example, we learn the 'rules' for restaurant dining from specific experiences and can then apply them in new restaurants. The use of such rules is thought to depend on the prefrontal cortex (PFC) because its damage often results in difficulty in following rules. Here we explore its neural basis by recording from single neurons in the PFC of monkeys trained to use two abstract rules. They were required to indicate whether two successively presented pictures were the same or different depending on which rule was currently in effect. The monkeys performed this task with new pictures, thus showing that they had learned two general principles that could be applied to stimuli that they had not yet experienced. The most prevalent neuronal activity observed in the PFC reflected the coding of these abstract rules.

  6. Sulforaphane Prevents Neuronal Apoptosis and Memory Impairment in Diabetic Rats

    Directory of Open Access Journals (Sweden)

    Gengyin Wang

    2016-08-01

    Full Text Available Background/Aims: To explore the effects of sulforaphane (SFN) on neuronal apoptosis in the hippocampus and memory impairment in diabetic rats. Methods: Thirty male rats were randomly divided into normal control, diabetic model and SFN treatment groups (N = 10 in each group). Streptozotocin (STZ) was applied to establish the diabetic model. The Morris water maze task was applied to test learning and memory. TUNEL assay was used to detect apoptosis in the hippocampus. The expressions of Caspase-3 and myeloid cell leukemia 1 (MCL-1) were detected by Western blotting. Neurotrophic factor levels and the AKT/GSK3β pathway were also examined. Results: Compared with normal controls, learning and memory were apparently impaired, with up-regulation of Caspase-3 and down-regulation of MCL-1, in diabetic rats. Apoptotic neurons were also found in the CA1 region after diabetic modeling. By contrast, SFN treatment prevented the memory impairment and decreased the apoptosis of hippocampal neurons. SFN also attenuated the abnormal expression of Caspase-3 and MCL-1 in the diabetic model. Mechanistically, SFN treatment reversed the diabetic modeling-induced decrease of p-Akt, p-GSK3β, NGF and BDNF expressions. Conclusion: SFN could prevent the memory impairment and apoptosis of hippocampal neurons in diabetic rats. The possible mechanism was related to the regulation of neurotrophic factors and the Akt/GSK3β pathway.

  7. Organization of left–right coordination of neuronal activity in the mammalian spinal cord: Insights from computational modelling

    Science.gov (United States)

    Shevtsova, Natalia A; Talpalar, Adolfo E; Markin, Sergey N; Harris-Warrick, Ronald M; Kiehn, Ole; Rybak, Ilya A

    2015-01-01

    Different locomotor gaits in mammals, such as walking or galloping, are produced by coordinated activity in neuronal circuits in the spinal cord. Coordination of neuronal activity between left and right sides of the cord is provided by commissural interneurons (CINs), whose axons cross the midline. In this study, we construct and analyse two computational models of spinal locomotor circuits consisting of left and right rhythm generators interacting bilaterally via several neuronal pathways mediated by different CINs. The CIN populations incorporated in the models include the genetically identified inhibitory (V0D) and excitatory (V0V) subtypes of V0 CINs and excitatory V3 CINs. The model also includes the ipsilaterally projecting excitatory V2a interneurons mediating excitatory drive to the V0V CINs. The proposed network architectures and CIN connectivity allow the models to closely reproduce and suggest mechanistic explanations for several experimental observations. These phenomena include: different speed-dependent contributions of V0D and V0V CINs and V2a interneurons to left–right alternation of neural activity, switching gaits between the left–right alternating walking-like activity and the left–right synchronous hopping-like pattern in mutants lacking specific neuron classes, and speed-dependent asymmetric changes of flexor and extensor phase durations. The models provide insights into the architecture of the spinal network and the organization of parallel inhibitory and excitatory CIN pathways, and suggest explanations for how these pathways maintain alternating and synchronous gaits at different locomotor speeds. The models propose testable predictions about the neural organization and operation of mammalian locomotor circuits. Key points: Coordination of neuronal activity between left and right sides of the mammalian spinal cord is provided by several sets of commissural interneurons (CINs) whose axons cross the midline. Genetically identified inhibitory V
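The left–right alternation these models address can be caricatured with two mutually coupled phase oscillators; the sketch below is not the paper's model, and all constants are illustrative assumptions, but it shows anti-phase locking emerging from symmetric "inhibitory" coupling.

```python
import numpy as np

# Two rhythm generators with symmetric anti-phase ("inhibitory") coupling:
# a drastically reduced phase-oscillator caricature of left-right alternation.
dt, k, w0 = 0.01, 1.0, 2 * np.pi       # time step (s), coupling, 1 Hz drive
th = np.array([0.3, 0.0])              # initial phases (rad), nearly in phase
for _ in range(5000):                  # integrate 50 s with forward Euler
    d0 = w0 + k * np.sin(th[1] - th[0] - np.pi)
    d1 = w0 + k * np.sin(th[0] - th[1] - np.pi)
    th = th + dt * np.array([d0, d1])
phase_diff = (th[0] - th[1]) % (2 * np.pi)
print(phase_diff)  # settles near pi: strict left-right alternation
```

The phase difference obeys dφ/dt = 2k sin φ, so the in-phase state (φ = 0) is unstable and the anti-phase state (φ = π) is the attractor.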

  8. Military Curricula for Vocational & Technical Education. Basic Electricity and Electronics Individualized Learning System. CANTRAC A-100-0010. Module Fourteen: Parallel AC Resistive-Reactive Circuits. Study Booklet.

    Science.gov (United States)

    Chief of Naval Education and Training Support, Pensacola, FL.

    This individualized learning module on parallel alternating current resistive-reactive circuits is one in a series of modules for a course in basic electricity and electronics. The course is one of a number of military-developed curriculum packages selected for adaptation to vocational instructional and curriculum development in a civilian…

  9. Neurite, a finite difference large scale parallel program for the simulation of electrical signal propagation in neurites under mechanical loading.

    Directory of Open Access Journals (Sweden)

    Julián A García-Grajales

    Full Text Available With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is generally characterized only by purely mechanistic criteria: functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of simulated cells grows. The solvers implemented in Neurite (explicit and implicit) were therefore parallelized using graphics processing units in order to reduce the simulation costs of large-scale scenarios. Cable theory and Hodgkin-Huxley models were implemented to account for the electrophysiologically passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite's mechanical behavior within its surrounding medium was adopted as the link between electrophysiology and mechanics. This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon
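As a rough illustration of the kind of finite-difference time-stepping such a program parallelizes, the sketch below integrates a *passive* cable explicitly; the parameters, units and clamped boundary are illustrative assumptions, not Neurite's actual setup (which also includes active Hodgkin-Huxley regions and the mechanical coupling).

```python
import numpy as np

# Explicit finite-difference step for a passive cable:
#   C_m dV/dt = (a / 2R_i) d2V/dx2 - V/R_m
# Parameter values are arbitrary but chosen so the scheme is stable.
def passive_cable(n=200, dt=1e-6, dx=1e-4, t_end=2e-3,
                  cm=1e-2, rm=1.0, ri=1.0, radius=2e-6):
    v = np.zeros(n)                      # voltage deviation from rest (mV)
    v[0] = 100.0                         # clamp the proximal end
    coupling = radius / (2.0 * ri * dx * dx)
    for _ in range(int(t_end / dt)):
        lap = np.zeros(n)                # discrete second spatial derivative
        lap[1:-1] = v[:-2] - 2.0 * v[1:-1] + v[2:]
        v += (dt / cm) * (coupling * lap - v / rm)
        v[0] = 100.0                     # re-impose the clamp each step
    return v

v = passive_cable()
print(v[0], v[-1])  # the signal decays with distance from the clamp
```

Each grid point only reads its two neighbours, which is what makes this update embarrassingly parallel on a GPU.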

  10. Male pheromone protein components activate female vomeronasal neurons in the salamander Plethodon shermani

    Directory of Open Access Journals (Sweden)

    Feldhoff Pamela W

    2006-03-01

    Full Text Available Abstract Background The mental gland pheromone of male Plethodon salamanders contains two main protein components: a 22 kDa protein named Plethodon Receptivity Factor (PRF) and a 7 kDa protein named Plethodon Modulating Factor (PMF). Each protein component individually has opposing effects on female courtship behavior, with PRF shortening and PMF lengthening courtship. In this study, we test the hypothesis that PRF or PMF individually activate vomeronasal neurons. The agmatine-uptake technique was used to visualize chemosensory neurons that were activated by each protein component individually. Results Vomeronasal neurons exposed to agmatine in saline did not demonstrate significant labeling. However, a population of vomeronasal neurons was labeled following exposure to either PRF or PMF. When expressed as a percent of control-level labeled cells, PRF labeled more neurons than did PMF. These percentages for PRF and PMF, added together, parallel the percentage of labeled vomeronasal neurons when females are exposed to the whole pheromone. Conclusion This study suggests that two specific populations of female vomeronasal neurons are responsible for responding to the two components of the male pheromone mixture. These two neural populations, therefore, could express different receptors which, in turn, transmit different information to the brain, thus accounting for the different female behavior elicited by each pheromone component.

  11. A developmental approach of imitation to study the emergence of mirror neurons in a sensory-motor controller

    Directory of Open Access Journals (Sweden)

    Gaussier Philippe

    2011-12-01

    Full Text Available Mirror neurons have often been considered the explanation of how primates can imitate. In this paper, we show that a simple neural network architecture that learns visuo-motor associations can be enough to let low-level imitation emerge without a priori mirror neurons. Adding sequence-learning mechanisms and action inhibition makes it possible to perform deferred imitation of gestures demonstrated visually or by body manipulation. With the building of a cognitive map giving the capability to learn plans, we can study in our model the emergence of both the low-level and high-level resonances highlighted by Rizzolatti et al.

  12. Pretreatment with apoaequorin protects hippocampal CA1 neurons from oxygen-glucose deprivation.

    Science.gov (United States)

    Detert, Julia A; Adams, Erin L; Lescher, Jacob D; Lyons, Jeri-Anne; Moyer, James R

    2013-01-01

    Ischemic stroke affects ∼795,000 people each year in the U.S., which results in an estimated annual cost of $73.7 billion. Calcium is pivotal in a variety of neuronal signaling cascades; during ischemia, however, excess calcium influx can trigger excitotoxic cell death. Calcium binding proteins help neurons regulate and buffer intracellular calcium levels during ischemia. Aequorin is a calcium binding protein isolated from the jellyfish Aequorea victoria that has been used for years as a calcium indicator, but little is known about its neuroprotective properties. The present study used an in vitro rat brain slice preparation to test the hypothesis that an intra-hippocampal infusion of apoaequorin (the calcium binding component of aequorin) protects neurons from ischemic cell death. Bilaterally cannulated rats received an apoaequorin infusion in one hemisphere and vehicle control in the other. Hippocampal slices were then prepared and subjected to 5 minutes of oxygen-glucose deprivation (OGD), and cell death was assayed by trypan blue exclusion. Apoaequorin dose-dependently protected neurons from OGD: doses of 1% and 4% (but not 0.4%) significantly decreased the number of trypan blue-labeled neurons. This effect was also time-dependent, lasting up to 48 hours, and was paralleled by changes in cytokine and chemokine expression, indicating that apoaequorin may protect neurons via a neuroimmunomodulatory mechanism. These data support the hypothesis that pretreatment with apoaequorin protects neurons against ischemic cell death and may be an effective neurotherapeutic.

  13. Sensory Neuron Fates Are Distinguished by a Transcriptional Switch that Regulates Dendrite Branch Stabilization

    Science.gov (United States)

    Smith, Cody J.; O’Brien, Timothy; Chatzigeorgiou, Marios; Spencer, W. Clay; Feingold-Link, Elana; Husson, Steven J.; Hori, Sayaka; Mitani, Shohei; Gottschalk, Alexander; Schafer, William R.; Miller, David M.

    2013-01-01

    SUMMARY Sensory neurons adopt distinct morphologies and functional modalities to mediate responses to specific stimuli. Transcription factors and their downstream effectors orchestrate this outcome but are incompletely defined. Here, we show that different classes of mechanosensory neurons in C. elegans are distinguished by the combined action of the transcription factors MEC-3, AHR-1, and ZAG-1. Low levels of MEC-3 specify the elaborate branching pattern of PVD nociceptors, whereas high MEC-3 is correlated with the simple morphology of AVM and PVM touch neurons. AHR-1 specifies AVM touch neuron fate by elevating MEC-3 while simultaneously blocking expression of nociceptive genes such as the MEC-3 target, the claudin-like membrane protein HPO-30, that promotes the complex dendritic branching pattern of PVD. ZAG-1 exercises a parallel role to prevent PVM from adopting the PVD fate. The conserved dendritic branching function of the Drosophila AHR-1 homolog, Spineless, argues for similar pathways in mammals. PMID:23889932

  14. Neuronal activity rapidly induces transcription of the CREB-regulated microRNA-132, in vivo

    DEFF Research Database (Denmark)

    Nudelman, Aaron Samuel; DiRocco, Derek P; Lambert, Talley J

    2010-01-01

    Activity-dependent changes in gene expression are believed to underlie the molecular representation of memory. In this study, we report that in vivo activation of neurons rapidly induces the CREB-regulated microRNA miR-132. To determine if production of miR-132 is regulated by neuronal activity its…, olfactory bulb, and striatum by contextual fear conditioning, odor-exposure, and cocaine-injection, respectively, also increased pri-miR-132. Induction kinetics of pri-miR-132 were monitored and found to parallel those of immediate early genes, peaking at 45 min and returning to basal levels within 2 h…

  15. Impaired rRNA synthesis triggers homeostatic responses in hippocampal neurons

    Directory of Open Access Journals (Sweden)

    Anna eKiryk

    2013-11-01

    Full Text Available Decreased rRNA synthesis and nucleolar disruption, known as nucleolar stress, are primary signs of cellular stress associated with aging and neurodegenerative disorders. Silencing of rDNA occurs during early stages of Alzheimer's disease (AD) and may play a role in dementia. Moreover, aberrant regulation of the protein synthesis machinery is present in the brain of suicide victims and implicates the epigenetic modulation of rRNA. Recently, we developed unique mouse models characterized by nucleolar stress in neurons. We inhibited RNA polymerase I by genetic ablation of the basal transcription factor TIF-IA in adult hippocampal neurons. Nucleolar stress resulted in progressive neurodegeneration, although with a differential vulnerability within the CA1, CA3 and dentate gyrus. Here, we investigate the consequences of nucleolar stress on learning and memory. The mutant mice show normal performance in the Morris water maze and in other behavioral tests, suggesting the activation of adaptive mechanisms. In fact, we observe significantly enhanced learning and re-learning corresponding to the initial inhibition of rRNA transcription. This phenomenon is accompanied by aberrant synaptic plasticity. By analysis of nucleolar function and integrity, we find that the synthesis of rRNA is later restored. Gene expression profiling shows that thirty-six transcripts are differentially expressed in comparison to the control group in the absence of neurodegeneration. Additionally, we observe a significant enrichment of putative serum response factor (SRF) binding sites in the promoters of the genes with changed expression, indicating potential adaptive mechanisms mediated by the mitogen-activated protein kinase pathway. In the dentate gyrus a neurogenic response might compensate for the initial molecular deficits. These results underscore the role of nucleolar stress in neuronal homeostasis and open new ground for therapeutic strategies aiming at preserving

  16. Statistical characteristics of climbing fiber spikes necessary for efficient cerebellar learning.

    Science.gov (United States)

    Kuroda, S; Yamamoto, K; Miyamoto, H; Doya, K; Kawato, M

    2001-03-01

    Mean firing rates (MFRs), with analogue values, have thus far been used as the information carriers of neurons in most brain theories of learning. However, neurons transmit signals by spikes, which are discrete events. The climbing fibers (CFs), which are known to be essential for cerebellar motor learning, fire at ultra-low rates (around 1 Hz), and it is not yet understood theoretically how high-frequency information can be conveyed and how learning of smooth and fast movements can be achieved. Here we address whether cerebellar learning can be achieved by CF spikes instead of the conventional MFR in an eye movement task, such as the ocular following response (OFR), and in an arm movement task. There are two major afferents onto cerebellar Purkinje cells, parallel fibers (PFs) and CFs, and the synaptic weights between PFs and Purkinje cells have been shown to be modulated by the stimulation of both types of fiber. The modulation of the synaptic weights is regulated by cerebellar synaptic plasticity. In this study we simulated cerebellar learning using CF signals as spikes instead of the conventional MFR. To generate the spikes we used the following four spike generation models: (1) a Poisson model, in which the spike interval probability follows a Poisson distribution; (2) a gamma model, in which the spike interval probability follows a gamma distribution; (3) a max model, in which a spike is generated when the synaptic input reaches its maximum; and (4) a threshold model, in which a spike is generated when the input crosses a certain small threshold. We found that, in an OFR task with a constant visual velocity, learning was successful with the stochastic models (Poisson and gamma), but not with the deterministic models (max and threshold). In an OFR task with a stepwise velocity change and in an arm movement task, learning could be achieved only with the Poisson model. In addition, for efficient cerebellar learning, the distribution of CF spike
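The first of the four spike-generation schemes, the Poisson model, can be sketched as a Bernoulli approximation; the rate and bin width below are illustrative choices matching the ~1 Hz CF regime, not the study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spikes(rate_hz, dt, t_end):
    """Bernoulli approximation of a Poisson spike train: one draw per
    bin of width dt, spike probability rate_hz * dt (valid when
    rate_hz * dt << 1)."""
    n_bins = int(t_end / dt)
    return rng.random(n_bins) < rate_hz * dt

# CF-like ultra-low rate: ~1 Hz, 1 ms bins, 100 s of simulated time
spikes = poisson_spikes(rate_hz=1.0, dt=1e-3, t_end=100.0)
print(spikes.sum())  # about 100 spikes on average (Poisson count)
```

The gamma model would instead draw inter-spike intervals from a gamma distribution, which regularizes the train relative to the memoryless Poisson case.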

  17. Activity-Dependent Neurorehabilitation Beyond Physical Trainings: "Mental Exercise" Through Mirror Neuron Activation.

    Science.gov (United States)

    Yuan, Ti-Fei; Chen, Wei; Shan, Chunlei; Rocha, Nuno; Arias-Carrión, Oscar; Paes, Flávia; de Sá, Alberto Souza; Machado, Sergio

    2015-01-01

    The activity-dependent brain repair mechanism has been widely adopted in many types of neurorehabilitation. The activity leads to target-specific and non-specific beneficial effects in different brain regions, such as the release of neurotrophic factors, modulation of cytokines and generation of new neurons in adulthood. However, clinical physical exercise programs are limited to patients with preserved motor function, while many patients suffering from paralysis cannot make such efforts. Here the authors propose employing the mirror neuron system to promote brain rehabilitation by "observation-based stimulation". The mirror neuron system has been considered an important basis for action understanding and for learning by mimicking others. During action observation, the mirror neuron system mediates the direct activation of the same group of motor neurons that are responsible for the observed action. The effect is clear, direct, specific and evolutionarily conserved. Moreover, recent evidence hints at beneficial effects on stroke patients after mirror neuron system activation therapy. Finally, some music-based therapies are proposed as engaging the mirror neuron system.

  18. (S)Pot on Mitochondria: Cannabinoids Disrupt Cellular Respiration to Limit Neuronal Activity.

    Science.gov (United States)

    Harkany, Tibor; Horvath, Tamas L

    2017-01-10

    Classical views posit G protein-coupled cannabinoid receptor 1s (CB1Rs) at the cell surface with cytosolic Giα-mediated signal transduction. Hebert-Chatelain et al. (2016) instead place CB1Rs at mitochondria, limiting neuronal respiration by soluble adenylyl cyclase-dependent modulation of complex I activity. Thus, neuronal bioenergetics link to synaptic plasticity and, globally, to learning and memory. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Decreased α1-adrenergic receptor-mediated inositide hydrolysis in neurons from hypertensive rat brain

    International Nuclear Information System (INIS)

    Feldstein, J.B.; Gonzales, R.A.; Baker, S.P.; Sumners, C.; Crews, F.T.; Raizada, M.K.

    1986-01-01

    The expression of α1-adrenergic receptors and norepinephrine (NE)-stimulated hydrolysis of inositol phospholipid has been studied in neuronal cultures from the brains of normotensive (Wistar-Kyoto, WKY) and spontaneously hypertensive (SH) rats. Binding of 125I-1-[β-(4-hydroxyphenyl)-ethyl-aminomethyl] tetralone (HEAT) to neuronal membranes was 68-85% specific and was rapid. Competition-inhibition experiments with various agonists and antagonists suggested that 125I-HEAT bound selectively to α1-adrenergic receptors. Specific binding of 125I-HEAT to neuronal membranes from SH rat brain cultures was 30-45% higher compared with binding in WKY normotensive controls. This increase was attributed to an increase in the number of α1-adrenergic receptors on SH rat brain neurons. Incubation of neuronal cultures of rat brain from both strains with NE resulted in a concentration-dependent stimulation of release of inositol phosphates, although neurons from SH rat brains were 40% less responsive compared with WKY controls. The decrease in responsiveness of SH rat brain neurons to NE, even though the α1-adrenergic receptors are increased, does not appear to be due to a general defect in membrane receptors and postreceptor signal transduction mechanisms. This is because neither the number of muscarinic-cholinergic receptors nor the carbachol-stimulated release of inositol phosphates is different in neuronal cultures from the brains of SH rats compared with neuronal cultures from the brains of WKY rats. These observations suggest that the increased expression of α1-adrenergic receptors does not parallel the receptor-mediated inositol phosphate hydrolysis in neuronal cultures from SH rat brain.

  20. Tracking the fear memory engram: discrete populations of neurons within amygdala, hypothalamus, and lateral septum are specifically activated by auditory fear conditioning

    Science.gov (United States)

    Wilson, Yvette M.; Gunnersen, Jenny M.; Murphy, Mark

    2015-01-01

    Memory formation is thought to occur via enhanced synaptic connectivity between populations of neurons in the brain. However, it has been difficult to localize and identify the neurons that are directly involved in the formation of any specific memory. We have previously used fos-tau-lacZ (FTL) transgenic mice to identify discrete populations of neurons in amygdala and hypothalamus, which were specifically activated by fear conditioning to a context. Here we have examined neuronal activation due to fear conditioning to a more specific auditory cue. Discrete populations of learning-specific neurons were identified in only a small number of locations in the brain, including those previously found to be activated in amygdala and hypothalamus by context fear conditioning. These populations, each containing only a relatively small number of neurons, may be directly involved in fear learning and memory. PMID:26179231

  1. Learning and coding in biological neural networks

    Science.gov (United States)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebra finch song. Simulation and
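A toy version of such a reward-modulated rule (correlating local activity noise with a global reward signal) can be sketched for a single linear unit; the target mapping, constants and baseline scheme below are hypothetical illustrations, not the thesis's spiking model.

```python
import numpy as np

# Single linear unit: weight change = eta * (reward - baseline) * noise * input.
# In expectation this performs stochastic gradient ascent on the reward.
rng = np.random.default_rng(1)
w = rng.normal(size=3)                    # plastic weights
w_target = np.array([0.5, -1.0, 2.0])     # mapping that maximizes reward
eta, sigma = 0.005, 1.0                   # learning rate, exploration noise
r_baseline = 0.0                          # running estimate of mean reward
for _ in range(6000):
    x = rng.normal(size=3)                # input pattern
    xi = rng.normal(scale=sigma)          # noisy perturbation of activity
    y = w @ x + xi                        # unit output
    r = -(y - w_target @ x) ** 2          # global scalar reward
    w += eta * (r - r_baseline) * xi * x  # correlate noise with reward
    r_baseline += 0.1 * (r - r_baseline)  # slowly adapting baseline
print(np.linalg.norm(w - w_target))       # small after learning
```

The baseline does not change the expected update, but subtracting it reduces the variance of the weight changes, which is what makes this perturbation-based ascent practical.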

  2. Stochastic Variational Learning in Recurrent Spiking Networks

    Directory of Open Access Journals (Sweden)

    Danilo eJimenez Rezende

    2014-04-01

    Full Text Available The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step towards understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.

  3. Stochastic variational learning in recurrent spiking networks.

    Science.gov (United States)

    Jimenez Rezende, Danilo; Gerstner, Wulfram

    2014-01-01

    The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step toward understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.
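The general shape of such a modulated plasticity rule (a local eligibility trace gated by a global third factor) can be sketched as follows; the "novelty" signal, spike rates and constants are placeholders for illustration, not the authors' derivation.

```python
import numpy as np

# Eligibility trace e accumulates on pre-before-post pairings and decays;
# the weight only changes while a global "novelty" factor gates it on.
rng = np.random.default_rng(2)
T = 1000                                  # 1 ms steps
tau_e, eta = 200.0, 0.001                 # trace time constant (ms), rate
pre = rng.random(T) < 0.05                # ~50 Hz Poisson pre spikes
post = rng.random(T) < 0.05               # ~50 Hz Poisson post spikes
w, e, last_pre = 0.5, 0.0, -10**9
for t in range(T):
    if pre[t]:
        last_pre = t
    if post[t] and t - last_pre < 20:     # causal pairing within 20 ms
        e += 1.0                          # potentiating eligibility
    e *= np.exp(-1.0 / tau_e)             # trace decay
    novelty = 1.0 if t < 500 else 0.0     # placeholder global factor
    w += eta * novelty * e                # third factor gates plasticity
print(w)  # increased above 0.5 only while the novelty gate was open
```

A full three-factor rule would also include depressing pairings; this sketch keeps only the potentiating branch to show the gating structure.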

  4. Long-term memory in Aplysia modulates the total number of varicosities of single identified sensory neurons.

    OpenAIRE

    Bailey, C H; Chen, M

    1988-01-01

    The morphological consequences of long-term habituation and sensitization of the gill withdrawal reflex in Aplysia californica were explored by examining the total number of presynaptic varicosities of single identified sensory neurons (a critical site of plasticity for the biochemical and biophysical changes that underlie both types of learning) in control and behaviorally trained animals. Sensory neurons from habituated animals had 35% fewer synaptic varicosities than did sensory neurons from...

  5. Error-backpropagation in temporally encoded networks of spiking neurons

    NARCIS (Netherlands)

    S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)

    2000-01-01

    For a network of spiking neurons that encodes information in the timing of individual spike-times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm,

  6. Kappe neurons, a novel population of olfactory sensory neurons.

    Science.gov (United States)

    Ahuja, Gaurav; Bozorg Nia, Shahrzad; Zapilko, Veronika; Shiriagin, Vladimir; Kowatschew, Daniel; Oka, Yuichiro; Korsching, Sigrun I

    2014-02-10

    Perception of olfactory stimuli is mediated by distinct populations of olfactory sensory neurons, each with a characteristic set of morphological as well as functional parameters. Beyond two large populations of ciliated and microvillous neurons, a third population, crypt neurons, has been identified in teleost and cartilaginous fishes. We report here a novel, fourth olfactory sensory neuron population in zebrafish, which we named kappe neurons for their characteristic shape. Kappe neurons are identified by their Go-like immunoreactivity, and show a distinct spatial distribution within the olfactory epithelium, similar to, but significantly different from, that of crypt neurons. Furthermore, kappe neurons project to a single identified target glomerulus within the olfactory bulb, mdg5 of the mediodorsal cluster, whereas crypt neurons are known to project exclusively to the mdg2 glomerulus. Kappe neurons are negative for established markers of ciliated, microvillous and crypt neurons, but appear to have microvilli. Kappe neurons constitute the fourth type of olfactory sensory neuron reported in teleost fishes, and their existence suggests that encoding of olfactory stimuli may require greater complexity than hitherto assumed, even in the peripheral olfactory system.

  7. Neuronal representations of stimulus associations develop in the temporal lobe during learning

    OpenAIRE

    Messinger, Adam; Squire, Larry R.; Zola, Stuart M.; Albright, Thomas D.

    2001-01-01

    Visual stimuli that are frequently seen together become associated in long-term memory, such that the sight of one stimulus readily brings to mind the thought or image of the other. It has been hypothesized that acquisition of such long-term associative memories proceeds via the strengthening of connections between neurons representing the associated stimuli, such that a neuron initially responding only to one stimulus of an associated pair eventually comes to respond to both. Consistent with...

  8. Hippocampal neurons respond uniquely to topographies of various sizes and shapes

    International Nuclear Information System (INIS)

    Fozdar, David Y; Chen Shaochen; Lee, Jae Young; Schmidt, Christine E

    2010-01-01

    A number of studies have investigated the behavior of neurons on microfabricated topography for the purpose of developing interfaces for use in neural engineering applications. However, there have been few studies simultaneously exploring the effects of topographies having various feature sizes and shapes on axon growth and polarization in the first 24 h. Accordingly, here we investigated the effects of arrays of lines (ridge grooves) and holes of microscale (∼2 μm) and nanoscale (∼300 nm) dimensions, patterned in quartz (SiO2), on the (1) adhesion, (2) axon establishment (polarization), (3) axon length, (4) axon alignment and (5) cell morphology of rat embryonic hippocampal neurons, to study the response of the neurons to feature dimension and geometry. Neurons were analyzed using optical and scanning electron microscopy. The topographies were found to have a negligible effect on cell attachment but to cause a marked increase in axon polarization, occurring more frequently on sub-microscale features than on microscale features. Neurons were observed to form longer axons on lines than on holes and smooth surfaces; axons were aligned either parallel or perpendicular to the line features. An analysis of cell morphology indicated that the surface features impacted the morphologies of the soma, axon and growth cone. The results suggest that incorporating microscale and sub-microscale topographies on biomaterial surfaces may enhance the biomaterials' ability to modulate nerve development and regeneration.

  9. New supervised learning theory applied to cerebellar modeling for suppression of variability of saccade end points.

    Science.gov (United States)

    Fujita, Masahiko

    2013-06-01

    A new supervised learning theory is proposed for a hierarchical neural network with a single hidden layer of threshold units, which can approximate any continuous transformation, and applied to a cerebellar function to suppress the end-point variability of saccades. In motor systems, feedback control can reduce noise effects if the noise is added in a pathway from a motor center to a peripheral effector; however, it cannot reduce noise effects if the noise is generated in the motor center itself: a new control scheme is necessary for such noise. The cerebellar cortex is well known as a supervised learning system, and a novel theory of cerebellar cortical function developed in this study can explain the capability of the cerebellum to feedforwardly reduce noise effects, such as end-point variability of saccades. This theory assumes that a Golgi-granule cell system can encode the strength of a mossy fiber input as the state of neuronal activity of parallel fibers. By combining these parallel fiber signals with appropriate connection weights to produce a Purkinje cell output, an arbitrary continuous input-output relationship can be obtained. By incorporating such flexible computation and learning ability in a process of saccadic gain adaptation, a new control scheme in which the cerebellar cortex feedforwardly suppresses the end-point variability when it detects a variation in saccadic commands can be devised. Computer simulation confirmed the efficiency of such learning and showed a reduction in the variability of saccadic end points, similar to results obtained from experimental data.
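The core premise that a layer of fixed threshold units plus learned output weights can approximate an arbitrary continuous map can be illustrated with a random-features fit; the sizes, weight distributions and target function below are arbitrary assumptions, not the paper's Golgi-granule model.

```python
import numpy as np

# Fixed random threshold units (the "granule/parallel fiber" layer) plus a
# learned linear readout (the "Purkinje" weights) fit a continuous function.
rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 400)[:, None]       # 1-D input samples
y = np.sin(3.0 * x[:, 0])                      # target continuous map
n_hidden = 200
W = rng.normal(scale=5.0, size=(1, n_hidden))  # fixed input weights
b = rng.uniform(-5.0, 5.0, size=n_hidden)      # fixed thresholds
H = (x @ W + b > 0.0).astype(float)            # binary threshold activities
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # supervised readout learning
rmse = np.sqrt(np.mean((H @ w_out - y) ** 2))
print(rmse)  # small: the step-function dictionary approximates sin well
```

Only the readout is trained, mirroring the theory's assumption that plasticity at the parallel fiber-Purkinje synapse suffices for arbitrary input-output relationships.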

  10. Human-level control through deep reinforcement learning

    Science.gov (United States)

    Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis

    2015-02-01

    The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
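The agent's core learning step is the temporal-difference update toward the target y = r + γ·max_a′ Q(s′, a′). A minimal sketch using a lookup table in place of the deep network, on a hypothetical five-cell corridor task (the environment, parameters and names here are illustrative toys, not the paper's code, which also uses experience replay and a separate target network):

```python
import random

def q_learning(episodes=3000, alpha=0.2, gamma=0.95, eps=0.1):
    """Tabular Q-learning on a 1-D corridor of 5 cells; reward at the right end.

    DQN replaces the table Q[s][a] with a deep network trained on the same
    target y = r + gamma * max_a' Q(s', a').
    """
    n, goal = 5, 4
    Q = [[0.0, 0.0] for _ in range(n)]   # actions: 0 = left, 1 = right
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else (0 if Q[s][0] > Q[s][1] else 1)
            s2 = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
            r = 1.0 if s2 == goal else 0.0
            # TD target; bootstrapping is cut off at the terminal state
            target = r + gamma * max(Q[s2]) * (s2 != goal)
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = q_learning()
greedy = [0 if q[0] > q[1] else 1 for q in Q[:-1]]   # learned policy: always "right"
```

After training, the greedy policy moves right in every non-terminal state, and Q[3][1] approaches the immediate reward of 1.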

  11. Human-level control through deep reinforcement learning.

    Science.gov (United States)

    Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A; Veness, Joel; Bellemare, Marc G; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis

    2015-02-26

    The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

  12. Generation of Regionally Specified Neural Progenitors and Functional Neurons from Human Embryonic Stem Cells under Defined Conditions

    Directory of Open Access Journals (Sweden)

    Agnete Kirkeby

    2012-06-01

Full Text Available To model human neural-cell-fate specification and to provide cells for regenerative therapies, we have developed a method to generate human neural progenitors and neurons from human embryonic stem cells, which recapitulates human fetal brain development. Through the addition of a small molecule that activates canonical WNT signaling, we induced rapid and efficient dose-dependent specification of regionally defined neural progenitors ranging from telencephalic forebrain to posterior hindbrain fates. Ten days after initiation of differentiation, the progenitors could be transplanted to the adult rat striatum, where they formed neuron-rich and tumor-free grafts with maintained regional specification. Cells patterned toward a ventral midbrain (VM) identity generated a high proportion of authentic dopaminergic neurons after transplantation. The dopamine neurons showed morphology, projection pattern, and protein expression identical to that of human fetal VM cells grafted in parallel. VM-patterned but not forebrain-patterned neurons released dopamine and reversed motor deficits in an animal model of Parkinson's disease.

  13. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    Science.gov (United States)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray Y-MP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.

  14. Rapid learning in visual cortical networks.

    Science.gov (United States)

    Wang, Ye; Dragoi, Valentin

    2015-08-26

Although changes in brain activity during learning have been extensively examined at the single-neuron level, the coding strategies employed by cell populations remain mysterious. We examined cell populations in macaque area V4 during a rapid form of perceptual learning that emerges within tens of minutes. Multiple single units and LFP responses were recorded as monkeys improved their performance in an image discrimination task. We show that the increase in behavioral performance during learning is predicted by a tight coordination of spike timing with local population activity. Greater spike-LFP theta synchronization was correlated with higher learning performance, whereas high-frequency synchronization was unrelated to changes in performance; these changes were absent once learning had stabilized and stimuli became familiar, or in the absence of learning. These findings reveal a novel mechanism of plasticity in visual cortex by which elevated low-frequency synchronization between individual neurons and local population activity accompanies the improvement in performance during learning.

  15. Learning and Best Practices for Learning in Open-Source Software Communities

    Science.gov (United States)

    Singh, Vandana; Holt, Lila

    2013-01-01

This research is about participants who use open-source software (OSS) discussion forums for learning. Learning in online communities of education as well as non-education-related online communities has been studied under the lens of social learning theory and situated learning for a long time. In this research, we draw parallels between these two…

  16. A Tractable Method for Describing Complex Couplings between Neurons and Population Rate.

    Science.gov (United States)

    Gardella, Christophe; Marre, Olivier; Mora, Thierry

    2016-01-01

    Neurons within a population are strongly correlated, but how to simply capture these correlations is still a matter of debate. Recent studies have shown that the activity of each cell is influenced by the population rate, defined as the summed activity of all neurons in the population. However, an explicit, tractable model for these interactions is still lacking. Here we build a probabilistic model of population activity that reproduces the firing rate of each cell, the distribution of the population rate, and the linear coupling between them. This model is tractable, meaning that its parameters can be learned in a few seconds on a standard computer even for large population recordings. We inferred our model for a population of 160 neurons in the salamander retina. In this population, single-cell firing rates depended in unexpected ways on the population rate. In particular, some cells had a preferred population rate at which they were most likely to fire. These complex dependencies could not be explained by a linear coupling between the cell and the population rate. We designed a more general, still tractable model that could fully account for these nonlinear dependencies. We thus provide a simple and computationally tractable way to learn models that reproduce the dependence of each neuron on the population rate.
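The raw quantities such a model is fit to can be sketched directly from a spike raster: the population rate per time bin, and each cell's firing probability conditioned on the rate of the remaining cells. A toy illustration on synthetic, gain-modulated data (the data, parameters and names below are invented for illustration; this is not the authors' inference code):

```python
import random

rng = random.Random(1)
n_cells, n_bins = 20, 5000

# Toy raster: a shared gain that switches between a low and a high state
# couples all cells to one another (hypothetical data).
raster = []
for _ in range(n_bins):
    g = 0.05 if rng.random() < 0.5 else 0.25
    raster.append([1 if rng.random() < g else 0 for _ in range(n_cells)])

# Population rate in each time bin = summed activity of all cells.
pop_rate = [sum(bin_) for bin_ in raster]

def coupling(cell):
    """Empirical firing probability of `cell` conditioned on the summed
    rate of the *other* cells: the kind of (possibly nonlinear)
    dependence the model is built to reproduce."""
    counts, spikes = {}, {}
    for t in range(n_bins):
        k = pop_rate[t] - raster[t][cell]   # exclude the cell itself
        counts[k] = counts.get(k, 0) + 1
        spikes[k] = spikes.get(k, 0) + raster[t][cell]
    return {k: spikes[k] / counts[k] for k in sorted(counts)}

c0 = coupling(0)   # maps "rate of the rest" -> firing probability of cell 0
```

On this synthetic raster the conditional firing probability rises with the population rate; real retinal data, per the abstract, can show richer shapes such as a preferred population rate.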

  17. Dopamine, reward learning, and active inference.

    Science.gov (United States)

    FitzGerald, Thomas H B; Dolan, Raymond J; Friston, Karl

    2015-01-01

Temporal difference learning models propose that phasic dopamine signaling encodes reward prediction errors that drive learning. This is supported by studies where optogenetic stimulation of dopamine neurons can stand in lieu of actual reward. Nevertheless, a large body of data also shows that dopamine is not necessary for learning, and that dopamine depletion primarily affects task performance. We offer a resolution to this paradox based on the hypothesis that dopamine encodes the precision of beliefs about alternative actions, and thus controls the outcome-sensitivity of behavior. We extend an active inference scheme for solving Markov decision processes to include learning, and show that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning. Furthermore, simulated dopamine depletion impairs performance but spares learning, while simulated excitation of dopamine neurons drives reward learning, through aberrant inference about outcome states. Our formal approach provides a novel and parsimonious reconciliation of apparently divergent experimental findings.
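The phasic signal at issue is the temporal-difference reward prediction error δ = r + γV(s′) − V(s). A minimal tabular TD(0) sketch on a hypothetical three-state cue → delay → reward chain (a textbook toy, not the paper's active-inference model):

```python
def td_learning(episodes=2000, alpha=0.1, gamma=0.9):
    """Tabular TD(0) on a three-state chain: cue -> delay -> reward."""
    V = [0.0, 0.0, 0.0]        # value estimates for states 0, 1, 2
    reward = [0.0, 0.0, 1.0]   # reward delivered on entering each state
    for _ in range(episodes):
        for s in range(2):     # deterministic transition s -> s+1
            # delta is the reward prediction error proposed to parallel
            # phasic dopamine responses
            delta = reward[s + 1] + gamma * V[s + 1] - V[s]
            V[s] += alpha * delta
    return V

V = td_learning()
```

As learning converges, V(delay) approaches the reward value 1 and V(cue) approaches γ·1 = 0.9, so the prediction error (and, on the TD account, the phasic dopamine burst) migrates from the reward to the predictive cue.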

  18. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    Science.gov (United States)

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows the dynamics of a biologically inspired neural network to be steered. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other works, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system.
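A caricature of the LSA principle can be reduced to a single plastic connection: external stimulation persists until the output neuron fires, and Hebbian strengthening of the drive shortens future stimulation. The sketch below is a deliberately minimal probabilistic toy (all parameters invented for illustration), not the paper's spiking-network simulation:

```python
import random

rng = random.Random(2)
w = 0.1                      # input -> output synaptic weight
eta = 0.05                   # Hebbian learning rate
durations = []               # stimulation time per trial

for trial in range(200):
    t = 0
    while True:
        t += 1                       # one time step of external stimulation
        if rng.random() < w:         # output fires with probability w
            break                    # firing removes the stimulation
    # pre-then-post pairing strengthens the weight (bounded at 1)
    w = min(1.0, w + eta * (1 - w))
    durations.append(t)

early = sum(durations[:20]) / 20     # mean stimulation time, first trials
late = sum(durations[-20:]) / 20     # mean stimulation time, last trials
```

As the weight grows, the output fires sooner and the network spends less time under stimulation, which is the "avoidance" dynamic the principle names.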

  19. Acetylcholine and Olfactory Perceptual Learning

    Science.gov (United States)

    Wilson, Donald A.; Fletcher, Max L.; Sullivan, Regina M.

    2004-01-01

    Olfactory perceptual learning is a relatively long-term, learned increase in perceptual acuity, and has been described in both humans and animals. Data from recent electrophysiological studies have indicated that olfactory perceptual learning may be correlated with changes in odorant receptive fields of neurons in the olfactory bulb and piriform…

  20. Toward a Neurocentric View of Learning.

    Science.gov (United States)

    Titley, Heather K; Brunel, Nicolas; Hansel, Christian

    2017-07-05

Synaptic plasticity (e.g., long-term potentiation [LTP]) is considered the cellular correlate of learning. Recent optogenetic studies on memory engram formation assign a critical role in learning to suprathreshold activation of neurons and their integration into active engrams ("engram cells"). Here we review evidence that ensemble integration may result from LTP but also from cell-autonomous changes in membrane excitability. We propose that synaptic plasticity determines synaptic connectivity maps, whereas intrinsic plasticity, possibly separated in time, amplifies neuronal responsiveness and acutely drives engram integration. Our proposal marks a move away from an exclusively synaptocentric toward a non-exclusive, neurocentric view of learning. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. BDNF heightens the sensitivity of motor neurons to excitotoxic insults through activation of TrkB

    Science.gov (United States)

    Hu, Peter; Kalb, Robert G.; Walton, K. D. (Principal Investigator)

    2003-01-01

The survival promoting and neuroprotective actions of brain-derived neurotrophic factor (BDNF) are well known but under certain circumstances this growth factor can also exacerbate excitotoxic insults to neurons. Prior exploration of the receptor through which BDNF exerts this action on motor neurons deflects attention away from p75. Here we investigated the possibility that BDNF acts through the receptor tyrosine kinase, TrkB, to confer on motor neurons sensitivity to excitotoxic challenge. We blocked BDNF activation of TrkB using a dominant negative TrkB mutant or a TrkB function blocking antibody, and found that this protected motor neurons against excitotoxic insult in cultures of mixed spinal cord neurons. Addition of a function blocking antibody to BDNF to mixed spinal cord neuron cultures is also neuroprotective, indicating that endogenously produced BDNF participates in vulnerability to excitotoxicity. We next examined the intracellular signaling cascades that are engaged upon TrkB activation. Previously we found that inhibition of the phosphatidylinositide-3'-kinase (PI3'K) pathway blocks BDNF-induced excitotoxic sensitivity. Here we show that expression of a constitutively active catalytic subunit of PI3'K, p110, confers excitotoxic sensitivity (ES) upon motor neurons not incubated with BDNF. Parallel studies with purified motor neurons confirm that these events are likely to be occurring specifically within motor neurons. The abrogation of BDNF's capacity to accentuate excitotoxic insults may make it a more attractive neuroprotective agent.

  2. Spatio-temporal specialization of GABAergic septo-hippocampal neurons for rhythmic network activity.

    Science.gov (United States)

    Unal, Gunes; Crump, Michael G; Viney, Tim J; Éltes, Tímea; Katona, Linda; Klausberger, Thomas; Somogyi, Peter

    2018-03-03

    Medial septal GABAergic neurons of the basal forebrain innervate the hippocampus and related cortical areas, contributing to the coordination of network activity, such as theta oscillations and sharp wave-ripple events, via a preferential innervation of GABAergic interneurons. Individual medial septal neurons display diverse activity patterns, which may be related to their termination in different cortical areas and/or to the different types of innervated interneurons. To test these hypotheses, we extracellularly recorded and juxtacellularly labeled single medial septal neurons in anesthetized rats in vivo during hippocampal theta and ripple oscillations, traced their axons to distant cortical target areas, and analyzed their postsynaptic interneurons. Medial septal GABAergic neurons exhibiting different hippocampal theta phase preferences and/or sharp wave-ripple related activity terminated in restricted hippocampal regions, and selectively targeted a limited number of interneuron types, as established on the basis of molecular markers. We demonstrate the preferential innervation of bistratified cells in CA1 and of basket cells in CA3 by individual axons. One group of septal neurons was suppressed during sharp wave-ripples, maintained their firing rate across theta and non-theta network states and mainly fired along the descending phase of CA1 theta oscillations. In contrast, neurons that were active during sharp wave-ripples increased their firing significantly during "theta" compared to "non-theta" states, with most firing during the ascending phase of theta oscillations. These results demonstrate that specialized septal GABAergic neurons contribute to the coordination of network activity through parallel, target area- and cell type-selective projections to the hippocampus.

  3. Kappe neurons, a novel population of olfactory sensory neurons

    OpenAIRE

    Ahuja, Gaurav; Nia, Shahrzad Bozorg; Zapilko, Veronika; Shiriagin, Vladimir; Kowatschew, Daniel; Oka, Yuichiro; Korsching, Sigrun I.

    2014-01-01

    Perception of olfactory stimuli is mediated by distinct populations of olfactory sensory neurons, each with a characteristic set of morphological as well as functional parameters. Beyond two large populations of ciliated and microvillous neurons, a third population, crypt neurons, has been identified in teleost and cartilaginous fishes. We report here a novel, fourth olfactory sensory neuron population in zebrafish, which we named kappe neurons for their characteristic shape. Kappe neurons ar...

  4. The neurotoxicant PCB-95 by increasing the neuronal transcriptional repressor REST down-regulates caspase-8 and increases Ripk1, Ripk3 and MLKL expression determining necroptotic neuronal death.

    Science.gov (United States)

    Guida, Natascia; Laudati, Giusy; Serani, Angelo; Mascolo, Luigi; Molinaro, Pasquale; Montuori, Paolo; Di Renzo, Gianfranco; Canzoniero, Lorella M T; Formisano, Luigi

    2017-10-15

    Our previous study showed that the environmental neurotoxicant non-dioxin-like polychlorinated biphenyl (PCB)-95 increases RE1-silencing transcription factor (REST) expression, which is related to necrosis, but not apoptosis, of neurons. Meanwhile, necroptosis is a type of a programmed necrosis that is positively regulated by receptor interacting protein kinase 1 (RIPK1), RIPK3 and mixed lineage kinase domain-like (MLKL) and negatively regulated by caspase-8. Here we evaluated whether necroptosis contributes to PCB-95-induced neuronal death through REST up-regulation. Our results demonstrated that in cortical neurons PCB-95 increased RIPK1, RIPK3, and MLKL expression and decreased caspase-8 at the gene and protein level. Furthermore, the RIPK1 inhibitor necrostatin-1 or siRNA-mediated RIPK1, RIPK3 and MLKL expression knockdown significantly reduced PCB-95-induced neuronal death. Intriguingly, PCB-95-induced increases in RIPK1, RIPK3, MLKL expression and decreases in caspase-8 expression were reversed by knockdown of REST expression with a REST-specific siRNA (siREST). Notably, in silico analysis of the rat genome identified a REST consensus sequence in the caspase-8 gene promoter (Casp8-RE1), but not the RIPK1, RIPK3 and MLKL promoters. Interestingly, in PCB-95-treated neurons, REST binding to the Casp8-RE1 sequence increased in parallel with a reduction in its promoter activity, whereas under the same experimental conditions, transfection of siREST or mutation of the Casp8-RE1 sequence blocked PCB-95-induced caspase-8 reduction. Since RIPK1, RIPK3 and MLKL rat genes showed no putative REST binding site, we assessed whether the transcription factor cAMP Responsive Element Binding Protein (CREB), which has a consensus sequence in all three genes, affected neuronal death. In neurons treated with PCB-95, CREB protein expression decreased in parallel with a reduction in binding to the RIPK1, RIPK3 and MLKL gene promoter sequence. Furthermore, CREB overexpression was

  5. Metastable states in the hierarchical Dyson model drive parallel processing in the hierarchical Hopfield network

    International Nuclear Information System (INIS)

    Agliari, Elena; Barra, Adriano; Guerra, Francesco; Galluzzi, Andrea; Tantari, Daniele; Tavani, Flavia

    2015-01-01

In this paper, we introduce and investigate the statistical mechanics of hierarchical neural networks. First, we approach these systems à la Mattis, by thinking of the Dyson model as a single-pattern hierarchical neural network. We also discuss the stability of different retrievable states as predicted by the related self-consistencies obtained both from a mean-field bound and from a bound that bypasses the mean-field limitation. The latter is worked out by properly reabsorbing the magnetization fluctuations related to higher levels of the hierarchy into effective fields for the lower levels. Remarkably, mixing Amit's ansatz technique for selecting candidate-retrievable states with the interpolation procedure for solving for the free energy of these states, we prove that, due to gauge symmetry, the Dyson model accomplishes both serial and parallel processing. We extend this scenario to multiple stored patterns by implementing the Hebb prescription for learning within the couplings. This results in Hopfield-like networks constrained on a hierarchical topology, for which, by restricting to the low-storage regime where the number of patterns grows at most logarithmically with the number of neurons, we prove the existence of the thermodynamic limit for the free energy, and we give an explicit expression of its mean-field bound and of its related improved bound. The resulting self-consistencies for the Mattis magnetizations, which act as order parameters, are studied, and the stability of solutions is analyzed to get a picture of the overall retrieval capabilities of the system according to both mean-field and non-mean-field scenarios. Our main finding is that embedding the Hebbian rule on a hierarchical topology allows the network to accomplish both serial and parallel processing. By tuning the level of fast noise affecting it or triggering the decay of the interactions with the distance among neurons, the system may switch from sequential retrieval to
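The Hebb prescription mentioned in the abstract can be illustrated with an ordinary (flat-topology) Hopfield network in the low-storage regime; the hierarchical Dyson structure studied in the paper is omitted for brevity, and the patterns below are invented for the example:

```python
# Minimal Hopfield network with the Hebb prescription:
# J_ij = (1/n) * sum_mu xi_i^mu xi_j^mu, with J_ii = 0.
n = 16
patterns = [
    [1, 1, 1, 1, -1, -1, -1, -1] * 2,   # pattern 0
    [1, -1] * 8,                        # pattern 1 (orthogonal to pattern 0)
]
J = [[0.0 if i == j else sum(p[i] * p[j] for p in patterns) / n
      for j in range(n)] for i in range(n)]

def retrieve(state, sweeps=5):
    """Zero-temperature sequential dynamics: each spin aligns with its field."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(J[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

# Corrupt pattern 0 in three spins and recover it from the attractor.
probe = list(patterns[0])
for i in (0, 5, 11):
    probe[i] *= -1
out = retrieve(probe)
```

With only two stored patterns on sixteen neurons (well inside the low-storage regime), the corrupted probe falls back into the stored pattern, and each stored pattern is itself a fixed point of the dynamics.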

  6. Study on cognition disorder and morphologic change of neurons in hippocampus area following traumatic brain injury in rats

    Institute of Scientific and Technical Information of China (English)

    洪军; 崔建忠; 周云涛; 高俊玲

    2002-01-01

Objective: To explore the correlation between cognition disorder and morphologic changes of hippocampal neurons after traumatic brain injury (TBI). Methods: Wistar rat models of severe TBI were made by Marmarou's method. The histopathological changes of the neurons in the hippocampus were studied with hematoxylin-eosin (HE) staining and terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL), respectively. Cognitive function was evaluated with the Morris water maze test. Results: Extensive neuronal degeneration and necrosis could be observed in CA2-3 regions of the hippocampus at 3 days after injury. Apoptotic neurons in CA2-4 regions of the hippocampus and dentate gyrus increased in the injured group at 24 hours following TBI, peaked at 7 days and then declined. Significant impairment of spatial learning and memory was observed in the rats after injury. Conclusions: The rats have obvious disorders in spatial learning and memory after severe TBI. Meanwhile, delayed neuronal necrosis and apoptosis can be observed in the hippocampus. This suggests that delayed hippocampal cell death may contribute to the functional deficit.

  7. On Empathy: The Mirror Neuron System and Art Education

    Science.gov (United States)

    Jeffers, Carol S.

    2009-01-01

    This paper re/considers empathy and its implications for learning in the art classroom, particularly in light of relevant neuroscientific investigations of the mirror neuron system recently discovered in the human brain. These investigations reinterpret the meaning of perception, resonance, and connection, and point to the fundamental importance…

  8. A supervised learning rule for classification of spatiotemporal spike patterns.

    Science.gov (United States)

    Lilin Guo; Zhenzhong Wang; Adjouadi, Malek

    2016-08-01

This study introduces a novel supervised algorithm for spiking neurons that takes into consideration synaptic delays and axonal delays associated with weights. It can be utilized for both classification and association, and uses several biologically influenced properties, such as axonal and synaptic delays. The algorithm also incorporates spike-timing-dependent plasticity, as in the Remote Supervised Method (ReSuMe). This paper focuses on the classification aspect alone. Spiking neurons trained according to the proposed learning rule are capable of classifying different categories by the associated sequences of precisely timed spikes. Simulation results show that the proposed learning method greatly improves classification accuracy when compared to the Spike Pattern Association Neuron (SPAN) and the tempotron learning rule.

  9. Primetime for Learning Genes.

    Science.gov (United States)

    Keifer, Joyce

    2017-02-11

Learning genes in mature neurons are uniquely suited to respond rapidly to specific environmental stimuli. Expression of individual learning genes, therefore, requires regulatory mechanisms that have the flexibility to respond with transcriptional activation or repression to select appropriate physiological and behavioral responses. Among the mechanisms that equip genes to respond adaptively are bivalent domains. These are specific histone modifications localized to gene promoters that are characteristic of both gene activation and repression, and have been studied primarily for developmental genes in embryonic stem cells. In this review, studies of the epigenetic regulation of learning genes in neurons, particularly the brain-derived neurotrophic factor gene (BDNF), by methylation/demethylation and chromatin modifications in the context of learning and memory will be highlighted. Because of the unique function of learning genes in the mature brain, it is proposed that bivalent domains are a characteristic feature of the chromatin landscape surrounding their promoters. This allows them to be "poised" for rapid response to activate or repress gene expression depending on environmental stimuli.

  10. Birth of projection neurons in adult avian brain may be related to perceptual or motor learning

    International Nuclear Information System (INIS)

    Alvarez-Buylla, A.; Kirn, J.R.; Nottebohm, F.

    1990-01-01

    Projection neurons that form part of the motor pathway for song control continue to be produced and to replace older projection neurons in adult canaries and zebra finches. This is shown by combining [3H]thymidine, a cell birth marker, and fluorogold, a retrogradely transported tracer of neuronal connectivity. Species and seasonal comparisons suggest that this process is related to the acquisition of perceptual or motor memories. The ability of an adult brain to produce and replace projection neurons should influence our thinking on brain repair

  11. Prenatal Nicotine Exposure Impairs the Proliferation of Neuronal Progenitors, Leading to Fewer Glutamatergic Neurons in the Medial Prefrontal Cortex

    Science.gov (United States)

    Aoyama, Yuki; Toriumi, Kazuya; Mouri, Akihiro; Hattori, Tomoya; Ueda, Eriko; Shimato, Akane; Sakakibara, Nami; Soh, Yuka; Mamiya, Takayoshi; Nagai, Taku; Kim, Hyoung-Chun; Hiramatsu, Masayuki; Nabeshima, Toshitaka; Yamada, Kiyofumi

    2016-01-01

    Cigarette smoking during pregnancy is associated with various disabilities in the offspring such as attention deficit/hyperactivity disorder, learning disabilities, and persistent anxiety. We have reported that nicotine exposure in female mice during pregnancy, in particular from embryonic day 14 (E14) to postnatal day 0 (P0), induces long-lasting behavioral deficits in offspring. However, the mechanism by which prenatal nicotine exposure (PNE) affects neurodevelopment, resulting in behavioral deficits, has remained unclear. Here, we report that PNE disrupted the proliferation of neuronal progenitors, leading to a decrease in the progenitor pool in the ventricular and subventricular zones. In addition, using a cumulative 5-bromo-2′-deoxyuridine labeling assay, we evaluated the rate of cell cycle progression causing the impairment of neuronal progenitor proliferation, and uncovered anomalous cell cycle kinetics in mice with PNE. Accordingly, the density of glutamatergic neurons in the medial prefrontal cortex (medial PFC) was reduced, implying glutamatergic dysregulation. Mice with PNE exhibited behavioral impairments in attentional function and behavioral flexibility in adulthood, and the deficits were ameliorated by microinjection of D-cycloserine into the PFC. Collectively, our findings suggest that PNE affects the proliferation and maturation of progenitor cells to glutamatergic neuron during neurodevelopment in the medial PFC, which may be associated with cognitive deficits in the offspring. PMID:26105135

  12. Action observation and mirror neuron network: a tool for motor stroke rehabilitation.

    Science.gov (United States)

    Sale, P; Franceschini, M

    2012-06-01

Mirror neurons are a specific class of neurons that are activated and discharge both during observation of the same or a similar motor act performed by another individual and during the execution of a motor act. Different studies based on noninvasive neuroelectrophysiological assessment or functional brain imaging techniques have demonstrated the presence of mirror neurons and their mechanism in humans. Various authors have demonstrated that in humans these networks are activated when individuals learn motor actions via execution (as in traditional motor learning), imitation, observation (as in observational learning) and motor imagery. Activation of these brain areas (the inferior parietal lobe and the ventral premotor cortex, as well as the caudal part of the inferior frontal gyrus [IFG]) following observation or motor imagery may thereby facilitate subsequent movement execution by directly matching the observed or imagined action to the internal simulation of that action. It is therefore believed that this multi-sensory action-observation system enables individuals to (re)learn impaired motor functions through the activation of these internal action-related representations. In humans, the mirror mechanism is also located in various brain regions: in Broca's area, which is involved in language processing and speech production, and not only in centres that mediate voluntary movement but also in cortical areas that mediate visceromotor emotion-related behaviours. On the basis of these findings, during the last 10 years various studies were carried out regarding the clinical use of action observation for motor rehabilitation of sub-acute and chronic stroke patients.

  13. Dopamine, reward learning, and active inference

    Directory of Open Access Journals (Sweden)

    Thomas eFitzgerald

    2015-11-01

    Temporal difference learning models propose that phasic dopamine signalling encodes reward prediction errors that drive learning. This is supported by studies where optogenetic stimulation of dopamine neurons can stand in lieu of actual reward. Nevertheless, a large body of data also shows that dopamine is not necessary for learning, and that dopamine depletion primarily affects task performance. We offer a resolution to this paradox based on the hypothesis that dopamine encodes the precision of beliefs about alternative actions, and thus controls the outcome-sensitivity of behaviour. We extend an active inference scheme for solving Markov decision processes to include learning, and show that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning. Furthermore, simulated dopamine depletion impairs performance but spares learning, while simulated excitation of dopamine neurons drives reward learning through aberrant inference about outcome states. Our formal approach provides a novel and parsimonious reconciliation of apparently divergent experimental findings.
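The temporal-difference account contrasted in this abstract can be made concrete in a few lines. The following is a generic TD(0) sketch of a reward prediction error, not the paper's active inference model; the state names, learning rate, and discount factor are illustrative assumptions.

```python
from collections import defaultdict

def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD(0) update; returns the reward prediction error (delta),
    the quantity phasic dopamine firing is proposed to encode."""
    delta = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * delta
    return delta

V = defaultdict(float)
# Repeated cue -> reward pairings: the predictive cue itself acquires value,
# mirroring the shift of dopamine responses from reward to cue.
for _ in range(200):
    td_update(V, "cue", 0.0, "reward_state")   # cue precedes reward
    td_update(V, "reward_state", 1.0, "end")   # reward delivered
```

After training, `V["reward_state"]` approaches 1 and `V["cue"]` approaches the discounted value 0.9, so the prediction error at reward time vanishes while the cue carries predictive value.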

  14. Neuron-glia metabolic coupling and plasticity

    OpenAIRE

    Magistretti PJ

    2011-01-01

    The focus of the current research projects in my laboratory revolves around the question of the metabolic plasticity of neuron-glia coupling. Our hypothesis is that behavioural conditions in which synaptic plasticity is well documented, such as learning or the sleep-wake cycle, as well as specific pathological conditions, are accompanied by changes in the regulation of the energy metabolism of astrocytes. We have indeed observed that the 'metabolic profile' of astrocytes is modified...

  15. Sweet taste and nutrient value subdivide rewarding dopaminergic neurons in Drosophila.

    Science.gov (United States)

    Huetteroth, Wolf; Perisse, Emmanuel; Lin, Suewei; Klappenbach, Martín; Burke, Christopher; Waddell, Scott

    2015-03-16

    Dopaminergic neurons provide reward learning signals in mammals and insects [1-4]. Recent work in Drosophila has demonstrated that water-reinforcing dopaminergic neurons are distinct from those for nutritious sugars [5]. Here, we tested whether the sweet taste and nutrient properties of sugar reinforcement further subdivide the fly reward system. We found that dopaminergic neurons expressing the OAMB octopamine receptor [6] specifically convey the short-term reinforcing effects of sweet taste [4]. These dopaminergic neurons project to the β'2 and γ4 regions of the mushroom body lobes. In contrast, nutrient-dependent long-term memory requires different dopaminergic neurons that project to the γ5b regions, and it can be artificially reinforced by those projecting to the β lobe and adjacent α1 region. Surprisingly, whereas artificial implantation and expression of short-term memory occur in satiated flies, formation and expression of artificial long-term memory require flies to be hungry. These studies suggest that short-term and long-term sugar memories have different physiological constraints. They also demonstrate further functional heterogeneity within the rewarding dopaminergic neuron population. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Environmental enrichment protects spatial learning and hippocampal neurons from the long-lasting effects of protein malnutrition early in life.

    Science.gov (United States)

    Soares, Roberto O; Horiquini-Barbosa, Everton; Almeida, Sebastião S; Lachat, João-José

    2017-09-29

    As early protein malnutrition has a critically long-lasting impact on the hippocampal formation and its role in learning and memory, and environmental enrichment has demonstrated great success in ameliorating functional deficits, here we ask whether exposure to an enriched environment could be employed to prevent spatial memory impairment and neuroanatomical changes in the hippocampus of adult rats maintained on a protein deficient diet during brain development (P0-P35). To elucidate the protective effects of environmental enrichment, we used the Morris water task and neuroanatomical analysis to determine whether changes in spatial memory and number and size of CA1 neurons differed significantly among groups. Protein malnutrition and environmental enrichment during brain development had significant effects on the spatial memory and hippocampal anatomy of adult rats. Malnourished but non-enriched rats (MN) required more time to find the hidden platform than well-nourished but non-enriched rats (WN). Malnourished but enriched rats (ME) performed better than the MN and similarly to the WN rats. There was no difference between well-nourished but non-enriched and enriched rats (WE). Anatomically, fewer CA1 neurons were found in the hippocampus of MN rats than in those of WN rats. However, it was also observed that ME and WN rats retained a similar number of neurons. These results suggest that environmental enrichment during brain development alters cognitive task performance and hippocampal neuroanatomy in a manner that is neuroprotective against malnutrition-induced brain injury. These results could have significant implications for malnourished infants expected to be at risk of disturbed brain development. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Vasculo-Neuronal Coupling: Retrograde Vascular Communication to Brain Neurons.

    Science.gov (United States)

    Kim, Ki Jung; Ramiro Diaz, Juan; Iddings, Jennifer A; Filosa, Jessica A

    2016-12-14

    Continuous cerebral blood flow is essential for neuronal survival, but whether vascular tone influences resting neuronal function is not known. Using a multidisciplinary approach in both rat and mouse brain slices, we determined whether flow/pressure-evoked increases or decreases in parenchymal arteriole vascular tone, which result in arteriole constriction and dilation, respectively, altered resting cortical pyramidal neuron activity. We present evidence for intercellular communication in the brain involving a flow of information from vessel to astrocyte to neuron, a direction opposite to that of classic neurovascular coupling and referred to here as vasculo-neuronal coupling (VNC). Flow/pressure increases within parenchymal arterioles increased vascular tone and simultaneously decreased resting pyramidal neuron firing activity. Conversely, flow/pressure decreases evoked parenchymal arteriole dilation and increased resting pyramidal neuron firing activity. In GLAST-CreERT2; R26-lsl-GCaMP3 mice, we demonstrate that increased parenchymal arteriole tone significantly increased intracellular calcium in perivascular astrocyte processes; the onset of astrocyte calcium changes preceded the inhibition of cortical pyramidal neuron firing activity. During increases in parenchymal arteriole tone, the pyramidal neuron response was unaffected by blockers of nitric oxide, GABA-A, glutamate, or ecto-ATPase. However, VNC was abrogated by TRPV4 channel, GABA-B, and adenosine A1 receptor blockers. In contrast to pyramidal neuron responses, increases in flow/pressure within parenchymal arterioles increased the firing activity of a subtype of interneuron. Together, these data suggest that VNC is a complex, constitutively active process that enables neurons to efficiently adjust their resting activity according to brain perfusion levels, thus safeguarding cellular homeostasis by preventing mismatches between energy supply and demand.

  18. Logic Learning in Hopfield Networks

    OpenAIRE

    Sathasivam, Saratha; Abdullah, Wan Ahmad Tajuddin Wan

    2008-01-01

    Synaptic weights for neurons in logic programming can be calculated either by using Hebbian learning or by Wan Abdullah's method. In other words, Hebbian learning governing events that correspond to particular program clauses is equivalent to learning the same clauses using Wan Abdullah's method. In this paper we evaluate the equivalence between these two types of learning experimentally, through computer simulations.
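As a concrete illustration of the Hebbian side of this comparison, here is a minimal Hopfield network with Hebbian (outer-product) weight storage and sign-threshold recall. This is generic textbook Hebbian learning, not Wan Abdullah's method; the patterns and network size are arbitrary choices for the sketch.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian (outer-product) storage; patterns: (P, N) array of +/-1."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)       # no self-connections
    return W

def recall(W, state, steps=10):
    """Synchronous sign-threshold updates for a fixed number of steps."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1      # break ties toward +1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])
W = hebbian_weights(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]               # corrupt one bit
restored = recall(W, noisy)        # settles back to the stored pattern
```

The network relaxes from the corrupted state to the nearest stored pattern, which is the content-addressable behavior both learning methods aim to produce.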

  19. Effectiveness of telemedicine and distance learning applications for patients with chronic heart failure. A protocol for prospective parallel group non-randomised open label study

    OpenAIRE

    Vanagas, Giedrius; Umbrasienė, Jelena; Šlapikas, Rimvydas

    2012-01-01

    Introduction Chronic heart failure in the Baltic Sea Region is responsible for more hospitalisations than all forms of cancer combined and is one of the leading causes of hospitalisation in elderly patients. Frequent hospitalisations, along with other direct and indirect costs, place a financial burden on healthcare systems. We aim to test the hypothesis that telemedicine and distance learning applications are superior to the current standard of home care. Methods and analysis Prospective parallel ...

  20. Modeling the Development of Goal-Specificity in Mirror Neurons.

    Science.gov (United States)

    Thill, Serge; Svensson, Henrik; Ziemke, Tom

    2011-12-01

    Neurophysiological studies have shown that parietal mirror neurons encode not only actions but also the goal of these actions. Although some mirror neurons will fire whenever a certain action is perceived (goal-independently), most will only fire if the motion is perceived as part of an action with a specific goal. This result is important for the action-understanding hypothesis as it provides a potential neurological basis for such a cognitive ability. It is also relevant for the design of artificial cognitive systems, in particular robotic systems that rely on computational models of the mirror system in their interaction with other agents. Yet, to date, no computational model has explicitly addressed the mechanisms that give rise to both goal-specific and goal-independent parietal mirror neurons. In the present paper, we present a computational model based on a self-organizing map, which receives artificial inputs representing information about both the observed or executed actions and the context in which they were executed. We show that the map develops a biologically plausible organization in which goal-specific mirror neurons emerge. We further show that the fundamental cause for both the appearance and the number of goal-specific neurons can be found in geometric relationships between the different inputs to the map. The results are important to the action-understanding hypothesis as they provide a mechanism for the emergence of goal-specific parietal mirror neurons and lead to a number of predictions: (1) Learning of new goals may mostly reassign existing goal-specific neurons rather than recruit new ones; (2) input differences between executed and observed actions can explain observed corresponding differences in the number of goal-specific neurons; and (3) the percentage of goal-specific neurons may differ between motion primitives.
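A minimal sketch of the kind of self-organizing map described above: each unit receives a concatenated "action" and "context" input, and the map organizes topographically so that some units come to respond to action-context conjunctions. The grid size, learning schedule, and input coding here are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=8, epochs=20, lr0=0.5, sigma0=3.0):
    """Train a 2-D self-organizing map; returns (grid*grid, dim) weights."""
    n_units, dim = grid * grid, data.shape[1]
    W = rng.random((n_units, dim))
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], dtype=float)
    t, t_max = 0, epochs * len(data)
    for _ in range(epochs):
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)  # grid distances
            lr = lr0 * (1.0 - t / t_max)                    # decaying rate
            sigma = sigma0 * (1.0 - t / t_max) + 0.5        # shrinking radius
            W += lr * np.exp(-d2 / (2 * sigma**2))[:, None] * (x - W)
            t += 1
    return W

# Toy input: a 4-dim "action" code concatenated with a 2-dim "context" code.
actions, contexts = np.eye(4), np.eye(2)
data = np.array([np.concatenate([a, c]) for a in actions for c in contexts])
W = train_som(data)
```

After training, distinct action-context combinations map onto distinct best-matching units, the analogue of goal-specific responses emerging from the geometry of the inputs.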

  1. Hypoglycemia: Role of Hypothalamic Glucose-Inhibited (GI) Neurons in Detection and Correction.

    Science.gov (United States)

    Zhou, Chunxue; Teegala, Suraj B; Khan, Bilal A; Gonzalez, Christina; Routh, Vanessa H

    2018-01-01

    Hypoglycemia is a profound threat to the brain since glucose is its primary fuel. As a result, glucose sensors are widely located in the central nervous system and periphery. In this perspective we will focus on the role of hypothalamic glucose-inhibited (GI) neurons in sensing and correcting hypoglycemia. In particular, we will discuss GI neurons in the ventromedial hypothalamus (VMH) which express neuronal nitric oxide synthase (nNOS) and in the perifornical hypothalamus (PFH) which express orexin. The ability of VMH nNOS-GI neurons to depolarize in low glucose closely parallels the hormonal response to hypoglycemia which stimulates gluconeogenesis. We have found that nitric oxide (NO) production in low glucose is dependent on oxidative status. In this perspective we will discuss the potential relevance of our work showing that enhancing the glutathione antioxidant system prevents hypoglycemia-associated autonomic failure (HAAF) in non-diabetic rats whereas VMH overexpression of the thioredoxin antioxidant system restores hypoglycemia counterregulation in rats with type 1 diabetes. We will also address the potential role of the orexin-GI neurons in the arousal response needed for hypoglycemia awareness which leads to behavioral correction (e.g., food intake, glucose administration). The potential relationship between the hypothalamic sensors and the neurocircuitry in the hindbrain and portal mesenteric vein which is critical for hypoglycemia correction will then be discussed.

  2. Hypoglycemia: Role of Hypothalamic Glucose-Inhibited (GI) Neurons in Detection and Correction

    Directory of Open Access Journals (Sweden)

    Chunxue Zhou

    2018-03-01

    Hypoglycemia is a profound threat to the brain since glucose is its primary fuel. As a result, glucose sensors are widely located in the central nervous system and periphery. In this perspective we will focus on the role of hypothalamic glucose-inhibited (GI) neurons in sensing and correcting hypoglycemia. In particular, we will discuss GI neurons in the ventromedial hypothalamus (VMH) which express neuronal nitric oxide synthase (nNOS) and in the perifornical hypothalamus (PFH) which express orexin. The ability of VMH nNOS-GI neurons to depolarize in low glucose closely parallels the hormonal response to hypoglycemia which stimulates gluconeogenesis. We have found that nitric oxide (NO) production in low glucose is dependent on oxidative status. In this perspective we will discuss the potential relevance of our work showing that enhancing the glutathione antioxidant system prevents hypoglycemia-associated autonomic failure (HAAF) in non-diabetic rats whereas VMH overexpression of the thioredoxin antioxidant system restores hypoglycemia counterregulation in rats with type 1 diabetes. We will also address the potential role of the orexin-GI neurons in the arousal response needed for hypoglycemia awareness which leads to behavioral correction (e.g., food intake, glucose administration). The potential relationship between the hypothalamic sensors and the neurocircuitry in the hindbrain and portal mesenteric vein which is critical for hypoglycemia correction will then be discussed.

  3. Purines and Neuronal Excitability: Links to the Ketogenic Diet

    Science.gov (United States)

    Masino, SA; Kawamura, M; Ruskin, DN; Geiger, JD; Boison, D

    2011-01-01

    ATP and adenosine are purines that play dual roles in cell metabolism and neuronal signaling. Acting at the A1 receptor (A1R) subtype, adenosine directly inhibits neuronal excitability and is a powerful endogenous neuroprotective and anticonvulsant molecule. Previous research showed an increase in ATP and other cell energy parameters when animals are administered a ketogenic diet, an established metabolic therapy to reduce epileptic seizures, but the relationship among purines, neuronal excitability and the ketogenic diet was unclear. Recent work in vivo and in vitro tested the specific hypothesis that adenosine acting at A1Rs is a key mechanism underlying the success of ketogenic diet therapy and yielded direct evidence linking A1Rs to the antiepileptic effects of a ketogenic diet. Specifically, an in vitro mimic of a ketogenic diet revealed an A1R-dependent metabolic autocrine hyperpolarization of hippocampal neurons. In parallel, applying the ketogenic diet in vivo to transgenic mouse models with spontaneous electrographic seizures revealed that intact A1Rs are necessary for the seizure-suppressing effects of the diet. This is the first direct in vivo evidence linking A1Rs to the antiepileptic effects of a ketogenic diet. Other predictions of the relationship between purines and the ketogenic diet are discussed. Taken together, recent research on the role of purines may offer new opportunities for metabolic therapy and insight into its underlying mechanisms. PMID:21880467

  4. Interactions between Brainstem Noradrenergic Neurons and the Nucleus Accumbens Shell in Modulating Memory for Emotionally Arousing Events

    Science.gov (United States)

    Kerfoot, Erin C.; Williams, Cedric L.

    2011-01-01

    The nucleus accumbens shell (NAC) receives axons containing dopamine-β-hydroxylase that originate from brainstem neurons in the nucleus of the solitary tract (NTS). Recent findings show that memory enhancement produced by stimulating NTS neurons after learning may involve interactions with the NAC. However, it is unclear whether these…

  5. Asymmetric cell division and Notch signaling specify dopaminergic neurons in Drosophila.

    Directory of Open Access Journals (Sweden)

    Murni Tio

    In Drosophila, dopaminergic (DA) neurons can be found from mid-embryonic stages of development until adulthood. Despite their functional involvement in learning and memory, not much is known about the developmental and molecular mechanisms involved in DA neuronal specification, differentiation and maturation. In this report we demonstrate that most larval DA neurons are generated during embryonic development. Furthermore, we show that loss-of-function (l-o-f) mutations of genes encoding apical complex proteins of the asymmetric cell division (ACD) machinery, such as inscuteable and bazooka, result in supernumerary DA neurons, whereas l-o-f mutations of genes encoding basal complex proteins such as numb result in loss or reduction of DA neurons. In addition, when Notch signaling is reduced or abolished, additional DA neurons are formed, and conversely, when Notch signaling is activated, fewer DA neurons are generated. Our data demonstrate that both ACD and Notch signaling are crucial mechanisms for DA neuronal specification. We propose a model in which ACD results in differential Notch activation in direct siblings, and in this context Notch acts as a repressor of DA neuronal specification in the sibling that receives active Notch signaling. Our study provides the first link between ACD and Notch signaling in the specification of a neurotransmitter phenotype in Drosophila. Given the high degree of conservation between Drosophila and vertebrate systems, this study could be of significance to mechanisms of DA neuronal differentiation not limited to flies.

  6. Learning Spatiotemporally Encoded Pattern Transformations in Structured Spiking Neural Networks.

    Science.gov (United States)

    Gardner, Brian; Sporea, Ioana; Grüning, André

    2015-12-01

    Information encoding in the nervous system is supported through the precise spike timings of neurons; however, an understanding of the underlying processes by which such representations are formed in the first place remains an open question. Here we examine how multilayered networks of spiking neurons can learn to encode for input patterns using a fully temporal coding scheme. To this end, we introduce a new supervised learning rule, MultilayerSpiker, that can train spiking networks containing hidden layer neurons to perform transformations between spatiotemporal input and output spike patterns. The performance of the proposed learning rule is demonstrated in terms of the number of pattern mappings it can learn, the complexity of network structures it can be used on, and its classification accuracy when using multispike-based encodings. In particular, the learning rule displays robustness against input noise and can generalize well on an example data set. Our approach contributes to both a systematic understanding of how computations might take place in the nervous system and a learning rule that displays strong technical capability.
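Since the learning rule above operates on precise spike timings, a minimal leaky integrate-and-fire (LIF) neuron illustrates how such spike times arise from input currents. This is a generic LIF sketch, not the paper's MultilayerSpiker rule; all parameter values are assumptions.

```python
import numpy as np

def lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: returns the list of spike times (seconds)."""
    v, spikes = 0.0, []
    for step, current in enumerate(input_current):
        v += dt * (-v / tau + current)   # leaky integration of the input
        if v >= v_thresh:
            spikes.append(step * dt)     # record a precisely timed spike
            v = v_reset                  # reset after firing
    return spikes

# A constant suprathreshold drive yields regular, precisely timed spikes;
# varying the drive shifts the spike times, which is the temporal code
# a multilayer spiking network can learn to transform.
spikes = lif(np.full(1000, 80.0))
```

With this drive the membrane charges to threshold every 20 steps, so one second of input produces a regular train of 50 spikes.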

  7. mGluR5 ablation in cortical glutamatergic neurons increases novelty-induced locomotion.

    Directory of Open Access Journals (Sweden)

    Chris P Jew

    The group I metabotropic glutamate receptor 5 (mGluR5) has been implicated in the pathology of various neurological disorders including schizophrenia, ADHD, and autism. mGluR5-dependent synaptic plasticity has been described at a variety of neural connections and its signaling has been implicated in several behaviors. These behaviors include locomotor reactivity to a novel environment, sensorimotor gating, anxiety, and cognition. mGluR5 is expressed in glutamatergic neurons, inhibitory neurons, and glia in various brain regions. In this study, we show that deleting mGluR5 expression only in principal cortical neurons leads to defective cannabinoid receptor 1 (CB1R)-dependent synaptic plasticity in the prefrontal cortex. These cortical glutamatergic mGluR5 knockout mice exhibit increased novelty-induced locomotion, and their locomotion can be further enhanced by treatment with the psychostimulant methylphenidate. Despite a modest reduction in repetitive behaviors, cortical glutamatergic mGluR5 knockout mice are normal in sensorimotor gating, anxiety, motor balance/learning and fear conditioning behaviors. These results show that mGluR5 signaling in cortical glutamatergic neurons is required for precisely modulating locomotor reactivity to a novel environment but not for sensorimotor gating, anxiety, motor coordination, several forms of learning or social interactions.

  8. Versatile Networks of Simulated Spiking Neurons Displaying Winner-Take-All Behavior

    Directory of Open Access Journals (Sweden)

    Yanqing eChen

    2013-03-01

    We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid Brain-Based-Device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.

  9. Versatile networks of simulated spiking neurons displaying winner-take-all behavior.

    Science.gov (United States)

    Chen, Yanqing; McKinstry, Jeffrey L; Edelman, Gerald M

    2013-01-01

    We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid brain-based-device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.

  10. Bright light exposure reduces TH-positive dopamine neurons: implications of light pollution in Parkinson's disease epidemiology.

    Science.gov (United States)

    Romeo, Stefania; Viaggi, Cristina; Di Camillo, Daniela; Willis, Allison W; Lozzi, Luca; Rocchi, Cristina; Capannolo, Marta; Aloisi, Gabriella; Vaglini, Francesca; Maccarone, Rita; Caleo, Matteo; Missale, Cristina; Racette, Brad A; Corsini, Giovanni U; Maggio, Roberto

    2013-01-01

    This study explores the effect of continuous exposure to bright light on neuromelanin formation and dopamine neuron survival in the substantia nigra. Twenty-one days after birth, Sprague-Dawley albino rats were divided into groups and raised under different conditions of light exposure. At the end of the irradiation period, rats were sacrificed and assayed for neuromelanin formation and the number of tyrosine hydroxylase (TH)-positive neurons in the substantia nigra. The rats exposed to bright light for 20 or 90 days showed a relatively greater number of neuromelanin-positive neurons. Surprisingly, TH-positive neurons decreased progressively in the substantia nigra, reaching a significant 29% reduction after 90 days of continuous bright light exposure. This decrease was paralleled by a diminution of dopamine and its metabolite in the striatum. Remarkably, in a preliminary analysis that accounted for population density, the age- and race-adjusted Parkinson's disease prevalence correlated significantly with average satellite-observed sky light pollution.

  11. [Mirror neurons: from anatomy to pathophysiological and therapeutic implications].

    Science.gov (United States)

    Mathon, B

    2013-04-01

    Mirror neurons are a special class of neurons discovered in the 1990s. They respond when we perform an action and also when we see someone else perform that action. They play a role in the pathophysiology of some neuropsychiatric diseases. Mirror neurons have been identified in humans: in Broca's area and the inferior parietal cortex. Their responses are qualitative and selective depending on the observed action. Emotions (including disgust) and empathy seem to operate according to a mirror mechanism. Indeed, the mirror system allows us to encode the sensory experience and to simulate the emotional state of others, resulting in improved identification of the emotions of others. Additionally, mirror neurons can encode an observed action in motor terms and allow its reproduction; thus, they are involved in imitation and learning. Current studies are assessing the role of mirror neurons in the pathophysiology of social-behavior disorders, including autism and schizophrenia. Understanding this mirror system will allow us to develop psychotherapy practices based on empathic resonance between the patient and the therapist. Also, some authors report that a passive rehabilitation technique based on stimulation of the mirror-neuron system has a beneficial effect in the treatment of patients with post-stroke motor deficits. Mirror neurons are an anatomical entity that enables improved understanding of behavior and emotions, and serves as a basis for developing new cognitive therapies. Additional studies are needed to clarify the exact role of this neuronal system in social cognition and its role in the development of some neuropsychiatric diseases. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  12. Visual motion-sensitive neurons in the bumblebee brain convey information about landmarks during a navigational task

    Directory of Open Access Journals (Sweden)

    Marcel eMertes

    2014-09-01

    Bees use visual memories to find the spatial location of previously learnt food sites. Characteristic learning flights help them acquire these memories at newly discovered foraging locations, where landmarks - salient objects in the vicinity of the goal location - can play an important role in guiding the animal's homing behavior. Although behavioral experiments have shown that bees can use a variety of visual cues to distinguish objects as landmarks, the question of how landmark features are encoded by the visual system is still open. Recently, it could be shown that motion cues are sufficient to allow bees to localize their goal using landmarks that can hardly be discriminated from the background texture. Here, we tested the hypothesis that motion-sensitive neurons in the bee's visual pathway provide information about such landmarks during a learning flight and might thus play a role in goal localization. We tracked learning flights of free-flying bumblebees (Bombus terrestris) in an arena with distinct visual landmarks, reconstructed the visual input during these flights, and replayed ego-perspective movies to tethered bumblebees while recording the activity of direction-selective wide-field neurons in their optic lobe. By comparing neuronal responses during a typical learning flight with targeted modifications of landmark properties in this movie, we demonstrate that these objects are indeed represented in the bee's visual motion pathway. We find that object-induced responses vary little with object texture, which is in agreement with behavioral evidence. These neurons thus convey information about landmark properties that are useful for view-based homing.

  13. Turbofan engine diagnostics neuron network size optimization method taking into account the overlearning effect

    Directory of Open Access Journals (Sweden)

    О.С. Якушенко

    2010-01-01

    The article is devoted to the problem of automatic recognition of gas turbine engine (GTE) technical state classes from operating parameters using neuron networks. One of the main problems in creating such networks is determining their optimal structure size (the number of layers in the network and the number of neurons in each layer). The article considers a method for optimizing the size of a neuron network intended for classifying GTE technical states. The optimization takes into account the possibility of the overlearning (overfitting) effect, in which a learning network loses its ability to generalize and begins merely to memorize the educational data set. To determine the moment when the overlearning effect appears, the method of three data sets is used. The method is based on comparing changes in recognition quality computed on the educational and control data sets: the moment of overlearning is taken to be the point at which recognition quality on the control data set begins to deteriorate while recognition quality on the educational data set continues to improve. To detect this moment, the learning process is periodically interrupted and the network is simulated on the educational and control data sets. The optimization of two-, three- and four-layer networks was conducted and some of the results are shown. An extended educational data set was also created; it describes 16 GTE technical state classes, each represented by 200 points (200 possible realizations of the class) instead of the 20 points used in earlier articles, in order to increase the representativeness of the data set. The article presents the optimization algorithm and the results obtained with it, which were analyzed to determine the most suitable neuron network structure providing the highest-quality GTE
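The three-data-set criterion described above is essentially early stopping on a control (validation) set: training halts once control error stops improving while training error still falls. Below is a minimal sketch with a linear model standing in for the GTE-diagnostic network; the data, sizes, learning rate, and patience value are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Toy regression problem standing in for GTE state recognition.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + rng.normal(scale=0.5, size=200)
X_tr, y_tr = X[:100], y[:100]          # educational (training) set
X_va, y_va = X[100:150], y[100:150]    # control (validation) set
X_te, y_te = X[150:], y[150:]          # independent test set

w = np.zeros(10)
best_w, best_va, patience = w.copy(), np.inf, 0
for _ in range(500):
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
    w -= 0.01 * grad                   # one "learning" step
    va = mse(w, X_va, y_va)            # periodic check on the control set
    if va < best_va:
        best_w, best_va, patience = w.copy(), va, 0
    else:
        patience += 1
        if patience >= 20:             # control error stopped improving
            break
```

The weights kept are those with the best control-set error, so the model returned is the one from just before overlearning would set in.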

  14. CXCL12-mediated feedback from granule neurons regulates generation and positioning of new neurons in the dentate gyrus.

    Science.gov (United States)

    Abe, Philipp; Wüst, Hannah M; Arnold, Sebastian J; van de Pavert, Serge A; Stumm, Ralf

    2018-03-14

    Adult hippocampal neurogenesis is implicated in learning and memory processing. It is tightly controlled at several levels including progenitor proliferation as well as migration, differentiation and integration of new neurons. Hippocampal progenitors and immature neurons reside in the subgranular zone (SGZ) and are equipped with the CXCL12-receptor CXCR4 which contributes to defining the SGZ as neurogenic niche. The atypical CXCL12-receptor CXCR7 functions primarily by sequestering extracellular CXCL12 but whether CXCR7 is involved in adult neurogenesis has not been assessed. We report that granule neurons (GN) upregulate CXCL12 and CXCR7 during dentate gyrus maturation in the second postnatal week. To test whether GN-derived CXCL12 regulates neurogenesis and if neuronal CXCR7 receptors influence this process, we conditionally deleted Cxcl12 and Cxcr7 from the granule cell layer. Cxcl12 deletion resulted in lower numbers, increased dispersion and abnormal dendritic growth of immature GN and reduced neurogenesis. Cxcr7 ablation caused an increase in progenitor proliferation and progenitor numbers and reduced dispersion of immature GN. Thus, we provide a new mechanism where CXCL12-signals from GN prevent dispersion and support maturation of newborn GN. CXCR7 receptors of GN modulate the CXCL12-mediated feedback from GN to the neurogenic niche. © 2018 Wiley Periodicals, Inc.

  15. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition.

    Directory of Open Access Journals (Sweden)

    Johannes Bill

    Full Text Available During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.
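    As a toy caricature of the local competitive aspect described above (and emphatically not a reproduction of the paper's variational model), one can simulate a small winner-take-all circuit: lateral inhibition lets a single cell spike per input, and a Hebbian-style update moves the winner's weights toward that input, so that cells self-organize into local experts. The patterns, sizes, and learning rate below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two noisy binary prototypes stand in for distributed spiking input.
protos = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                   [0, 0, 0, 0, 1, 1, 1, 1]], dtype=float)

def sample_input():
    p = protos[rng.integers(2)]
    return np.where(rng.random(p.size) < 0.1, 1.0 - p, p)  # 10% bit flips

K, lr = 2, 0.05
W = rng.normal(0.0, 0.1, (K, protos.shape[1]))  # weights of K competing cells

for _ in range(3000):
    x = sample_input()
    u = W @ x                                    # membrane potentials
    p = np.exp(u - u.max()); p /= p.sum()        # soft WTA via lateral inhibition
    k = rng.choice(K, p=p)                       # the single cell that spikes
    W[k] += lr * (x - W[k])                      # Hebbian-style winner update

winners = [int(np.argmax(W @ proto)) for proto in protos]
print("winning cell per prototype:", winners)
```

    After training, each prototype typically drives a different cell: the competing cells have specialized into complementary local experts, the toy analogue of the "probabilistic local experts" in the abstract.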

  16. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition

    Science.gov (United States)

    Bill, Johannes; Buesing, Lars; Habenschuss, Stefan; Nessler, Bernhard; Maass, Wolfgang; Legenstein, Robert

    2015-01-01

    During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input. PMID:26284370

  17. Learned Tactics for Asset Allocation

    Science.gov (United States)

    2013-06-01

    neuron within a plane’s movement radius is where the plane will move in the next hour. This process is repeated for each timestep to produce the patrol... octopus arm with a variable number of segments. In R. Schaefer, C. Cotta, J. Kołodziej, and G. Rudolph, editors, Parallel Problem Solving from Nature

  18. Dissecting Cell-Type Composition and Activity-Dependent Transcriptional State in Mammalian Brains by Massively Parallel Single-Nucleus RNA-Seq.

    Science.gov (United States)

    Hu, Peng; Fabyanic, Emily; Kwon, Deborah Y; Tang, Sheng; Zhou, Zhaolan; Wu, Hao

    2017-12-07

    Massively parallel single-cell RNA sequencing can precisely resolve cellular diversity in a high-throughput manner at low cost, but unbiased isolation of intact single cells from complex tissues such as adult mammalian brains is challenging. Here, we integrate sucrose-gradient-assisted purification of nuclei with droplet microfluidics to develop a highly scalable single-nucleus RNA-seq approach (sNucDrop-seq), which is free of enzymatic dissociation and nucleus sorting. By profiling ∼18,000 nuclei isolated from cortical tissues of adult mice, we demonstrate that sNucDrop-seq not only accurately reveals neuronal and non-neuronal subtype composition with high sensitivity but also enables in-depth analysis of transient transcriptional states driven by neuronal activity, at single-cell resolution, in vivo. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. How attention can create synaptic tags for the learning of working memories in sequential tasks.

    Directory of Open Access Journals (Sweden)

    Jaldert O Rombouts

    2015-03-01

    Full Text Available Intelligence is our ability to learn appropriate responses to new stimuli and situations. Neurons in association cortex are thought to be essential for this ability. During learning these neurons become tuned to relevant features and start to represent them with persistent activity during memory delays. This learning process is not well understood. Here we develop a biologically plausible learning scheme that explains how trial-and-error learning induces neuronal selectivity and working memory representations for task-relevant information. We propose that the response selection stage sends attentional feedback signals to earlier processing levels, forming synaptic tags at those connections responsible for the stimulus-response mapping. Globally released neuromodulators then interact with tagged synapses to determine their plasticity. The resulting learning rule endows neural networks with the capacity to create new working memory representations of task relevant information as persistent activity. It is remarkably generic: it explains how association neurons learn to store task-relevant information for linear as well as non-linear stimulus-response mappings, how they become tuned to category boundaries or analog variables, depending on the task demands, and how they learn to integrate probabilistic evidence for perceptual decisions.

  20. Single-cell axotomy of cultured hippocampal neurons integrated in neuronal circuits.

    Science.gov (United States)

    Gomis-Rüth, Susana; Stiess, Michael; Wierenga, Corette J; Meyn, Liane; Bradke, Frank

    2014-05-01

    An understanding of the molecular mechanisms of axon regeneration after injury is key for the development of potential therapies. Single-cell axotomy of dissociated neurons enables the study of the intrinsic regenerative capacities of injured axons. This protocol describes how to perform single-cell axotomy on dissociated hippocampal neurons containing synapses. Furthermore, to axotomize hippocampal neurons integrated in neuronal circuits, we describe how to set up coculture with a few fluorescently labeled neurons. This approach allows axotomy of single cells in a complex neuronal network and the observation of morphological and molecular changes during axon regeneration. Thus, single-cell axotomy of mature neurons is a valuable tool for gaining insights into cell intrinsic axon regeneration and the plasticity of neuronal polarity of mature neurons. Dissociation of the hippocampus and plating of hippocampal neurons takes ∼2 h. Neurons are then left to grow for 2 weeks, during which time they integrate into neuronal circuits. Subsequent axotomy takes 10 min per neuron and further imaging takes 10 min per neuron.

  1. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell, Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  2. Production and survival of projection neurons in a forebrain vocal center of adult male canaries

    International Nuclear Information System (INIS)

    Kirn, J.R.; Alvarez-Buylla, A.; Nottebohm, F.

    1991-01-01

    Neurons are produced in the adult canary telencephalon. Many of these cells are incorporated into the high vocal center (nucleus HVC), which participates in the control of learned song. In the present work, 3H-thymidine and fluorogold were employed to follow the differentiation and survival of HVC neurons born in adulthood. We found that many HVC neurons born in September grow long axons to the robust nucleus of the archistriatum (nucleus RA) and thus become part of the efferent pathway for song control. Many of these new neurons have already established their connections with RA by 30 d after their birth. By 240 d, 75-80% of the September-born HVC neurons project to RA. Most of these new projection neurons survive at least 8 months. The longevity of HVC neurons born in September suggests that these cells remain part of the vocal control circuit long enough to participate in the yearly renewal of the song repertoire

  3. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    Directory of Open Access Journals (Sweden)

    Jiayi Wu

    Full Text Available Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM. We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.

  4. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    Science.gov (United States)

    Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong

    2017-01-01

    Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.

  5. [3H]acetylcholine synthesis in cultured ciliary ganglion neurons: effects of myotube membranes

    International Nuclear Information System (INIS)

    Gray, D.B.; Tuttle, J.B.

    1987-01-01

    Avian ciliary ganglion neurons in cell culture were examined for the capacity to synthesize acetylcholine (ACh) from the exogenously supplied precursor, choline. Relevant kinetic parameters of the ACh synthetic system in cultured neurons were found to be virtually the same as those of the ganglionic terminals in the intact iris. Neurons were cultured in the presence of and allowed to innervate pectoral muscle; this results in an increased capacity for ACh synthesis. In particular, the ability to increase ACh synthesis upon demand after stimulation is affected by interaction with the target. This effect is shown to be an acceleration of the maturation of the cultured neurons. Lysed and washed membrane remnants of the muscle target were able to duplicate, in part, this effect of live target tissue on neuronal transmitter metabolism. Culture medium conditioned by muscle, and by the membrane remnants of muscle, was without significant effect. Thus, substances secreted into the medium do not play a major role in this interaction. Neurons cultured with either muscle or muscle membrane remnants formed large, elongate structures on the target membrane surface. These were not seen in the absence of the target at the times examined. This morphological difference in terminal-like structures may parallel the developmental increases in size and vesicular content of ciliary ganglion nerve terminals in the chick iris, and may relate to the increased ACh synthetic activity. The results suggest that direct contact with an appropriate target membrane has a profound, retrograde influence upon neuronal metabolic and morphological maturation

  6. Learning in the machine: The symmetries of the deep learning channel.

    Science.gov (United States)

    Baldi, Pierre; Sadowski, Peter; Lu, Zhiqin

    2017-11-01

    In a physical neural system, learning rules must be local both in space and time. In order for learning to occur, non-local information must be communicated to the deep synapses through a communication channel, the deep learning channel. We identify several possible architectures for this learning channel (Bidirectional, Conjoined, Twin, Distinct) and six symmetry challenges: (1) symmetry of architectures; (2) symmetry of weights; (3) symmetry of neurons; (4) symmetry of derivatives; (5) symmetry of processing; and (6) symmetry of learning rules. Random backpropagation (RBP) addresses the second and third symmetry, and some of its variations, such as skipped RBP (SRBP) address the first and the fourth symmetry. Here we address the last two desirable symmetries showing through simulations that they can be achieved and that the learning channel is particularly robust to symmetry variations. Specifically, random backpropagation and its variations can be performed with the same non-linear neurons used in the main input-output forward channel, and the connections in the learning channel can be adapted using the same algorithm used in the forward channel, removing the need for any specialized hardware in the learning channel. Finally, we provide mathematical results in simple cases showing that the learning equations in the forward and backward channels converge to fixed points, for almost any initial conditions. In symmetric architectures, if the weights in both channels are small at initialization, adaptation in both channels leads to weights that are essentially symmetric during and after learning. Biological connections are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Random synaptic feedback weights support error backpropagation for deep learning

    Science.gov (United States)

    Lillicrap, Timothy P.; Cownden, Daniel; Tweed, Douglas B.; Akerman, Colin J.

    2016-01-01

    The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning. PMID:27824044
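    The mechanism can be sketched in a few lines: the backward pass multiplies the output error by a fixed random matrix B instead of the transpose of the forward weights. The network, task, and hyperparameters below are illustrative assumptions, not the tasks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network: x -> h = tanh(W1 x) -> y = W2 h
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
B = rng.normal(0.0, 0.5, (n_hid, n_out))   # fixed random feedback weights

# A random linear teacher provides the targets.
T = rng.normal(0.0, 1.0, (n_out, n_in))
X = rng.normal(0.0, 1.0, (200, n_in))
Y = X @ T.T

lr, losses = 0.05, []
for _ in range(200):
    H = np.tanh(X @ W1.T)              # hidden activity
    E = H @ W2.T - Y                   # output error
    losses.append(float((E ** 2).mean()))
    dW2 = E.T @ H / len(X)             # output layer: true gradient
    dH = (E @ B.T) * (1.0 - H ** 2)    # hidden layer: error sent through B
    dW1 = dH.T @ X / len(X)
    W2 -= lr * dW2
    W1 -= lr * dW1

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

    Training still reduces the loss because the forward weights gradually come into alignment with the fixed feedback matrix, so the random backward pathway ends up carrying useful error information.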

  8. Acquisition, extinction, and recall of opiate reward memory are signaled by dynamic neuronal activity patterns in the prefrontal cortex.

    Science.gov (United States)

    Sun, Ninglei; Chi, Ning; Lauzon, Nicole; Bishop, Stephanie; Tan, Huibing; Laviolette, Steven R

    2011-12-01

    The medial prefrontal cortex (mPFC) comprises an important component in the neural circuitry underlying drug-related associative learning and memory processing. Neuronal activation within mPFC circuits is correlated with the recall of opiate-related drug-taking experiences in both humans and other animals. Using an unbiased associative place conditioning procedure, we recorded mPFC neuronal populations during the acquisition, recall, and extinction phases of morphine-related associative learning and memory. Our analyses revealed that mPFC neurons show increased activity in terms of both tonic and phasic activity patterns during the acquisition phase of opiate reward-related memory and demonstrate stimulus-locked associative activity changes in real time during the recall of opiate reward memories. Interestingly, mPFC neuronal populations demonstrated divergent patterns of bursting activity during the acquisition versus recall phases of newly acquired opiate reward memory, versus the extinction of these memories, with strongly increased bursting during the recall of an extinction memory and no associative bursting during the recall of a newly acquired opiate reward memory. Our results demonstrate that neurons within the mPFC are involved in the acquisition, recall, and extinction of opiate-related reward memories, showing unique patterns of tonic and phasic activity during these separate components of opiate-related reward learning and memory recall.

  9. Learning to learn - intrinsic plasticity as a metaplasticity mechanism for memory formation.

    Science.gov (United States)

    Sehgal, Megha; Song, Chenghui; Ehlers, Vanessa L; Moyer, James R

    2013-10-01

    "Use it or lose it" is a popular adage often associated with use-dependent enhancement of cognitive abilities. Much research has focused on understanding exactly how the brain changes as a function of experience. Such experience-dependent plasticity involves both structural and functional alterations that contribute to adaptive behaviors, such as learning and memory, as well as maladaptive behaviors, including anxiety disorders, phobias, and posttraumatic stress disorder. With the advancing age of our population, understanding how use-dependent plasticity changes across the lifespan may also help to promote healthy brain aging. A common misconception is that such experience-dependent plasticity (e.g., associative learning) is synonymous with synaptic plasticity. Other forms of plasticity also play a critical role in shaping adaptive changes within the nervous system, including intrinsic plasticity - a change in the intrinsic excitability of a neuron. Intrinsic plasticity can result from a change in the number, distribution or activity of various ion channels located throughout the neuron. Here, we review evidence that intrinsic plasticity is an important and evolutionarily conserved neural correlate of learning. Intrinsic plasticity acts as a metaplasticity mechanism by lowering the threshold for synaptic changes. Thus, learning-related intrinsic changes can facilitate future synaptic plasticity and learning. Such intrinsic changes can impact the allocation of a memory trace within a brain structure, and when compromised, can contribute to cognitive decline during the aging process. This unique role of intrinsic excitability can provide insight into how memories are formed and, more interestingly, how neurons that participate in a memory trace are selected. 
Most importantly, modulation of intrinsic excitability can allow for regulation of learning ability - this can prevent or provide treatment for cognitive decline not only in patients with clinical disorders but

  10. Central serotonergic neurons activate and recruit thermogenic brown and beige fat and regulate glucose and lipid homeostasis.

    Science.gov (United States)

    McGlashon, Jacob M; Gorecki, Michelle C; Kozlowski, Amanda E; Thirnbeck, Caitlin K; Markan, Kathleen R; Leslie, Kirstie L; Kotas, Maya E; Potthoff, Matthew J; Richerson, George B; Gillum, Matthew P

    2015-05-05

    Thermogenic brown and beige adipocytes convert chemical energy to heat by metabolizing glucose and lipids. Serotonin (5-HT) neurons in the CNS are essential for thermoregulation and accordingly may control metabolic activity of thermogenic fat. To test this, we generated mice in which the human diphtheria toxin receptor (DTR) was selectively expressed in central 5-HT neurons. Treatment with diphtheria toxin (DT) eliminated 5-HT neurons and caused loss of thermoregulation, brown adipose tissue (BAT) steatosis, and a >50% decrease in uncoupling protein 1 (Ucp1) expression in BAT and inguinal white adipose tissue (WAT). In parallel, blood glucose increased 3.5-fold, free fatty acids 13.4-fold, and triglycerides 6.5-fold. Similar BAT and beige fat defects occurred in Lmx1b(f/f)ePet1(Cre) mice in which 5-HT neurons fail to develop in utero. We conclude 5-HT neurons play a major role in regulating glucose and lipid homeostasis, in part through recruitment and metabolic activation of brown and beige adipocytes. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Dissociation between sensitization and learning-related neuromodulation in an aplysiid species.

    Science.gov (United States)

    Erixon, N J; Demartini, L J; Wright, W G

    1999-06-14

    Previous phylogenetic analyses of learning and memory in an opisthobranch lineage uncovered a correlation between two learning-related neuromodulatory traits and their associated behavioral phenotypes. In particular, serotonin-induced increases in sensory neuron spike duration and excitability, which are thought to underlie several facilitatory forms of learning in Aplysia, appear to have been lost over the course of evolution in a distantly related aplysiid, Dolabrifera dolabrifera. This deficit is paralleled by a behavioral deficit: individuals of Dolabrifera do not express generalized sensitization (reflex enhancement of an unhabituated response after a noxious stimulus is applied outside of the reflex receptive field) or dishabituation (reflex enhancement of a habituated reflex). The goal of the present study was to confirm and extend this correlation by testing for the neuromodulatory traits and generalized sensitization in an additional species, Phyllaplysia taylori, which is closely related to Dolabrifera. Instead, our results indicated a lack of correlation between the neuromodulatory and behavioral phenotypes. In particular, sensory neuron homologues in Phyllaplysia showed the ancestral neuromodulatory phenotype typified by Aplysia. Bath-applied 10 microM serotonin significantly increased homologue spike duration and excitability. However, when trained with the identical apparatus and protocols that produced generalized sensitization in Aplysia, individuals of Phyllaplysia showed no evidence of sensitization. Thus, this species expresses the neuromodulatory phenotype of its ancestors while appearing to express the behavioral phenotype of its near relative. These results suggest that generalized sensitization can be lost during the course of evolution in the absence of a deficit in these two neuromodulatory traits, and raise the possibility that the two traits may support some other form of behavioral plasticity in Phyllaplysia. 
The results also raise the

  12. "FORCE" learning in recurrent neural networks as data assimilation

    Science.gov (United States)

    Duane, Gregory S.

    2017-12-01

    It is shown that the "FORCE" algorithm for learning in arbitrarily connected networks of simple neuronal units can be cast as a Kalman Filter, with a particular state-dependent form for the background error covariances. The resulting interpretation has implications for initialization of the learning algorithm, leads to an extension to include interactions between the weight updates for different neurons, and can represent relationships within groups of multiple target output signals.
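    A minimal sketch of the underlying FORCE update may make the Kalman filter connection concrete: the readout weights are trained online by recursive least squares (RLS), and the matrix P maintained by RLS plays the role of the state-dependent background error covariance in the filtering view. The reservoir size, gain, time step, and target signal below are illustrative assumptions, not the article's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

N, dt, T = 200, 0.1, 3000
J = rng.normal(0.0, 1.5 / np.sqrt(N), (N, N))  # random recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)               # fixed feedback weights
w = np.zeros(N)                                # linear readout (trained)
P = np.eye(N)                                  # RLS inverse-correlation matrix

x = rng.normal(0.0, 0.5, N)
r = np.tanh(x)
z = 0.0
target = np.sin(2.0 * np.pi * np.arange(T) * dt / 5.0)

errors = []
for t in range(T):
    x = (1.0 - dt) * x + dt * (J @ r + w_fb * z)   # leaky reservoir dynamics
    r = np.tanh(x)
    z = w @ r                                      # readout, fed back above
    e = z - target[t]
    errors.append(abs(float(e)))
    # RLS step: P is the covariance-like quantity identified with the
    # background error covariance in the Kalman interpretation.
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= e * k

early, late = float(np.mean(errors[:200])), float(np.mean(errors[-200:]))
print(f"mean |error|: first 200 steps {early:.4f}, last 200 steps {late:.4f}")
```

    The error is suppressed rapidly and stays small while training continues, which is the defining behavior of FORCE learning.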

  13. SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

    OpenAIRE

    Wang, Linnan; Ye, Jinmian; Zhao, Yiyang; Wu, Wei; Li, Ang; Song, Shuaiwen Leon; Xu, Zenglin; Kraska, Tim

    2018-01-01

    Going deeper and wider in neural architectures improves the accuracy, while the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desirable network architectures or nontrivially dissect a network across multiple GPUs. This distracts DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime to enable the network training far be...

  14. Neuronal survival in the brain: neuron type-specific mechanisms

    DEFF Research Database (Denmark)

    Pfisterer, Ulrich Gottfried; Khodosevich, Konstantin

    2017-01-01

    Neurogenic regions of mammalian brain produce many more neurons than will eventually survive and reach a mature stage. Developmental cell death affects both embryonically produced immature neurons and those immature neurons that are generated in regions of adult neurogenesis. Removal of substantial...... numbers of neurons that are not yet completely integrated into the local circuits helps to ensure that maturation and homeostatic function of neuronal networks in the brain proceed correctly. External signals from brain microenvironment together with intrinsic signaling pathways determine whether...... for survival in a certain brain region. This review focuses on how immature neurons survive during normal and impaired brain development, both in the embryonic/neonatal brain and in brain regions associated with adult neurogenesis, and emphasizes neuron type-specific mechanisms that help to survive for various......

  15. Parvalbumin+ Neurons and Npas1+ Neurons Are Distinct Neuron Classes in the Mouse External Globus Pallidus.

    Science.gov (United States)

    Hernández, Vivian M; Hegeman, Daniel J; Cui, Qiaoling; Kelver, Daniel A; Fiske, Michael P; Glajch, Kelly E; Pitt, Jason E; Huang, Tina Y; Justice, Nicholas J; Chan, C Savio

    2015-08-26

  16. Parvalbumin+ Neurons and Npas1+ Neurons Are Distinct Neuron Classes in the Mouse External Globus Pallidus

    Science.gov (United States)

    Hernández, Vivian M.; Hegeman, Daniel J.; Cui, Qiaoling; Kelver, Daniel A.; Fiske, Michael P.; Glajch, Kelly E.; Pitt, Jason E.; Huang, Tina Y.; Justice, Nicholas J.

    2015-01-01

    Compelling evidence suggests that pathological activity of the external globus pallidus (GPe), a nucleus in the basal ganglia, contributes to the motor symptoms of a variety of movement disorders such as Parkinson's disease. Recent studies have challenged the idea that the GPe comprises a single, homogeneous population of neurons that serves as a simple relay in the indirect pathway. However, we still lack a full understanding of the diversity of the neurons that make up the GPe. Specifically, a more precise classification scheme is needed to better describe the fundamental biology and function of different GPe neuron classes. To this end, we generated a novel multicistronic BAC (bacterial artificial chromosome) transgenic mouse line under the regulatory elements of the Npas1 gene. Using a combinatorial transgenic and immunohistochemical approach, we discovered that parvalbumin-expressing neurons and Npas1-expressing neurons in the GPe represent two nonoverlapping cell classes, amounting to 55% and 27% of the total GPe neuron population, respectively. These two genetically identified cell classes projected primarily to the subthalamic nucleus and to the striatum, respectively. Additionally, parvalbumin-expressing neurons and Npas1-expressing neurons were distinct in their autonomous and driven firing characteristics, their expression of intrinsic ion conductances, and their responsiveness to chronic 6-hydroxydopamine lesion. In summary, our data argue that parvalbumin-expressing neurons and Npas1-expressing neurons are two distinct functional classes of GPe neurons. This work revises our understanding of the GPe and provides the foundation for future studies of its function and dysfunction. SIGNIFICANCE STATEMENT Until recently, the heterogeneity of the constituent neurons within the external globus pallidus (GPe) was not fully appreciated. We addressed this knowledge gap by discovering two principal GPe neuron classes, which were identified by their nonoverlapping expression of the

  17. Stable long-term chronic brain mapping at the single-neuron level.

    Science.gov (United States)

    Fu, Tian-Ming; Hong, Guosong; Zhou, Tao; Schuhmann, Thomas G; Viveros, Robert D; Lieber, Charles M

    2016-10-01

    Stable in vivo mapping and modulation of the same neurons and brain circuits over extended periods is critical to both neuroscience and medicine. Current electrical implants offer single-neuron spatiotemporal resolution but are limited by such factors as relative shear motion and chronic immune responses during long-term recording. To overcome these limitations, we developed a chronic in vivo recording and stimulation platform based on flexible mesh electronics, and we demonstrated stable multiplexed local field potentials and single-unit recordings in mouse brains for at least 8 months without probe repositioning. Properties of acquired signals suggest robust tracking of the same neurons over this period. This recording and stimulation platform allowed us to evoke stable single-neuron responses to chronic electrical stimulation and to carry out longitudinal studies of brain aging in freely behaving mice. Such advantages could open up future studies in mapping and modulating changes associated with learning, aging and neurodegenerative diseases.

  18. A distance constrained synaptic plasticity model of C. elegans neuronal network

    Science.gov (United States)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by the search for principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn the basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' identified in this way are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small-world nature. The simple 1D ring model is critically poised with respect to the number of feed-forward motifs, neuronal clustering and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance constrained synaptic plasticity model that simultaneously explains the small-world nature, the saturation of feed-forward motifs and the observed number of driver neurons. The distance constrained model suggests that optimal long-distance synaptic connections are a key feature specifying control of the network.
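The interplay the abstract describes between synaptic rewiring, clustering and characteristic path length can be sketched with a minimal Watts-Strogatz-style ring model in pure Python. The network size, neighbourhood width and rewiring probability below are illustrative choices, not the paper's parameters, and the code is a conceptual toy, not the authors' model:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours on either side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, seed=0):
    """Move each edge to a random non-neighbour target with probability p."""
    rng = random.Random(seed)
    n = len(adj)
    for i, j in [(a, b) for a in adj for b in adj[a] if a < b]:
        if rng.random() < p:
            choices = [t for t in range(n) if t != i and t not in adj[i]]
            if choices:
                t = rng.choice(choices)
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(t); adj[t].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable node pairs (BFS from each node)."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def avg_clustering(adj):
    """Mean local clustering coefficient: fraction of a node's neighbour pairs that are linked."""
    cs = []
    for u in adj:
        nb = list(adj[u])
        k = len(nb)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a in range(k) for b in range(a + 1, k) if nb[b] in adj[nb[a]])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

lattice = ring_lattice(100, 4)
sw = rewire(ring_lattice(100, 4), p=0.1)
# Sparse rewiring creates long-range shortcuts that sharply shorten paths
# while clustering stays comparatively high -- the small-world signature.
print(round(avg_path_length(lattice), 2), round(avg_clustering(lattice), 2))
print(round(avg_path_length(sw), 2), round(avg_clustering(sw), 2))
```

Sweeping `p` between 0 and 1 reproduces the classic regime the abstract alludes to: a narrow band of rewiring probabilities where paths are already short but clustering has not yet collapsed.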

  19. Stereological Investigation of the Effects of Treadmill Running Exercise on the Hippocampal Neurons in Middle-Aged APP/PS1 Transgenic Mice.

    Science.gov (United States)

    Chao, Fenglei; Jiang, Lin; Zhang, Yi; Zhou, Chunni; Zhang, Lei; Tang, Jing; Liang, Xin; Qi, Yingqiang; Zhu, Yanqing; Ma, Jing; Tang, Yong

    2018-01-01

    The risk of cognitive decline during Alzheimer's disease (AD) can be reduced if physical activity is maintained; however, the specific neural events underlying this beneficial effect are still uncertain. To quantitatively investigate the neural events underlying the effect of running exercise on middle-aged AD subjects, 12-month-old male APP/PS1 mice were randomly assigned to a control group or running group, and age-matched non-transgenic littermates were used as a wild-type group. Mice in the AD running group were subjected to a treadmill running protocol (regular and moderate intensity) for four months. Spatial learning and memory abilities were assessed using the Morris water maze. Hippocampal amyloid plaques were observed using Thioflavin S staining and immunohistochemistry. Hippocampal volume, number of neurons, and number of newborn cells (BrdU+ cells) in the hippocampus were estimated using stereological techniques, and newborn neurons were observed using double-labelling immunofluorescence. Marked neuronal loss in both the CA1 field and dentate gyrus (DG) and deficits in both the neurogenesis and survival of new neurons in the DG of middle-aged APP/PS1 mice were observed. Running exercise could improve the spatial learning and memory abilities, reduce amyloid plaques in the hippocampi, delay neuronal loss, induce neurogenesis, and promote the survival of newborn neurons in the DG of middle-aged APP/PS1 mice. Exercise-induced protection of neurons and adult neurogenesis within the DG might be part of the important structural basis of the improved spatial learning and memory abilities observed in AD mice.
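Stereological estimates of total neuron number, like those reported above, are commonly obtained with the optical fractionator: the raw count of cells sampled in disectors is scaled by the inverse of each sampling fraction. The sketch below uses hypothetical counts and fractions for illustration; the paper does not report its sampling parameters here:

```python
def optical_fractionator(q_minus, ssf, asf, tsf):
    """Optical fractionator estimate of total cell number.

    q_minus : cells counted in the disectors (sum of Q-)
    ssf     : section sampling fraction (sections sampled / total sections)
    asf     : area sampling fraction (counting-frame area / grid-step area)
    tsf     : thickness sampling fraction (disector height / section thickness)
    """
    return q_minus / (ssf * asf * tsf)

# Hypothetical sampling scheme: every 6th section, a counting frame covering
# 1/25 of each grid step, and a disector spanning half the section thickness.
n_est = optical_fractionator(q_minus=300, ssf=1/6, asf=1/25, tsf=1/2)
print(f"estimated neurons: {n_est:.0f}")  # 300 * 6 * 25 * 2 = 90000
```

The design-based logic is what makes the estimate unbiased: each sampled cell stands in for a known, fixed multiple of unsampled tissue, so no assumptions about neuron shape or size are required.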

  20. Modulation, plasticity and pathophysiology of the parallel fiber-Purkinje cell synapse

    Directory of Open Access Journals (Sweden)

    Eriola Hoxha

    2016-11-01

    The parallel fiber-Purkinje cell synapse represents the point of maximal signal divergence in the cerebellar cortex, with an estimated number of about 60 billion synaptic contacts in the rat and 100,000 billion in humans. At the same time, the Purkinje cell dendritic tree is a site of remarkable convergence, receiving more than 100,000 parallel fiber synapses. Parallel fiber activity generates fast postsynaptic currents via AMPA receptors and slower signals mediated by mGlu1 receptors, resulting in Purkinje cell depolarization accompanied by a sharp calcium elevation within dendritic regions. Long-term depression (LTD) and long-term potentiation (LTP) have been widely described at the parallel fiber-Purkinje cell synapse and have been proposed as mechanisms for motor learning. Induction of LTP and LTD involves different signaling mechanisms within the presynaptic terminal and/or at the postsynaptic site, promoting enduring modifications in neurotransmitter release and changes in responsiveness to the neurotransmitter. The parallel fiber-Purkinje cell synapse is finely modulated by several neurotransmitters, including serotonin, noradrenaline, and acetylcholine. The ability of these neuromodulators to gate LTP and LTD at the parallel fiber-Purkinje cell synapse could, at least in part, explain their effect on cerebellar-dependent learning and memory paradigms. Overall, these findings have important implications for understanding cerebellar involvement in a series of pathological conditions, ranging from ataxia to autism. For example, parallel fiber-Purkinje cell synapse dysfunctions have been identified in several murine models of spinocerebellar ataxia (SCA) types 1, 3, 5 and 27. In some cases the defect is specific to AMPA receptor signaling (SCA27), while in others the mGlu1 pathway is affected (SCA1, 3, 5). Interestingly, the parallel fiber-Purkinje cell synapse has been shown to be hyper-functional in a mutant mouse model of autism.
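The gating of LTP and LTD by neuromodulators described above can be illustrated with a deliberately simplified rate-based toy rule. This is not the actual induction mechanism at the parallel fiber-Purkinje cell synapse; the rule, names, and constants below are all illustrative:

```python
def plasticity_update(w, pre, post, gate, eta=0.01, theta=0.5):
    """Toy plasticity rule: coincident pre/postsynaptic activity above the
    threshold theta drives potentiation (LTP), activity below it drives
    depression (LTD), and a neuromodulatory 'gate' in [0, 1] scales the
    weight change. Weights are clipped at zero."""
    dw = eta * gate * pre * (post - theta)
    return max(0.0, w + dw)

w = 0.5
# Strong postsynaptic response with the gate open -> potentiation.
w_ltp = plasticity_update(w, pre=1.0, post=1.0, gate=1.0)
# Weak postsynaptic response with the gate open -> depression.
w_ltd = plasticity_update(w, pre=1.0, post=0.2, gate=1.0)
# Gate closed (no neuromodulator) -> no change, regardless of activity.
w_gated = plasticity_update(w, pre=1.0, post=1.0, gate=0.0)
```

The point of the gate term is the one the abstract makes: identical patterns of synaptic activity produce plasticity or leave the synapse unchanged depending on the neuromodulatory state.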