WorldWideScience

Sample records for neural plasticity learning

  1. Neural plasticity of development and learning.

    Science.gov (United States)

    Galván, Adriana

    2010-06-01

    Development and learning are powerful agents of change across the lifespan that induce robust structural and functional plasticity in neural systems. An unresolved question in developmental cognitive neuroscience is whether development and learning share the same neural mechanisms associated with experience-related neural plasticity. In this article, I outline the conceptual and practical challenges of this question, review insights gleaned from adult studies, and describe recent strides toward examining this topic across development using neuroimaging methods. I suggest that development and learning are not two completely separate constructs but instead exist on a continuum. While progressive and regressive changes are central to both, the behavioral consequences associated with these changes are closely tied to the existing neural architecture and maturity of the system. Eventually, a deeper, more mechanistic understanding of neural plasticity will shed light on behavioral changes across development and, more broadly, on the underlying neural basis of cognition. (c) 2010 Wiley-Liss, Inc.

  2. Shaping the learning curve: epigenetic dynamics in neural plasticity

    Directory of Open Access Journals (Sweden)

    Zohar Ziv Bronfman

    2014-07-01

    A key characteristic of learning and neural plasticity is state-dependent acquisition dynamics reflected by the non-linear learning curve that links increase in learning with practice. Here we propose that the manner by which epigenetic states of individual cells change during learning contributes to the shape of the neural and behavioral learning curve. We base our suggestion on recent studies showing that epigenetic mechanisms such as DNA methylation, histone acetylation and RNA-mediated gene regulation are intimately involved in the establishment and maintenance of long-term neural plasticity, reflecting specific learning-histories and influencing future learning. Our model, which is the first to suggest a dynamic molecular account of the shape of the learning curve, leads to several testable predictions regarding the link between epigenetic dynamics at the promoter, gene-network and neural-network levels. This perspective opens up new avenues for therapeutic interventions in neurological pathologies.
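
    The non-linear, state-dependent learning curve described above can be caricatured in a few lines of code. The sketch below is our own illustration, not the authors' model: the per-trial performance gain scales with an internal state variable (standing in for a cell's epigenetic state) that itself consolidates with practice, producing an S-shaped curve. All names and constants are assumptions.

```python
def learning_curve(trials, alpha=0.3, beta=0.1):
    """Performance after each trial under a state-dependent learning rule.

    alpha scales the per-trial gain; beta is the rate at which the internal
    state s (a stand-in for, e.g., promoter accessibility) consolidates
    toward 1 with practice.
    """
    p, s = 0.0, 0.1                  # performance and internal state start low
    history = []
    for _ in range(trials):
        p += alpha * s * (1.0 - p)   # gain scales with state and with headroom
        s += beta * (1.0 - s)        # the state itself consolidates with practice
        history.append(p)
    return history

curve = learning_curve(50)
```

    Because the state starts low and grows, early trials yield accelerating gains before performance saturates, the non-linear shape the authors attribute to epigenetic dynamics.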

  3. Learning-induced neural plasticity of speech processing before birth.

    Science.gov (United States)

    Partanen, Eino; Kujala, Teija; Näätänen, Risto; Liitola, Auli; Sambeth, Anke; Huotilainen, Minna

    2013-09-10

    Learning, the foundation of adaptive and intelligent behavior, is based on plastic changes in neural assemblies, reflected by the modulation of electric brain responses. In infancy, auditory learning implicates the formation and strengthening of neural long-term memory traces, improving discrimination skills, in particular those forming the prerequisites for speech perception and understanding. Although previous behavioral observations show that newborns react differentially to unfamiliar sounds vs. familiar sound material that they were exposed to as fetuses, the neural basis of fetal learning has not thus far been investigated. Here we demonstrate direct neural correlates of human fetal learning of speech-like auditory stimuli. We presented variants of words to fetuses; unlike infants with no exposure to these stimuli, the exposed fetuses showed enhanced brain activity (mismatch responses) in response to pitch changes for the trained variants after birth. Furthermore, a significant correlation existed between the amount of prenatal exposure and brain activity, with greater activity being associated with a higher amount of prenatal speech exposure. Moreover, the learning effect was generalized to other types of similar speech sounds not included in the training material. Consequently, our results indicate neural commitment specifically tuned to the speech features heard before birth and their memory representations.

  4. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles distinguishing them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity remains an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, neuronal excitability is regulated by another learning rule, intrinsic plasticity (IP), to maintain a moderate average activity level. Our study shows that an LSM with STDP+IP performs better than an LSM with a random SNN or with an SNN obtained by STDP alone. The noticeable improvement comes from competition among neurons being better reflected in the developed SNN model, and from relevant dynamic information being more effectively encoded and processed by its learning and self-organizing mechanism. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
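
    As a concrete, hedged illustration of the two rules named above (generic textbook forms with assumed constants, not the paper's exact formulation), pair-based STDP and rate-targeting intrinsic plasticity can be sketched as:

```python
import math

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pairing.

    dt = t_post - t_pre in ms: a positive lag (pre fires before post)
    potentiates the synapse, a negative lag depresses it, and the
    magnitude decays exponentially with |dt|.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def ip_update(threshold, rate, target_rate=5.0, eta=0.001):
    """Intrinsic plasticity: raise the firing threshold of an over-active
    neuron and lower it for an under-active one, steering the average
    activity toward a moderate target level."""
    return threshold + eta * (rate - target_rate)
```

    Run together, STDP reshapes the excitatory connectivity while IP keeps each neuron's excitability in a regime where that competition stays effective.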

  5. On the relationships between generative encodings, regularity, and learning abilities when evolving plastic artificial neural networks.

    Directory of Open Access Journals (Sweden)

    Paul Tonelli

    A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding, the most regular networks are statistically those with the best learning abilities.

  6. Neural plasticity underlying visual perceptual learning in aging.

    Science.gov (United States)

    Mishra, Jyoti; Rolle, Camarin; Gazzaley, Adam

    2015-07-01

    Healthy aging is associated with a decline in basic perceptual abilities, as well as higher-level cognitive functions such as working memory. In a recent perceptual training study using moving sweeps of Gabor stimuli, Berry et al. (2010) observed that older adults significantly improved discrimination abilities on the most challenging perceptual tasks that presented paired sweeps at rapid rates of 5 and 10 Hz. Berry et al. further showed that this perceptual training engendered transfer-of-benefit to an untrained working memory task. Here, we investigated the neural underpinnings of the improvements in these perceptual tasks, as assessed by event-related potential (ERP) recordings. Early visual ERP components time-locked to stimulus onset were compared pre- and post-training, as well as relative to a no-contact control group. The visual N1 and N2 components were significantly enhanced after training, and the N1 change correlated with improvements in perceptual discrimination on the task. Further, the change observed for the N1 and N2 was associated with the rapidity of the perceptual challenge; the visual N1 (120-150 ms) was enhanced post-training for 10 Hz sweep pairs, while the N2 (240-280 ms) was enhanced for the 5 Hz sweep pairs. We speculate that these observed post-training neural enhancements reflect improvements by older adults in the allocation of attention that is required to accurately dissociate perceptually overlapping stimuli when presented in rapid sequence. This article is part of a Special Issue entitled SI: Memory. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Histone Deacetylase (HDAC) Inhibitors - emerging roles in neuronal memory, learning, synaptic plasticity and neural regeneration.

    Science.gov (United States)

    Ganai, Shabir Ahmad; Ramadoss, Mahalakshmi; Mahadevan, Vijayalakshmi

    2016-01-01

    Epigenetic regulation of neuronal signalling through histone acetylation dictates transcription programs that govern neuronal memory, plasticity and learning paradigms. Histone Acetyl Transferases (HATs) and Histone Deacetylases (HDACs) are antagonistic enzymes that regulate gene expression through acetylation and deacetylation of histone proteins around which DNA is wrapped inside a eukaryotic cell nucleus. The epigenetic control of HDACs and the cellular imbalance between HATs and HDACs dictate disease states and have been implicated in muscular dystrophy, loss of memory, neurodegeneration and autistic disorders. Altering gene expression profiles through inhibition of HDACs is now emerging as a powerful technique in therapy. This review presents evolving applications of HDAC inhibitors as potential drugs in neurological research and therapy. Mechanisms that govern their expression profiles in neuronal signalling, plasticity and learning will be covered. Promising and exciting possibilities of HDAC inhibitors in memory formation, fear conditioning, ischemic stroke and neural regeneration have been detailed.

  8. Global and local missions of cAMP signaling in neural plasticity, learning and memory

    Directory of Open Access Journals (Sweden)

    Daewoo Lee

    2015-08-01

    The fruit fly Drosophila melanogaster has been a popular model to study cAMP signaling and the resulting behaviors due to its powerful genetic approaches. All molecular components (AC, PDE, PKA, CREB, etc.) essential for cAMP signaling have been identified in the fly. Among them, the adenylyl cyclase (AC) gene rutabaga and the phosphodiesterase (PDE) gene dunce have been intensively studied to understand the role of cAMP signaling. Interestingly, these two mutant genes were originally identified on the basis of associative learning deficits. This commentary summarizes findings on the role of cAMP in Drosophila neuronal excitability, synaptic plasticity and memory. It mainly focuses on two distinct mechanisms (global versus local) regulating excitatory and inhibitory synaptic plasticity related to cAMP homeostasis. This dual regulatory role of cAMP is to increase the strength of excitatory neural circuits on one hand, and to act locally on postsynaptic GABA receptors to decrease inhibitory synaptic plasticity on the other. Thus, the action of cAMP could result in a global increase in neural circuit excitability and memory. Implications of cAMP signaling for drug discovery for neural diseases are also described.

  9. Learning and retrieval behavior in recurrent neural networks with pre-synaptic dependent homeostatic plasticity

    Science.gov (United States)

    Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.

    2017-08-01

    The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. There are numerous kinds of such phenomenon currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that large learning periods are important in order to improve the network learning capacity and discuss this ability in the presence of distinct inhibitory currents.
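
    A hedged sketch of the kind of rule involved (a simplified multiplicative reading of pre-synaptic-dependent homeostatic scaling, not the authors' exact equations): each neuron rescales its incoming weights, using the pre-synaptic firing rates, so that its total input drive sits at a fixed target.

```python
import numpy as np

def homeostatic_scale(w_in, pre_rates, target_drive=1.0):
    """Rescale one neuron's incoming weights so that its total pre-synaptic
    drive (weights weighted by pre-synaptic rates) matches a target."""
    drive = float(np.dot(w_in, pre_rates))   # current total input drive
    if drive <= 0.0:
        return w_in
    return w_in * (target_drive / drive)     # multiplicative rescaling

w = np.array([0.5, 0.3, 0.2])
rates = np.array([2.0, 1.0, 4.0])
w_new = homeostatic_scale(w, rates)
```

    Because the rescaling is multiplicative, the relative differences stamped into the weights by learning are preserved while runaway growth or silencing of the total drive is prevented.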

  10. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model’s frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior, and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the muscles’ baseline activation and increasing their range of activity. Our results support the possibility that, through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. This suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop
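
    The reinforcement-modulated plasticity at the model's core can be sketched generically (an eligibility-trace caricature with made-up constants, not the authors' implementation): spike pairings mark a synapse as eligible, and only a later dopamine pulse converts that eligibility into a weight change.

```python
def step(w, trace, stdp_event, dopamine, trace_decay=0.8, lr=0.5):
    """One time step of reward-modulated plasticity.

    stdp_event: eligibility contributed by any pre/post pairing this step.
    dopamine:   reward signal; without it the trace causes no weight change.
    """
    trace = trace * trace_decay + stdp_event   # eligibility decays, pairings accumulate
    w = w + lr * dopamine * trace              # dopamine gates the actual update
    return w, trace

w, trace = 0.5, 0.0
w, trace = step(w, trace, stdp_event=0.1, dopamine=0.0)   # pairing, no reward yet
no_reward_w = w
w, trace = step(w, trace, stdp_event=0.0, dopamine=1.0)   # delayed reward arrives
```

    The decaying trace is what lets a reward delivered after the vocalization still credit the spike pairings that produced it.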

  11. Developmental pathway genes and neural plasticity underlying emotional learning and stress-related disorders.

    Science.gov (United States)

    Maheu, Marissa E; Ressler, Kerry J

    2017-09-01

    The manipulation of neural plasticity as a means of intervening in the onset and progression of stress-related disorders retains its appeal for many researchers, despite our limited success in translating such interventions from the laboratory to the clinic. Given the challenges of identifying individual genetic variants that confer increased risk for illnesses like depression and post-traumatic stress disorder, some have turned their attention instead to focusing on so-called "master regulators" of plasticity that may provide a means of controlling these potentially impaired processes in psychiatric illnesses. The mammalian homolog of Tailless (TLX), Wnt, and the homeoprotein Otx2 have all been proposed to constitute master regulators of different forms of plasticity which have, in turn, each been implicated in learning and stress-related disorders. In the present review, we provide an overview of the changing distribution of these genes and their roles both during development and in the adult brain. We further discuss how their distinct expression profiles provide clues as to their function, and may inform their suitability as candidate drug targets in the treatment of psychiatric disorders. © 2017 Maheu and Ressler; Published by Cold Spring Harbor Laboratory Press.

  12. Developmental Pathway Genes and Neural Plasticity Underlying Emotional Learning and Stress-Related Disorders

    Science.gov (United States)

    Maheu, Marissa E.; Ressler, Kerry J.

    2017-01-01

    The manipulation of neural plasticity as a means of intervening in the onset and progression of stress-related disorders retains its appeal for many researchers, despite our limited success in translating such interventions from the laboratory to the clinic. Given the challenges of identifying individual genetic variants that confer increased risk…

  13. Learning to Generate Sequences with Combination of Hebbian and Non-hebbian Plasticity in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Panda, Priyadarshini; Roy, Kaushik

    2017-01-01

    Synaptic plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine standard spike-timing-correlation-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism to train a recurrent spiking neural model to generate sequences. We show that including an adaptive decay of synaptic weights alongside standard STDP helps the model learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme substantially suppresses the chaotic activity in the recurrent model, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations.
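
    The combined rule can be sketched as follows (generic forms with assumed constants, not the authors' exact model): a Hebbian STDP term driven by spike-timing correlations plus a non-Hebbian decay that pulls every weight toward zero regardless of activity, so unused synapses relax instead of locking the network into strong attractors.

```python
import math

def update_weight(w, dt=None, a_plus=0.01, a_minus=0.012, tau=20.0, decay=0.001):
    """One update: an optional STDP pairing (dt = t_post - t_pre, ms)
    followed by an activity-independent synaptic decay."""
    if dt is not None:
        if dt > 0:
            w += a_plus * math.exp(-dt / tau)    # causal pairing: potentiate
        else:
            w -= a_minus * math.exp(dt / tau)    # anti-causal pairing: depress
    w -= decay * w                               # non-Hebbian decay toward zero
    return w

# a synapse that receives no pairings slowly forgets
w = 0.5
for _ in range(100):
    w = update_weight(w)
```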

  14. Consciousness and neural plasticity

    DEFF Research Database (Denmark)

    changes or to abandon the strong identity thesis altogether. Were one to pursue a theory according to which consciousness is not an epiphenomenon to brain processes, consciousness may in fact affect its own neural basis. The neural correlate of consciousness is often seen as a stable structure, that is...

  15. A peptide mimetic targeting trans-homophilic NCAM binding sites promotes spatial learning and neural plasticity in the hippocampus

    DEFF Research Database (Denmark)

    Kraev, Igor; Henneberger, Christian; Rossetti, Clara

    2011-01-01

    The key roles played by the neural cell adhesion molecule (NCAM) in plasticity and cognition underscore this membrane protein as a relevant target to develop cognitive-enhancing drugs. However, NCAM is a structurally and functionally complex molecule with multiple domains engaged in a variety of ...

  16. Learning to Perceive Structure from Motion and Neural Plasticity in Patients with Alzheimer's Disease

    Science.gov (United States)

    Kim, Nam-Gyoon; Park, Jong-Hee

    2010-01-01

    Recent research has demonstrated that Alzheimer's disease (AD) affects the visual sensory pathways, producing a variety of visual deficits, including the capacity to perceive structure-from-motion (SFM). Because the sensory areas of the adult brain are known to retain a large degree of plasticity, the present study was conducted to explore whether…

  17. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin, E-mail: xmli@cqu.edu.cn [Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044 (China); College of Automation, Chongqing University, Chongqing 400044 (China)

    2015-11-15

    In this paper, the generation of multi-clustered structure in self-organized neural networks with different neuronal firing patterns, i.e., bursting or spiking, is investigated. An initially all-to-all-connected spiking or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the clustering procedure of the burst-based self-organized neural network (BSON) takes much less time than that of the spike-based self-organized neural network (SSON). Our results show that the BSON network has more pronounced small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length, than the SSON network. The larger structure entropy and activity entropy of the BSON network also demonstrate that it has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by the network's improved performance on stochastic resonance. We therefore believe that the multi-clustered neural network that self-organizes from bursting dynamics is highly efficient at information processing.
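
    For readers unfamiliar with the symmetric variant of STDP, a minimal sketch (the window shape and constants are our assumptions, not the paper's): the weight change depends only on |t_post - t_pre|, so near-coincident firing potentiates in both directions, which is what lets mutually correlated neurons condense into clusters.

```python
import math

def symmetric_stdp(dt, a=0.01, tau=20.0, offset=0.3):
    """Potentiate near-coincident pre/post spikes, depress weakly
    correlated ones; the rule is even in dt = t_post - t_pre."""
    return a * (math.exp(-abs(dt) / tau) - offset)
```

    Because the window is even in dt, two neurons that repeatedly fire together strengthen both of their mutual connections, while connections between uncorrelated neurons decay, carving the all-to-all network into clusters.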

  18. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    International Nuclear Information System (INIS)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-01-01

    In this paper, the generation of multi-clustered structure in self-organized neural networks with different neuronal firing patterns, i.e., bursting or spiking, is investigated. An initially all-to-all-connected spiking or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the clustering procedure of the burst-based self-organized neural network (BSON) takes much less time than that of the spike-based self-organized neural network (SSON). Our results show that the BSON network has more pronounced small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length, than the SSON network. The larger structure entropy and activity entropy of the BSON network also demonstrate that it has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by the network's improved performance on stochastic resonance. We therefore believe that the multi-clustered neural network that self-organizes from bursting dynamics is highly efficient at information processing.

  19. A Neural Circuit for Acoustic Navigation combining Heterosynaptic and Non-synaptic Plasticity that learns Stable Trajectories

    DEFF Research Database (Denmark)

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    controllers be resolved in a manner that generates consistent and stable robot trajectories? We propose a neural circuit that minimises this conflict by learning sensorimotor mappings as neuronal transfer functions between the perceived sound direction and wheel velocities of a simulated non-holonomic mobile...

  20. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to occur largely through a process of plasticity that depends on the spike-timing activity of connected neurons. In the field of computational neuroscience, much effort has been dedicated to building computational models of neural plasticity that replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models of neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those that combine findings in computational neuroscience and systems biology and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. A data science approach to candidate gene selection of pain regarded as a process of learning and neural plasticity.

    Science.gov (United States)

    Ultsch, Alfred; Kringel, Dario; Kalso, Eija; Mogil, Jeffrey S; Lötsch, Jörn

    2016-12-01

    The increasing availability of "big data" enables novel research approaches to chronic pain while also requiring novel techniques for data mining and knowledge discovery. We used machine learning to combine the knowledge about n = 535 genes identified empirically as relevant to pain with the knowledge about the functions of thousands of genes. Starting from an accepted description of chronic pain as displaying systemic features described by the terms "learning" and "neuronal plasticity," a functional genomics analysis proposed that among the functions of the 535 "pain genes," the biological processes "learning or memory" (P = 8.6 × 10) and "nervous system development" (P = 2.4 × 10) are statistically significantly overrepresented as compared with the annotations to these processes expected by chance. After establishing that the hypothesized biological processes were among the important functional genomics features of pain, a subset of n = 34 pain genes was found to be annotated with both Gene Ontology terms. Published empirical evidence supporting their involvement in chronic pain was identified for almost all of these genes, including 1 gene identified in March 2016 as being involved in pain. By contrast, such evidence was virtually absent in a randomly selected set of 34 other human genes. Hence, the present computational functional genomics-based method can be used for candidate gene selection, providing an alternative to established methods.
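
    The overrepresentation statistics quoted above come from tests of this general shape (our own toy illustration with made-up counts, not the paper's data): given N annotated genes of which K carry a GO term, the probability of drawing at least k term-carrying genes in a random sample of n follows a hypergeometric tail.

```python
from math import comb

def enrichment_p(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the chance that a random
    sample of n genes out of N contains at least k of the K annotated ones."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# toy numbers: 1000 genes, 50 annotated to "learning or memory",
# 30 of 100 candidates carry the term -- far more than the ~5 expected
p = enrichment_p(1000, 50, 100, 30)
```

    A small tail probability, as here, indicates the annotation is overrepresented among the candidates beyond what chance would produce.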

  2. Tinnitus and neural plasticity of the brain

    NARCIS (Netherlands)

    Bartels, Hilke; Staal, Michiel J.; Albers, Frans W. J.

    Objective: To describe the current ideas about the manifestations of neural plasticity in generating tinnitus. Data Sources: Recently published source articles were identified using MEDLINE, PubMed, and Cochrane Library according to the key words mentioned below. Study Selection: Review articles and

  3. Learning from neural control.

    Science.gov (United States)

    Wang, Cong; Hill, David J

    2006-01-01

    One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.
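
    A hedged sketch of the localized RBF approximation idea (the centers, widths, target function, and the batch least-squares fit standing in for the paper's online adaptive law are all our illustrative choices): only Gaussian units centered near the visited states contribute, so an accurate model is learned locally along the trajectory.

```python
import numpy as np

def rbf_features(x, centers, width=0.3):
    """Localized Gaussian RBF activations for a scalar state x."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

centers = np.linspace(-2.0, 2.0, 21)     # RBF grid covering the state space
xs = np.linspace(-1.0, 1.0, 200)         # samples along a periodic orbit
targets = np.sin(np.pi * xs)             # stand-in for the unknown dynamics

# batch least squares in place of an online adaptive weight-update law
Phi = np.array([rbf_features(x, centers) for x in xs])
weights, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

on_orbit_err = float(np.max(np.abs(Phi @ weights - targets)))
```

    Because each Gaussian is localized, only the subvector of RBFs near the orbit is persistently excited, which is the partial PE condition the paper establishes.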

  4. Modulation of Hippocampal Neural Plasticity by Glucose-Related Signaling

    Directory of Open Access Journals (Sweden)

    Marco Mainardi

    2015-01-01

    Hormones and peptides involved in glucose homeostasis are emerging as important modulators of neural plasticity. In this regard, increasing evidence shows that molecules such as insulin, insulin-like growth factor-I, glucagon-like peptide-1, and ghrelin impact the function of the hippocampus, which is a key area for learning and memory. Indeed, all these factors affect fundamental hippocampal properties including synaptic plasticity (i.e., synapse potentiation and depression), structural plasticity (i.e., dynamics of dendritic spines), and adult neurogenesis, thus leading to modifications in cognitive performance. Here, we review the main mechanisms underlying the effects of glucose metabolism on hippocampal physiology. In particular, we discuss the role of these signals in the modulation of cognitive functions and their potential implications in dysmetabolism-related cognitive decline.

  5. Critical neural networks with short- and long-term plasticity

    Science.gov (United States)

    Michiels van Kessenich, L.; Luković, M.; de Arcangelis, L.; Herrmann, H. J.

    2018-03-01

    In recent years, self-organized critical neuronal models have provided insights into the origin of the experimentally observed avalanching behavior of neuronal systems. It has been shown that dynamical synapses, as a form of short-term plasticity, can give rise to critical neuronal dynamics, whereas long-term plasticity, such as Hebbian or activity-dependent plasticity, plays a crucial role in shaping the network structure and endowing neural systems with learning abilities. In this work we provide a model that combines both plasticity mechanisms, acting on two different time scales. The measured avalanche statistics are compatible with experimental results for both the avalanche size and duration distributions, with biologically observed percentages of inhibitory neurons. The time series of neuronal activity exhibits temporal bursts leading to 1/f decay in the power spectrum. The presence of long-term plasticity gives the system the ability to learn binary rules such as XOR, providing a foundation for future research on more complicated tasks such as pattern recognition.
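
    The avalanche statistics mentioned above can be illustrated with a toy critical branching process (our own illustration, unrelated to the paper's actual neuronal model): when each active unit triggers on average one successor, avalanche sizes span many scales, the hallmark of criticality.

```python
import random

def avalanche_size(p=0.5, cap=10_000):
    """One avalanche in a branching process: every active unit tries to
    activate two successors, each with probability p, so the mean
    branching ratio is 2*p (critical at p = 0.5)."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = sum(1 for _ in range(2 * active) if random.random() < p)
    return size

random.seed(0)
sizes = [avalanche_size() for _ in range(2000)]
```

    At criticality the resulting size distribution is heavy-tailed: most avalanches die after a few activations, while rare ones grow orders of magnitude larger.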

  6. Reading in the dark: neural correlates and cross-modal plasticity for learning to read entire words without visual experience.

    Science.gov (United States)

    Sigalov, Nadine; Maidenbaum, Shachar; Amedi, Amir

    2016-03-01

    Cognitive neuroscience has long attempted to determine the ways in which cortical selectivity develops, and the impact of nature vs. nurture on it. Congenital blindness (CB) offers a unique opportunity to test this question as the brains of blind individuals develop without visual experience. Here we approach this question through the reading network. Several areas in the visual cortex have been implicated as part of the reading network, and one of the main ones among them is the VWFA, which is selective to the form of letters and words. But what happens in the CB brain? On the one hand, it has been shown that cross-modal plasticity leads to the recruitment of occipital areas, including the VWFA, for linguistic tasks. On the other hand, we have recently demonstrated VWFA activity for letters in contrast to other visual categories when the information is provided via other senses such as touch or audition. Which of these tasks is more dominant? By which mechanism does the CB brain process reading? Using fMRI and visual-to-auditory sensory substitution, which transfers the topographical features of the letters, we compare reading with semantic and scrambled conditions in a group of CB participants. We found activation in early auditory and visual cortices during the early processing phase (letter), while the later phase (word) showed VWFA and bilateral dorsal-intraparietal activations. This further supports the notion that many visual regions in general, even early visual areas, also maintain a predilection for task processing even when the modality is variable and in spite of putative lifelong linguistic cross-modal plasticity. Furthermore, we find that the VWFA is recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, for an immediately subsequent semantic task, suggesting that, despite only short sensory-substitution experience, orthographic processing can dominate semantic processing in the VWFA. On a wider

  7. Psychedelics Promote Structural and Functional Neural Plasticity

    Directory of Open Access Journals (Sweden)

    Calvin Ly

    2018-06-01

    Full Text Available Summary: Atrophy of neurons in the prefrontal cortex (PFC) plays a key role in the pathophysiology of depression and related disorders. The ability to promote both structural and functional plasticity in the PFC has been hypothesized to underlie the fast-acting antidepressant properties of the dissociative anesthetic ketamine. Here, we report that, like ketamine, serotonergic psychedelics are capable of robustly increasing neuritogenesis and/or spinogenesis both in vitro and in vivo. These changes in neuronal structure are accompanied by increased synapse number and function, as measured by fluorescence microscopy and electrophysiology. The structural changes induced by psychedelics appear to result from stimulation of the TrkB, mTOR, and 5-HT2A signaling pathways and could possibly explain the clinical effectiveness of these compounds. Our results underscore the therapeutic potential of psychedelics and, importantly, identify several lead scaffolds for medicinal chemistry efforts focused on developing plasticity-promoting compounds as safe, effective, and fast-acting treatments for depression and related disorders. Ly et al. demonstrate that psychedelic compounds such as LSD, DMT, and DOI increase dendritic arbor complexity, promote dendritic spine growth, and stimulate synapse formation. These cellular effects are similar to those produced by the fast-acting antidepressant ketamine and highlight the potential of psychedelics for treating depression and related disorders. Keywords: neural plasticity, psychedelic, spinogenesis, synaptogenesis, depression, LSD, DMT, ketamine, noribogaine, MDMA

  8. Neural plasticity lessons from disorders of consciousness

    Directory of Open Access Journals (Sweden)

    Athena Demertzi

    2011-02-01

    Full Text Available Communication and intentional behavior are supported by the brain’s integrity at a structural and a functional level. When widespread loss of cerebral connectivity is brought about as a result of a severe brain injury, in many cases patients are not capable of conscious interactive behavior and are said to suffer from disorders of consciousness (e.g., coma, vegetative state/unresponsive wakefulness syndrome, minimally conscious states). This lesion paradigm has offered not only clinical insights, such as how to improve diagnosis, prognosis and treatment, but also put forward scientific opportunities to study the brain’s plastic abilities. We here review interventional and observational studies performed in severely brain-injured patients with regard to recovery of consciousness. The study of the recovered conscious brain (spontaneous and/or after surgical or pharmacologic interventions) suggests a link between some specific brain areas and the capacity of the brain to sustain conscious experience, challenging at the same time the notion of fixed temporal boundaries in rehabilitative processes. Altered functional connectivity, cerebral structural reorganization as well as behavioral amelioration after invasive treatments will be discussed as the main indices for plasticity in these challenging patients. The study of patients with chronic disorders of consciousness may, thus, provide further insights not only at a clinical level (i.e., medical management and rehabilitation) but also from a scientific-theoretical perspective (i.e., the brain’s plastic abilities and the pursuit of the neural correlate of consciousness).

  9. Learning with three factors: modulating Hebbian plasticity with errors.

    Science.gov (United States)

    Kuśmierz, Łukasz; Isomura, Takuya; Toyoizumi, Taro

    2017-10-01

    Synaptic plasticity is a central theme in neuroscience. A framework of three-factor learning rules provides a powerful abstraction, helping to navigate through the abundance of models of synaptic plasticity. It is well known that the dopamine modulation of learning is related to reward, but theoretical models predict other functional roles of the modulatory third factor; it may encode errors for supervised learning, summary statistics of the population activity for unsupervised learning, or attentional feedback. Specialized structures may be needed in order to generate and propagate third factors in the neural network. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
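    The three-factor abstraction reduces to a compact update: a Hebbian eligibility term (pre/post co-activity) gated by a modulatory scalar. A minimal rate-based sketch, with an illustrative learning rate and reward values of my own choosing:

```python
import numpy as np

def three_factor_update(w, pre, post, m, lr=0.1):
    """Hebbian co-activity (outer product of post- and pre-synaptic
    rates) gated by a scalar third factor m, e.g. a reward prediction
    error or a supervision error."""
    return w + lr * m * np.outer(post, pre)

w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])
post = np.array([1.0, 1.0])
w_rew = three_factor_update(w, pre, post, m=+1.0)  # rewarded: potentiate
w_pun = three_factor_update(w, pre, post, m=-1.0)  # punished: depress
```

The same skeleton covers the roles listed in the abstract: substituting a supervision error, a population statistic, or an attention signal for `m` changes the rule's function without changing its form.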

  10. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    DEFF Research Database (Denmark)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin

    2015-01-01

    … dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking …

  11. Neural plasticity and its initiating conditions in tinnitus.

    Science.gov (United States)

    Roberts, L E

    2018-03-01

    Deafferentation caused by cochlear pathology (which can be hidden from the audiogram) activates forms of neural plasticity in auditory pathways, generating tinnitus and its associated conditions including hyperacusis. This article discusses tinnitus mechanisms and suggests how these mechanisms may relate to those involved in normal auditory information processing. Research findings from animal models of tinnitus and from electromagnetic imaging of tinnitus patients are reviewed which pertain to the role of deafferentation and neural plasticity in tinnitus and hyperacusis. Auditory neurons compensate for deafferentation by increasing their input/output functions (gain) at multiple levels of the auditory system. Forms of homeostatic plasticity are believed to be responsible for this neural change, which increases the spontaneous and driven activity of neurons in central auditory structures in animals expressing behavioral evidence of tinnitus. Another tinnitus correlate, increased neural synchrony among the affected neurons, is forged by spike-timing-dependent neural plasticity in auditory pathways. Slow oscillations generated by bursting thalamic neurons verified in tinnitus animals appear to modulate neural plasticity in the cortex, integrating tinnitus neural activity with information in brain regions supporting memory, emotion, and consciousness which exhibit increased metabolic activity in tinnitus patients. The latter process may be induced by transient auditory events in normal processing but it persists in tinnitus, driven by phantom signals from the auditory pathway. Several tinnitus therapies attempt to suppress tinnitus through plasticity, but repeated sessions will likely be needed to prevent tinnitus activity from returning owing to deafferentation as its initiating condition.

  12. Learning and plasticity in adolescence

    OpenAIRE

    Fuhrmann, Delia Ute Dorothea

    2017-01-01

    Adolescence is the period of life between puberty and relative independence. It is a time during which the human brain undergoes protracted changes - particularly in the frontal, parietal and temporal cortices. These changes have been linked to improvements in cognitive performance; and are thought to render adolescence a period of relatively high levels of plasticity, during which the environment has a heightened impact on brain development and behaviour. This thesis investigates learning an...

  13. Spike timing analysis in neural networks with unsupervised synaptic plasticity

    Science.gov (United States)

    Mizusaki, B. E. P.; Agnes, E. J.; Brunnet, L. G.; Erichsen, R., Jr.

    2013-01-01

    The synaptic plasticity rules that sculpt a neural network architecture are key elements in understanding cortical processing, as they may explain the emergence of stable, functional activity, while avoiding runaway excitation. For an associative memory framework, they should be built in such a way as to enable the network to reproduce a robust spatio-temporal trajectory in response to an external stimulus. Still, how these rules may be implemented in recurrent networks and the way they relate to their capacity of pattern recognition remains unclear. We studied the effects of three phenomenological unsupervised rules in sparsely connected recurrent networks for associative memory: spike-timing-dependent plasticity, short-term plasticity, and homeostatic scaling. The system stability is monitored during the learning process of the network, as the mean firing rate converges to a value determined by the homeostatic scaling. Afterwards, it is possible to measure the recovery efficiency of the activity following each initial stimulus. This is evaluated by a measure of the correlation between spike firing times, and we analysed the full memory separation capacity and limitations of this system.
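    Of the three rules, the homeostatic scaling that pins the mean firing rate can be sketched in a few lines. The target rate, gain, and rate model below are illustrative, not the paper's values:

```python
def homeostatic_scaling(w, rate, target=5.0, tau=0.1):
    """Multiplicatively scale all incoming weights so the firing
    rate drifts toward the homeostatic target."""
    factor = 1.0 + tau * (target - rate) / target
    return [wi * factor for wi in w]

# toy neuron whose rate is simply proportional to its summed input weights
w = [0.5, 1.0, 1.5]
for _ in range(200):
    rate = 2.0 * sum(w)
    w = homeostatic_scaling(w, rate)
```

Because the scaling is multiplicative, it normalizes overall drive while preserving the relative weight pattern, which is why a rate can converge to the homeostatic set point without erasing structure stored by the other two rules.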

  14. A framework for plasticity implementation on the SpiNNaker neural architecture.

    Science.gov (United States)

    Galluppi, Francesco; Lagorce, Xavier; Stromatias, Evangelos; Pfeiffer, Michael; Plana, Luis A; Furber, Steve B; Benosman, Ryad B

    2014-01-01

    Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing-dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large-scale simulations of plastic neural networks on special-purpose hardware platforms, because synaptic transmissions and updates are badly matched to the computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard spike-timing-dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
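    Among the rules mentioned, the rate-based BCM rule is the most compact to state. A sketch with illustrative constants (this is the textbook form, not the SpiNNaker implementation itself): potentiation when post-synaptic activity exceeds a sliding threshold that tracks the square of recent activity, depression below it.

```python
def bcm_step(w, pre, post, theta, lr=0.1, tau=0.5):
    """Rate-based BCM: LTP when post-synaptic activity exceeds the
    sliding threshold theta, LTD below it; theta chases post**2."""
    w_new = w + lr * pre * post * (post - theta)
    theta_new = theta + tau * (post ** 2 - theta)
    return w_new, theta_new

w_up, theta_up = bcm_step(w=0.5, pre=1.0, post=2.0, theta=1.0)  # above theta: LTP
w_dn, _ = bcm_step(w=0.5, pre=1.0, post=0.5, theta=1.0)         # below theta: LTD
```

The sliding threshold is what stabilizes the rule: strong activity raises theta, making further potentiation harder, which is the kind of state that must be kept per synapse or per neuron on hardware.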

  15. Neural Plasticity and Neurorehabilitation: Teaching the New Brain Old Tricks

    Science.gov (United States)

    Kleim, Jeffrey A.

    2011-01-01

    Following brain injury or disease there are widespread biochemical, anatomical and physiological changes that result in what might be considered a new, very different brain. This adapted brain is forced to reacquire behaviors lost as a result of the injury or disease and relies on neural plasticity within the residual neural circuits. The same…

  16. Learning and coding in biological neural networks

    Science.gov (United States)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebra finch song. Simulation and
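    The reward-gated principle described above — correlate noisy local activity with a baseline-subtracted global reward to ascend the reward gradient — can be illustrated on a single linear unit. The input pattern, noise level, and learning rate are all invented for the sketch; the real rule operates on conductance-based spiking neurons:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 0.5, -0.5])   # fixed input pattern (illustrative)
target = 1.0                      # desired output
w = np.zeros(3)
baseline, lr, sigma = 0.0, 0.2, 0.1

for _ in range(2000):
    noise = sigma * rng.standard_normal(3)
    y = (w + noise) @ x           # noisy "activity"
    reward = -(y - target) ** 2   # global scalar reward
    # correlate the local noise with the baseline-subtracted reward
    w += lr * (reward - baseline) * noise
    baseline += 0.1 * (reward - baseline)
```

The update never sees the gradient directly; averaging the noise-reward correlation recovers it, which is the sense in which such rules perform stochastic gradient ascent on the reward.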

  17. Short-term synaptic plasticity and heterogeneity in neural systems

    Science.gov (United States)

    Mejias, J. F.; Kappen, H. J.; Longtin, A.; Torres, J. J.

    2013-01-01

    We review some recent results on neural dynamics and information processing which arise when considering several biophysical factors of interest, in particular, short-term synaptic plasticity and neural heterogeneity. The inclusion of short-term synaptic plasticity leads to enhanced long-term memory capacities, a higher robustness of memory to noise, and irregularity in the duration of the so-called up cortical states. On the other hand, considering some level of neural heterogeneity in neuron models allows neural systems to optimize information transmission in rate coding and temporal coding, two strategies commonly used by neurons to codify information in many brain areas. In all these studies, analytical approximations can be made to explain the underlying dynamics of these neural systems.
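    A common way to model the short-term synaptic plasticity discussed in this record is resource depletion. A minimal depressing-synapse sketch, with an illustrative release fraction and recovery time constant:

```python
import math

def depressing_synapse(spike_times, U=0.5, tau_rec=0.8):
    """Each spike releases a fraction U of the available resource x;
    x recovers exponentially toward 1 between spikes (times in s)."""
    x, t_prev, releases = 1.0, 0.0, []
    for t in spike_times:
        x = 1.0 - (1.0 - x) * math.exp(-(t - t_prev) / tau_rec)
        r = U * x
        releases.append(r)
        x -= r
        t_prev = t
    return releases

eff = depressing_synapse([0.0, 0.01, 0.02])  # a fast triplet of spikes
```

Because efficacy depends on recent use, such synapses act as temporal filters, which is one route by which short-term plasticity shapes up-state durations and memory robustness.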

  18. Neural plasticity and behavior - sixty years of conceptual advances.

    Science.gov (United States)

    Sweatt, J David

    2016-10-01

    This brief review summarizes 60 years of conceptual advances that have demonstrated a role for active changes in neuronal connectivity as a controller of behavior and behavioral change. Seminal studies in the first phase of the six-decade span of this review firmly established the cellular basis of behavior - a concept that we take for granted now, but which was an open question at the time. Hebbian plasticity, including long-term potentiation and long-term depression, was then discovered as being important for local circuit refinement in the context of memory formation and behavioral change and stabilization in the mammalian central nervous system. Direct demonstration of plasticity of neuronal circuit function in vivo, for example, hippocampal neurons forming place cell firing patterns, extended this concept. However, additional neurophysiologic and computational studies demonstrated that circuit development and stabilization additionally relies on non-Hebbian, homoeostatic, forms of plasticity, such as synaptic scaling and control of membrane intrinsic properties. Activity-dependent neurodevelopment was found to be associated with cell-wide adjustments in post-synaptic receptor density, and found to occur in conjunction with synaptic pruning. Pioneering cellular neurophysiologic studies demonstrated the critical roles of transmembrane signal transduction, NMDA receptor regulation, regulation of neural membrane biophysical properties, and back-propagating action potential in critical time-dependent coincidence detection in behavior-modifying circuits. Concerning the molecular mechanisms underlying these processes, regulation of gene transcription was found to serve as a bridge between experience and behavioral change, closing the 'nature versus nurture' divide. Both active DNA (de)methylation and regulation of chromatin structure have been validated as crucial regulators of gene transcription during learning. 
The discovery of protein synthesis dependence on the

  19. Entropy Learning in Neural Network

    Directory of Open Access Journals (Sweden)

    Geok See Ng

    2017-12-01

    Full Text Available In this paper, an entropy term is used in the learning phase of a neural network. As learning progresses, more hidden nodes get into saturation. The early creation of such hidden nodes may impair generalisation. Hence an entropy approach is proposed to dampen the early creation of such nodes. The entropy learning also helps to increase the importance of relevant nodes while dampening the less important nodes. At the end of learning, the less important nodes can then be eliminated to reduce the memory requirements of the neural network.
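    One plausible formulation of such an entropy term (a sketch, not necessarily the authors' exact definition): the binary entropy of a sigmoid activation peaks at h = 0.5 and vanishes at the saturated extremes, so adding it to the training objective pushes hidden nodes away from early saturation, while persistently low-entropy nodes can later be pruned.

```python
import numpy as np

def binary_entropy(h, eps=1e-9):
    """Entropy of a sigmoid activation: maximal at h = 0.5
    (unsaturated), approaching 0 as h -> 0 or h -> 1 (saturated)."""
    return -(h * np.log(h + eps) + (1.0 - h) * np.log(1.0 - h + eps))

def entropy_grad(h, eps=1e-9):
    """dH/dh: added to the weight gradient, it nudges a saturating
    node back toward the unsaturated regime."""
    return -np.log((h + eps) / (1.0 - h + eps))
```

The gradient is negative above h = 0.5 and positive below it, so the term always points back toward the middle of the sigmoid's operating range.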

  20. Evolving Neural Turing Machines for Reward-based Learning

    DEFF Research Database (Denmark)

    Greve, Rasmus Boll; Jacobsen, Emil Juul; Risi, Sebastian

    2016-01-01

    An unsolved problem in neuroevolution (NE) is to evolve artificial neural networks (ANN) that can store and use information to change their behavior online. While plastic neural networks have shown promise in this context, they have difficulties retaining information over longer periods of time … version of the double T-Maze, a complex reinforcement-like learning problem. In the T-Maze learning task the agent uses the memory bank to display adaptive behavior that normally requires a plastic ANN, thereby suggesting a complementary and effective mechanism for adaptive behavior in NE. …

  1. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural networks and machine learning in a statistical framework, this book offers a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  2. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately … more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
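    At its core, the BCPNN weight is the log ratio of a joint firing probability to the product of the marginals. In the implementation described above these probabilities are estimated online from exponentially smoothed spike traces (the per-synapse state variables the abstract mentions); the sketch below uses plain probabilities and an illustrative regularizer:

```python
import math

def bcpnn_weight(p_i, p_j, p_ij, eps=1e-6):
    """BCPNN weight: positive for units that fire together more often
    than chance, negative for anti-correlated units, ~0 if independent."""
    return math.log((p_ij + eps) / ((p_i + eps) * (p_j + eps)))
```

Keeping the trace estimates of p_i, p_j, and p_ij current for every synapse on every time step is exactly the per-synapse cost that motivates the event-based, analytically solved implementation.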

  3. Introduction to spiking neural networks: Information processing, learning and applications.

    Science.gov (United States)

    Ponulak, Filip; Kasinski, Andrzej

    2011-01-01

    The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, the neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.
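    The basic unit behind most of the spiking models surveyed in such introductions is the leaky integrate-and-fire neuron. A minimal sketch, with an illustrative time step, time constant, and threshold:

```python
def lif(inputs, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward
    rest, integrates the input current, and emits a spike (then
    resets) on crossing the threshold."""
    v, spikes = 0.0, []
    for i in inputs:
        v += dt * (-v / tau + i)
        if v >= v_th:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes
```

The output is a binary spike train rather than a rate, which is what makes precise spike timing, and hence timing-based codes and plasticity rules, expressible in these models.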

  4. Enhancement of signal sensitivity in a heterogeneous neural network refined from synaptic plasticity

    Energy Technology Data Exchange (ETDEWEB)

    Li Xiumin; Small, Michael, E-mail: ensmall@polyu.edu.h, E-mail: 07901216r@eie.polyu.edu.h [Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong)

    2010-08-15

    Long-term synaptic plasticity induced by neural activity is of great importance in informing the formation of neural connectivity and the development of the nervous system. It is reasonable to consider self-organized neural networks instead of prior imposition of a specific topology. In this paper, we propose a novel network evolved from two stages of the learning process, which are respectively guided by two experimentally observed synaptic plasticity rules, i.e. the spike-timing-dependent plasticity (STDP) mechanism and the burst-timing-dependent plasticity (BTDP) mechanism. Due to the existence of heterogeneity in neurons that exhibit different degrees of excitability, a two-level hierarchical structure is obtained after the synaptic refinement. This self-organized network shows higher sensitivity to afferent current injection compared with alternative archetypal networks with different neural connectivity. Statistical analysis also demonstrates that it has the small-world properties of small shortest path length and high clustering coefficients. Thus the selectively refined connectivity enhances the ability of neuronal communications and improves the efficiency of signal transmission in the network.
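    The first of the two refinement rules, STDP, is commonly written as an exponentially decaying window over the pre/post spike-time difference. A standard pair-based sketch (amplitudes and time constant are illustrative, and the BTDP stage is omitted):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Pair-based STDP: potentiation when the pre-synaptic spike
    precedes the post-synaptic one (dt = t_post - t_pre > 0),
    depression otherwise; both decay with the spike-time gap."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Applied repeatedly during activity, this asymmetric window selectively strengthens causally ordered connections, which is the ingredient that lets the refinement stages prune toward the hierarchical structure the paper reports.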

  5. Enhancement of signal sensitivity in a heterogeneous neural network refined from synaptic plasticity

    International Nuclear Information System (INIS)

    Li Xiumin; Small, Michael

    2010-01-01

    Long-term synaptic plasticity induced by neural activity is of great importance in informing the formation of neural connectivity and the development of the nervous system. It is reasonable to consider self-organized neural networks instead of prior imposition of a specific topology. In this paper, we propose a novel network evolved from two stages of the learning process, which are respectively guided by two experimentally observed synaptic plasticity rules, i.e. the spike-timing-dependent plasticity (STDP) mechanism and the burst-timing-dependent plasticity (BTDP) mechanism. Due to the existence of heterogeneity in neurons that exhibit different degrees of excitability, a two-level hierarchical structure is obtained after the synaptic refinement. This self-organized network shows higher sensitivity to afferent current injection compared with alternative archetypal networks with different neural connectivity. Statistical analysis also demonstrates that it has the small-world properties of small shortest path length and high clustering coefficients. Thus the selectively refined connectivity enhances the ability of neuronal communications and improves the efficiency of signal transmission in the network.

  6. Psychedelics Promote Structural and Functional Neural Plasticity.

    Science.gov (United States)

    Ly, Calvin; Greb, Alexandra C; Cameron, Lindsay P; Wong, Jonathan M; Barragan, Eden V; Wilson, Paige C; Burbach, Kyle F; Soltanzadeh Zarandi, Sina; Sood, Alexander; Paddy, Michael R; Duim, Whitney C; Dennis, Megan Y; McAllister, A Kimberley; Ori-McKenney, Kassandra M; Gray, John A; Olson, David E

    2018-06-12

    Atrophy of neurons in the prefrontal cortex (PFC) plays a key role in the pathophysiology of depression and related disorders. The ability to promote both structural and functional plasticity in the PFC has been hypothesized to underlie the fast-acting antidepressant properties of the dissociative anesthetic ketamine. Here, we report that, like ketamine, serotonergic psychedelics are capable of robustly increasing neuritogenesis and/or spinogenesis both in vitro and in vivo. These changes in neuronal structure are accompanied by increased synapse number and function, as measured by fluorescence microscopy and electrophysiology. The structural changes induced by psychedelics appear to result from stimulation of the TrkB, mTOR, and 5-HT2A signaling pathways and could possibly explain the clinical effectiveness of these compounds. Our results underscore the therapeutic potential of psychedelics and, importantly, identify several lead scaffolds for medicinal chemistry efforts focused on developing plasticity-promoting compounds as safe, effective, and fast-acting treatments for depression and related disorders. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  7. Environmental effects on fish neural plasticity and cognition.

    Science.gov (United States)

    Ebbesson, L O E; Braithwaite, V A

    2012-12-01

    Most fishes experiencing challenging environments are able to adjust and adapt their physiology and behaviour to help them cope more effectively. Much of this flexibility is supported and influenced by cognition and neural plasticity. The understanding of fish cognition and the role played by different regions of the brain has improved significantly in recent years. Techniques such as lesioning, tract tracing and quantifying changes in gene expression help in mapping specialized brain areas. It is now recognized that the fish brain remains plastic throughout a fish's life and that it continues to be sensitive to environmental challenges. The early development of fish brains is shaped by experiences with the environment and this can promote positive and negative effects on both neural plasticity and cognitive ability. This review focuses on what is known about the interactions between the environment, the telencephalon and cognition. Examples are used from a diverse array of fish species, but there could be a lot to be gained by focusing research on neural plasticity and cognition in fishes for which there is already a wealth of knowledge relating to their physiology, behaviour and natural history, e.g. the Salmonidae. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  8. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks

    Directory of Open Access Journals (Sweden)

    Runchun Mark Wang

    2015-05-01

    Full Text Available We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted and/or delayed pre-synaptic spike to the target synapse in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.
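    The record does not give the STDDP update itself, so the following is a hypothetical delay-plasticity step consistent with the general idea: nudge the synaptic delay so the delayed pre-synaptic spike lands on the post-synaptic spike time. The learning rate and delay bounds are invented for the sketch:

```python
def stddp_step(delay, t_pre, t_post, lr=0.5, d_min=0.0, d_max=0.02):
    """Hypothetical delay plasticity: move the delay toward the value
    that makes the delayed pre-synaptic spike coincide with the
    post-synaptic spike, clamped to a plausible range (times in s)."""
    err = t_post - (t_pre + delay)
    return max(d_min, min(d_max, delay + lr * err))

delay = 0.001
for _ in range(20):
    delay = stddp_step(delay, t_pre=0.000, t_post=0.010)
```

Like the paper's adaptors, such a rule needs only the pre- and post-synaptic arrival times, which is what makes a generic, time-multiplexed adaptor array feasible.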

  9. Learning to learn - intrinsic plasticity as a metaplasticity mechanism for memory formation.

    Science.gov (United States)

    Sehgal, Megha; Song, Chenghui; Ehlers, Vanessa L; Moyer, James R

    2013-10-01

    "Use it or lose it" is a popular adage often associated with use-dependent enhancement of cognitive abilities. Much research has focused on understanding exactly how the brain changes as a function of experience. Such experience-dependent plasticity involves both structural and functional alterations that contribute to adaptive behaviors, such as learning and memory, as well as maladaptive behaviors, including anxiety disorders, phobias, and posttraumatic stress disorder. With the advancing age of our population, understanding how use-dependent plasticity changes across the lifespan may also help to promote healthy brain aging. A common misconception is that such experience-dependent plasticity (e.g., associative learning) is synonymous with synaptic plasticity. Other forms of plasticity also play a critical role in shaping adaptive changes within the nervous system, including intrinsic plasticity - a change in the intrinsic excitability of a neuron. Intrinsic plasticity can result from a change in the number, distribution or activity of various ion channels located throughout the neuron. Here, we review evidence that intrinsic plasticity is an important and evolutionarily conserved neural correlate of learning. Intrinsic plasticity acts as a metaplasticity mechanism by lowering the threshold for synaptic changes. Thus, learning-related intrinsic changes can facilitate future synaptic plasticity and learning. Such intrinsic changes can impact the allocation of a memory trace within a brain structure, and when compromised, can contribute to cognitive decline during the aging process. This unique role of intrinsic excitability can provide insight into how memories are formed and, more interestingly, how neurons that participate in a memory trace are selected. 
Most importantly, modulation of intrinsic excitability can allow for regulation of learning ability - this can prevent or provide treatment for cognitive decline not only in patients with clinical disorders but

  10. Learning to learn – intrinsic plasticity as a metaplasticity mechanism for memory formation

    Science.gov (United States)

    Sehgal, Megha; Song, Chenghui; Ehlers, Vanessa L.; Moyer, James R.

    2013-01-01

    “Use it or lose it” is a popular adage often associated with use-dependent enhancement of cognitive abilities. Much research has focused on understanding exactly how the brain changes as a function of experience. Such experience-dependent plasticity involves both structural and functional alterations that contribute to adaptive behaviors, such as learning and memory, as well as maladaptive behaviors, including anxiety disorders, phobias, and posttraumatic stress disorder. With the advancing age of our population, understanding how use-dependent plasticity changes across the lifespan may also help to promote healthy brain aging. A common misconception is that such experience-dependent plasticity (e.g., associative learning) is synonymous with synaptic plasticity. Other forms of plasticity also play a critical role in shaping adaptive changes within the nervous system, including intrinsic plasticity – a change in the intrinsic excitability of a neuron. Intrinsic plasticity can result from a change in the number, distribution or activity of various ion channels located throughout the neuron. Here, we review evidence that intrinsic plasticity is an important and evolutionarily conserved neural correlate of learning. Intrinsic plasticity acts as a metaplasticity mechanism by lowering the threshold for synaptic changes. Thus, learning-related intrinsic changes can facilitate future synaptic plasticity and learning. Such intrinsic changes can impact the allocation of a memory trace within a brain structure, and when compromised, can contribute to cognitive decline during the aging process. This unique role of intrinsic excitability can provide insight into how memories are formed and, more interestingly, how neurons that participate in a memory trace are selected. Most importantly, modulation of intrinsic excitability can allow for regulation of learning ability – this can prevent or provide treatment for cognitive decline not only in patients with clinical

  11. Learning in Artificial Neural Systems

    Science.gov (United States)

    Matheus, Christopher J.; Hohensee, William E.

    1987-01-01

This paper presents an overview and analysis of learning in Artificial Neural Systems (ANS's). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANS's is then described and compared with classical machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.
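The individual-weight modification that distinguishes ANS learning can be made concrete with a minimal error-driven update, a perceptron-style rule; the task (logical AND) and all parameters here are invented for illustration.

```python
# Perceptron-style learning sketch: each weight is nudged in proportion
# to the error between target and output, the kind of individual-weight
# modification contrasted above with symbolic machine learning.
def train_delta(samples, lr=0.1, epochs=50):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1.0 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0.0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, a linearly separable function:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_delta(data)
predict = lambda x: 1.0 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0.0
```

After training, `predict` reproduces the AND truth table; for a function outside the network's representational class (e.g. XOR with a single layer), no weight setting exists, which is the architecture restriction the paper discusses.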

  12. Neural correlates of face gender discrimination learning.

    Science.gov (United States)

    Su, Junzhu; Tan, Qingleng; Fang, Fang

    2013-04-01

Using combined psychophysics and event-related potentials (ERPs), we investigated the effect of perceptual learning on face gender discrimination and probed the neural correlates of the learning effect. Human subjects were trained to perform a gender discrimination task with male or female faces. Before and after training, they were tested with the trained faces and other faces of the same and opposite genders. ERPs responding to these faces were recorded. Psychophysical results showed that training significantly improved subjects' discrimination performance and the improvement was specific to the trained gender, as well as to the trained identities. The training effect indicates that learning occurs at two levels: the category level (gender) and the exemplar level (identity). ERP analyses showed that the gender and identity learning was associated with the N170 latency reduction at the left occipital-temporal area and the N170 amplitude reduction at the right occipital-temporal area, respectively. These findings provide evidence for the facilitation model and the sharpening model of neuronal plasticity from visual experience, suggesting a faster processing speed and a sparser representation of faces induced by perceptual learning.

  13. The malleable brain: plasticity of neural circuits and behavior - a review from students to students.

    Science.gov (United States)

    Schaefer, Natascha; Rotermund, Carola; Blumrich, Eva-Maria; Lourenco, Mychael V; Joshi, Pooja; Hegemann, Regina U; Jamwal, Sumit; Ali, Nilufar; García Romero, Ezra Michelet; Sharma, Sorabh; Ghosh, Shampa; Sinha, Jitendra K; Loke, Hannah; Jain, Vishal; Lepeta, Katarzyna; Salamian, Ahmad; Sharma, Mahima; Golpich, Mojtaba; Nawrotek, Katarzyna; Paidi, Ramesh K; Shahidzadeh, Sheila M; Piermartiri, Tetsade; Amini, Elham; Pastor, Veronica; Wilson, Yvette; Adeniyi, Philip A; Datusalia, Ashok K; Vafadari, Benham; Saini, Vedangana; Suárez-Pozos, Edna; Kushwah, Neetu; Fontanet, Paula; Turner, Anthony J

    2017-06-20

    One of the most intriguing features of the brain is its ability to be malleable, allowing it to adapt continually to changes in the environment. Specific neuronal activity patterns drive long-lasting increases or decreases in the strength of synaptic connections, referred to as long-term potentiation and long-term depression, respectively. Such phenomena have been described in a variety of model organisms, which are used to study molecular, structural, and functional aspects of synaptic plasticity. This review originated from the first International Society for Neurochemistry (ISN) and Journal of Neurochemistry (JNC) Flagship School held in Alpbach, Austria (Sep 2016), and will use its curriculum and discussions as a framework to review some of the current knowledge in the field of synaptic plasticity. First, we describe the role of plasticity during development and the persistent changes of neural circuitry occurring when sensory input is altered during critical developmental stages. We then outline the signaling cascades resulting in the synthesis of new plasticity-related proteins, which ultimately enable sustained changes in synaptic strength. Going beyond the traditional understanding of synaptic plasticity conceptualized by long-term potentiation and long-term depression, we discuss system-wide modifications and recently unveiled homeostatic mechanisms, such as synaptic scaling. Finally, we describe the neural circuits and synaptic plasticity mechanisms driving associative memory and motor learning. Evidence summarized in this review provides a current view of synaptic plasticity in its various forms, offers new insights into the underlying mechanisms and behavioral relevance, and provides directions for future research in the field of synaptic plasticity. Read the Editorial Highlight for this article on doi: 10.1111/jnc.14102. © 2017 International Society for Neurochemistry.

  14. Neural Plasticity Associated with Hippocampal PKA-CREB and NMDA Signaling Is Involved in the Antidepressant Effect of Repeated Low Dose of Yueju Pill on Chronic Mouse Model of Learned Helplessness

    Directory of Open Access Journals (Sweden)

    Zhilu Zou

    2017-01-01

Full Text Available Yueju pill is a traditional Chinese medicine formulated to treat syndromes of mood disorders. Here, we investigated the therapeutic effect of repeated low dose of Yueju in an animal model mimicking the clinical long-term depression condition and the role of neural plasticity associated with PKA (protein kinase A)-CREB (cAMP response element binding protein) and NMDA (N-methyl-D-aspartate) signaling. We showed that a single low dose of Yueju demonstrated antidepressant effects in tests of tail suspension, forced swim, and novelty-suppressed feeding. A chronic learned helplessness (LH) protocol resulted in a long-term depressive-like condition. Repeated administration of Yueju following chronic LH remarkably alleviated all of the depressive-like symptoms measured, whereas the conventional antidepressant fluoxetine showed only a minor improvement. In the hippocampus, Yueju and fluoxetine both normalized brain-derived neurotrophic factor (BDNF) and PKA levels. Only Yueju, not fluoxetine, rescued the deficits in CREB signaling. The chronic LH upregulated the expression of NMDA receptor subunits NR1, NR2A, and NR2B, which were all attenuated by Yueju. Furthermore, intracerebroventricular administration of NMDA blunted the antidepressant effect of Yueju. These findings support the antidepressant efficacy of repeated routine low dose of Yueju in a long-term depression model and the critical role of CREB and NMDA signaling.

  15. Neural Plasticity Associated with Hippocampal PKA-CREB and NMDA Signaling Is Involved in the Antidepressant Effect of Repeated Low Dose of Yueju Pill on Chronic Mouse Model of Learned Helplessness.

    Science.gov (United States)

    Zou, Zhilu; Chen, Yin; Shen, Qinqin; Guo, Xiaoyan; Zhang, Yuxuan; Chen, Gang

    2017-01-01

Yueju pill is a traditional Chinese medicine formulated to treat syndromes of mood disorders. Here, we investigated the therapeutic effect of repeated low dose of Yueju in an animal model mimicking the clinical long-term depression condition and the role of neural plasticity associated with PKA (protein kinase A)-CREB (cAMP response element binding protein) and NMDA (N-methyl-D-aspartate) signaling. We showed that a single low dose of Yueju demonstrated antidepressant effects in tests of tail suspension, forced swim, and novelty-suppressed feeding. A chronic learned helplessness (LH) protocol resulted in a long-term depressive-like condition. Repeated administration of Yueju following chronic LH remarkably alleviated all of the depressive-like symptoms measured, whereas the conventional antidepressant fluoxetine showed only a minor improvement. In the hippocampus, Yueju and fluoxetine both normalized brain-derived neurotrophic factor (BDNF) and PKA levels. Only Yueju, not fluoxetine, rescued the deficits in CREB signaling. The chronic LH upregulated the expression of NMDA receptor subunits NR1, NR2A, and NR2B, which were all attenuated by Yueju. Furthermore, intracerebroventricular administration of NMDA blunted the antidepressant effect of Yueju. These findings support the antidepressant efficacy of repeated routine low dose of Yueju in a long-term depression model and the critical role of CREB and NMDA signaling.

  16. Different propagation speeds of recalled sequences in plastic spiking neural networks

    Science.gov (United States)

    Huang, Xuhui; Zheng, Zhigang; Hu, Gang; Wu, Si; Rasch, Malte J.

    2015-03-01

Neural networks can generate spatiotemporal patterns of spike activity. Sequential activity learning and retrieval have been observed in many brain areas and are crucial, for example, for coding episodic memory in the hippocampus or for generating temporal patterns during song production in birds. In a recent study, a sequential activity pattern was directly entrained onto the neural activity of the primary visual cortex (V1) of rats and subsequently successfully recalled by a local and transient trigger. It was observed that the speed of activity propagation in coordinates of the retinotopically organized neural tissue was constant during retrieval, regardless of how the speed of the light stimulation sweeping across the visual field during training was varied. It is well known that spike-timing dependent plasticity (STDP) is a potential mechanism for embedding temporal sequences into neural network activity. How training and retrieval speeds relate to each other, and how network and learning parameters influence retrieval speeds, however, is not well described. We here theoretically analyze sequential activity learning and retrieval in a recurrent neural network with realistic synaptic short-term dynamics and STDP. Testing multiple STDP rules, we confirm that sequence learning can be achieved by STDP. However, we found that a multiplicative nearest-neighbor (NN) weight update rule generated weight distributions and recall activities that best matched the experiments in V1. Using network simulations and mean-field analysis, we further investigated the learning mechanisms and the influence of network parameters on recall speeds. Our analysis suggests that a multiplicative STDP rule with dominant NN spike interaction might be implemented in V1 since recall speed was almost constant in an NMDA-dominant regime. Interestingly, in an AMPA-dominant regime, neural circuits might exhibit recall speeds that instead follow the change in stimulus speeds. This prediction could be tested in
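A multiplicative nearest-neighbor STDP update of the kind the authors favor can be sketched roughly as follows; the soft-bound form and all constants are assumptions for illustration, not the paper's exact rule.

```python
import math

# Multiplicative nearest-neighbor STDP sketch: only the nearest
# pre/post spike pair contributes, and the update scales with the
# current weight (soft bounds in [0, w_max]). Constants are invented.
def nn_stdp_update(w, t_pre, t_post, w_max=1.0,
                   a_plus=0.05, a_minus=0.06, tau=20.0):
    dt = t_post - t_pre
    if dt > 0:    # potentiation scaled by the remaining headroom
        w += a_plus * (w_max - w) * math.exp(-dt / tau)
    elif dt < 0:  # depression scaled by the current weight
        w -= a_minus * w * math.exp(dt / tau)
    return min(max(w, 0.0), w_max)
```

Because both branches scale with the current weight, repeated updates produce the graded, unimodal weight distributions that distinguish multiplicative rules from additive ones.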

  17. Neural Plastic Effects of Cognitive Training on Aging Brain

    Directory of Open Access Journals (Sweden)

    Natalie T. Y. Leung

    2015-01-01

Full Text Available Increasing research has shown that our brain retains a capacity to change in response to experience until late adulthood. This implies that cognitive training can possibly ameliorate age-associated cognitive decline by inducing training-specific neural plastic changes at both neural and behavioral levels. This longitudinal study examined the behavioral effects of a systematic thirteen-week cognitive training program on attention and working memory of older adults who were at risk of cognitive decline. These older adults were randomly assigned to the Cognitive Training Group (n=109) and the Active Control Group (n=100). Findings clearly indicated that training induced improvement in auditory and visual-spatial attention and working memory. The training effect was specific to the experience provided because no significant difference in verbal and visual-spatial memory between the two groups was observed. This pattern of findings is consistent with the prediction and the principle of experience-dependent neuroplasticity. Findings of our study provided further support for the notion that the neural plastic potential continues until older age. The baseline cognitive status did not correlate with pre- versus posttraining changes to any cognitive variables studied, suggesting that the initial cognitive status may not limit the neuroplastic potential of the brain at an old age.

  18. Neural Plastic Effects of Cognitive Training on Aging Brain.

    Science.gov (United States)

    Leung, Natalie T Y; Tam, Helena M K; Chu, Leung W; Kwok, Timothy C Y; Chan, Felix; Lam, Linda C W; Woo, Jean; Lee, Tatia M C

    2015-01-01

Increasing research has shown that our brain retains a capacity to change in response to experience until late adulthood. This implies that cognitive training can possibly ameliorate age-associated cognitive decline by inducing training-specific neural plastic changes at both neural and behavioral levels. This longitudinal study examined the behavioral effects of a systematic thirteen-week cognitive training program on attention and working memory of older adults who were at risk of cognitive decline. These older adults were randomly assigned to the Cognitive Training Group (n = 109) and the Active Control Group (n = 100). Findings clearly indicated that training induced improvement in auditory and visual-spatial attention and working memory. The training effect was specific to the experience provided because no significant difference in verbal and visual-spatial memory between the two groups was observed. This pattern of findings is consistent with the prediction and the principle of experience-dependent neuroplasticity. Findings of our study provided further support for the notion that the neural plastic potential continues until older age. The baseline cognitive status did not correlate with pre- versus posttraining changes to any cognitive variables studied, suggesting that the initial cognitive status may not limit the neuroplastic potential of the brain at an old age.

  19. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    Directory of Open Access Journals (Sweden)

Eduard Grinke

    2015-10-01

Full Text Available Walking animals, like insects, with little neural computing can effectively perform complex behaviors. They can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations as well as escaping from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.
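Online correlation-based learning combined with synaptic scaling can be sketched roughly as below; the homeostatic-target formulation and all constants are illustrative assumptions rather than the controller's actual equations.

```python
# Correlation-based (Hebbian) update with synaptic scaling for one
# connection of a small recurrent sensory network. The rates and
# constants are invented for illustration.
def hebb_with_scaling(w, pre, post, lr=0.01, target=1.0, scale=0.001):
    w = w + lr * pre * post          # Hebbian term: grow with pre/post correlation
    w = w + scale * (target - w)     # scaling term: relax toward a homeostatic target
    return w

# With no correlated activity, scaling slowly pulls the weight
# toward its target, keeping the network in an operating regime:
w = 0.5
for _ in range(3000):
    w = hebb_with_scaling(w, pre=0.0, post=0.0)
```

The Hebbian term strengthens connections driven by correlated sensory input, while the scaling term prevents runaway growth, which is what lets the network settle into the hysteresis/attractor regimes exploited for turning behavior.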

  20. Adaptive competitive learning neural networks

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2013-11-01

Full Text Available In this paper, the adaptive competitive learning (ACL) neural network algorithm is proposed. This neural network not only groups similar input feature vectors together but also determines the appropriate number of groups of these vectors. This algorithm uses a new proposed criterion referred to as the ACL criterion. This criterion evaluates different clustering structures produced by the ACL neural network for an input data set. Then, it selects the best clustering structure and the corresponding network architecture for this data set. The selected structure is composed of the minimum number of clusters that are compact and balanced in their sizes. The selected network architecture is efficient, in terms of its complexity, as it contains the minimum number of neurons. Synaptic weight vectors of these neurons represent well-separated, compact and balanced clusters in the input data set. The performance of the ACL algorithm is evaluated and compared with the performance of a recently proposed algorithm in the literature in clustering an input data set and determining its number of clusters. Results show that the ACL algorithm is more accurate and robust in both determining the number of clusters and allocating input feature vectors into these clusters than the other algorithm, especially with data sets that are sparsely distributed.
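The core competitive-learning step that such algorithms build on can be sketched as follows; the ACL model-selection criterion itself is not reproduced here, and the data and learning rate are invented for illustration.

```python
# Competitive learning sketch: the prototype (synaptic weight vector)
# nearest to the input wins and is moved toward that input, so each
# prototype comes to represent one cluster of feature vectors.
def competitive_step(prototypes, x, lr=0.1):
    # find the winner: the prototype with minimum squared distance to x
    dists = [sum((pi - xi) ** 2 for pi, xi in zip(p, x)) for p in prototypes]
    win = dists.index(min(dists))
    # move only the winner toward the input vector
    prototypes[win] = [pi + lr * (xi - pi) for pi, xi in zip(prototypes[win], x)]
    return win

protos = [[0.0, 0.0], [1.0, 1.0]]
winner = competitive_step(protos, (0.1, 0.0))
```

Repeating this step over a data set drives each prototype to the centroid of the inputs it wins; a criterion like ACL then compares such clusterings across different numbers of prototypes.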

  1. Plasticity in memristive devices for Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

Sylvain Saïghi

    2015-03-01

    Full Text Available Memristive devices present a new device technology allowing for the realization of compact nonvolatile memories. Some of them are already in the process of industrialization. Additionally, they exhibit complex multilevel and plastic behaviors, which make them good candidates for the implementation of artificial synapses in neuromorphic engineering. However, memristive effects rely on diverse physical mechanisms, and their plastic behaviors differ strongly from one technology to another. Here, we present measurements performed on different memristive devices and the opportunities that they provide. We show that they can be used to implement different learning rules whose properties emerge directly from device physics: real time or accelerated operation, deterministic or stochastic behavior, long term or short term plasticity. We then discuss how such devices might be integrated into a complete architecture. These results highlight that there is no unique way to exploit memristive devices in neuromorphic systems. Understanding and embracing device physics is the key for their optimal use.

  2. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    Science.gov (United States)

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-04-30

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

  3. Unsupervised discrimination of patterns in spiking neural networks with excitatory and inhibitory synaptic plasticity.

    Science.gov (United States)

    Srinivasa, Narayan; Cho, Youngkwan

    2014-01-01

A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network, and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP enables this learning to be stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be read out by the sink layer neurons. This activity is used for adaptation of synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to compute whether the network can discriminate between old patterns already presented in an initial training session. The DI is also used to compute whether the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns, both old and new. This result holds as long as inhibitory synapses employ STDP to continuously enable current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity.
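Inhibitory STDP rules that establish E/I balance are often written as a potentiation window for near-coincident pre/post spikes plus a depression term for unpaired presynaptic spikes (in the spirit of Vogels-style balance rules); this sketch and its constants are assumptions, not the paper's implementation.

```python
import math

# Inhibitory STDP sketch for E/I balance: near-coincident pre/post
# spikes strengthen the inhibitory synapse; an isolated presynaptic
# spike weakens it. This drives inhibitory current to track excitation
# at each neuron. All constants are invented for illustration.
def inhib_stdp(w, dt=None, eta=0.01, tau=20.0, alpha=0.2):
    if dt is None:
        # isolated pre spike (no nearby post spike): depress
        return max(w - eta * alpha, 0.0)
    # paired spikes: potentiate, symmetrically in |dt|
    return w + eta * math.exp(-abs(dt) / tau)
```

When a neuron fires too much, pairings dominate and inhibition grows; when it falls silent, unpaired pre spikes dominate and inhibition decays, which is the stabilizing balance described above.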

  4. Perceptual learning and adult cortical plasticity.

    Science.gov (United States)

    Gilbert, Charles D; Li, Wu; Piech, Valentin

    2009-06-15

The visual cortex retains the capacity for experience-dependent changes, or plasticity, of cortical function and cortical circuitry throughout life. These changes constitute the mechanism of perceptual learning in normal visual experience and in recovery of function after CNS damage. Such plasticity can be seen at multiple stages in the visual pathway, including primary visual cortex. The manifestation of the functional changes associated with perceptual learning involves both long-term modification of cortical circuits during the course of learning and short-term dynamics in the functional properties of cortical neurons. These dynamics are subject to top-down influences of attention, expectation and perceptual task. As a consequence, each cortical area is an adaptive processor, altering its function in accordance with immediate perceptual demands.

  5. Study of neural plasticity in braille reading visually challenged individuals

    Directory of Open Access Journals (Sweden)

Nikhat Yasmeen, Mohammed Muslaiuddin Khalid, Abdul Raoof Omer Siddique, Madhuri Taranikanti, Sanghamitra Panda, D. Usha Rani

    2014-03-01

Full Text Available Background: Neural plasticity includes a wide range of adaptive changes due to loss or absence of a particular sense. Cortical mapping or reorganization is an evolutionarily conserved mechanism which involves either an unmasking of previously silent connections and/or sprouting of new neural elements. Aims & Objectives: To compare the somatosensory evoked potential (SSEP) wave forms in normal and visually challenged individuals. Materials & Methods: 20 visually challenged males in the age group of 21-31 yrs were included in the study along with 20 age- & sex-matched individuals. Subjects were screened for general physical health to rule out any medical disorder, and for tactile sensibility, i.e., sensation of light touch, pressure, tactile localization & discrimination, to rule out any delay in peripheral conduction. Somatosensory evoked potentials were recorded on a Nicolet Viking Select neurodiagnostic system version 10.0. The placement of electrodes & recording of potentials were done based on the methodology in Chiappa. Data were subjected to statistical analyses using SPSS version 17.0 software. N20 & P25 latencies were shorter and amplitudes were larger in visually challenged individuals compared to age- & sex-matched individuals. Conclusions: In visually challenged individuals, the decrease in latencies indicates greatly improved processing of information in the nervous system, and the increase in amplitudes indicates the extent and synchronization of the neural network involved in the processing of vision.

  6. Continual Learning through Evolvable Neural Turing Machines

    DEFF Research Database (Denmark)

    Lüders, Benno; Schläger, Mikkel; Risi, Sebastian

    2016-01-01

    Continual learning, i.e. the ability to sequentially learn tasks without catastrophic forgetting of previously learned ones, is an important open challenge in machine learning. In this paper we take a step in this direction by showing that the recently proposed Evolving Neural Turing Machine (ENTM...

  7. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot.

    Science.gov (United States)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles

  8. Learning language with the wrong neural scaffolding: The cost of neural commitment to sounds.

    Directory of Open Access Journals (Sweden)

    Amy Sue Finn

    2013-11-01

    Full Text Available Does tuning to one’s native language explain the sensitive period for language learning? We explore the idea that tuning to (or becoming more selective for) the properties of one’s native language could result in being less open (or plastic) for tuning to the properties of a new language. To explore how this might lead to the sensitive period for grammar learning, we ask if tuning to an earlier-learned aspect of language (sound structure) has an impact on the neural representation of a later-learned aspect (grammar). English-speaking adults learned one of two miniature artificial languages over 4 days in the lab. Compared to English, both languages had novel grammar, but only one was comprised of novel sounds. After learning a language, participants were scanned while judging the grammaticality of sentences. Judgments were performed for the newly learned language and English. Learners of the similar-sounds language recruited regions that overlapped more with English. Learners of the distinct-sounds language, however, recruited the Superior Temporal Gyrus (STG) to a greater extent, which was coactive with the Inferior Frontal Gyrus (IFG). Across learners, recruitment of IFG (but not STG) predicted both learning success in tests conducted prior to the scan and grammatical judgment ability during the scan. Data suggest that adults’ difficulty learning language, especially grammar, could be due, at least in part, to the neural commitments they have made to the lower level linguistic components of their native language.

  9. Learning language with the wrong neural scaffolding: the cost of neural commitment to sounds

    Science.gov (United States)

    Finn, Amy S.; Hudson Kam, Carla L.; Ettlinger, Marc; Vytlacil, Jason; D'Esposito, Mark

    2013-01-01

    Does tuning to one's native language explain the “sensitive period” for language learning? We explore the idea that tuning to (or becoming more selective for) the properties of one's native-language could result in being less open (or plastic) for tuning to the properties of a new language. To explore how this might lead to the sensitive period for grammar learning, we ask if tuning to an earlier-learned aspect of language (sound structure) has an impact on the neural representation of a later-learned aspect (grammar). English-speaking adults learned one of two miniature artificial languages (MALs) over 4 days in the lab. Compared to English, both languages had novel grammar, but only one was comprised of novel sounds. After learning a language, participants were scanned while judging the grammaticality of sentences. Judgments were performed for the newly learned language and English. Learners of the similar-sounds language recruited regions that overlapped more with English. Learners of the distinct-sounds language, however, recruited the Superior Temporal Gyrus (STG) to a greater extent, which was coactive with the Inferior Frontal Gyrus (IFG). Across learners, recruitment of IFG (but not STG) predicted both learning success in tests conducted prior to the scan and grammatical judgment ability during the scan. Data suggest that adults' difficulty learning language, especially grammar, could be due, at least in part, to the neural commitments they have made to the lower level linguistic components of their native language. PMID:24273497

  10. Windowed active sampling for reliable neural learning

    NARCIS (Netherlands)

    Barakova, E.I; Spaanenburg, L

    The composition of the example set has a major impact on the quality of neural learning. The popular approach is focused on extensive pre-processing to bridge the representation gap between process measurement and neural presentation. In contrast, windowed active sampling attempts to solve these

  11. Activity-regulated genes as mediators of neural circuit plasticity.

    Science.gov (United States)

    Leslie, Jennifer H; Nedivi, Elly

    2011-08-01

    Modifications of neuronal circuits allow the brain to adapt and change with experience. This plasticity manifests during development and throughout life, and can be remarkably long lasting. Evidence has linked activity-regulated gene expression to the long-term structural and electrophysiological adaptations that take place during developmental critical periods, learning and memory, and alterations to sensory map representations in the adult. In all these cases, the cellular response to neuronal activity integrates multiple tightly coordinated mechanisms to precisely orchestrate long-lasting, functional and structural changes in brain circuits. Experience-dependent plasticity is triggered when neuronal excitation activates cellular signaling pathways from the synapse to the nucleus that initiate new programs of gene expression. The protein products of activity-regulated genes then work via a diverse array of cellular mechanisms to modify neuronal functional properties. Synaptic strengthening or weakening can reweight existing circuit connections, while structural changes including synapse addition and elimination create new connections. Posttranscriptional regulatory mechanisms, often also dependent on activity, further modulate activity-regulated gene transcript and protein function. Thus, activity-regulated genes implement varied forms of structural and functional plasticity to fine-tune brain circuit wiring. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Interactions between Depression and Facilitation within Neural Networks: Updating the Dual-Process Theory of Plasticity

    Science.gov (United States)

    Prescott, Steven A.

    1998-01-01

    Repetitive stimulation often results in habituation of the elicited response. However, if the stimulus is sufficiently strong, habituation may be preceded by transient sensitization or even replaced by enduring sensitization. In 1970, Groves and Thompson formulated the dual-process theory of plasticity to explain these characteristic behavioral changes on the basis of competition between decremental plasticity (depression) and incremental plasticity (facilitation) occurring within the neural network. Data from both vertebrate and invertebrate systems are reviewed and indicate that the effects of depression and facilitation are not exclusively additive but, rather, that those processes interact in a complex manner. Serial ordering of induction of learning, in which a depressing locus precedes the modulatory system responsible for inducing facilitation, causes the facilitation to wane. The parallel and/or serial expression of depression and waning facilitation within the stimulus–response pathway culminates in the behavioral changes that characterize dual-process learning. A mathematical model is presented to formally express and extend understanding of the interactions between depression and facilitation. PMID:10489261

  13. Plasticity-related genes in brain development and amygdala-dependent learning.

    Science.gov (United States)

    Ehrlich, D E; Josselyn, S A

    2016-01-01

    Learning about motivationally important stimuli involves plasticity in the amygdala, a temporal lobe structure. Amygdala-dependent learning involves a growing number of plasticity-related signaling pathways also implicated in brain development, suggesting that learning-related signaling in juveniles may simultaneously influence development. Here, we review the pleiotropic functions in nervous system development and amygdala-dependent learning of a signaling pathway that includes brain-derived neurotrophic factor (BDNF), extracellular signaling-related kinases (ERKs) and cyclic AMP-response element binding protein (CREB). Using these canonical, plasticity-related genes as an example, we discuss the intersection of learning-related and developmental plasticity in the immature amygdala, when aversive and appetitive learning may influence the developmental trajectory of amygdala function. We propose that learning-dependent activation of BDNF, ERK and CREB signaling in the immature amygdala exaggerates and accelerates neural development, promoting amygdala excitability and environmental sensitivity later in life. © 2015 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

  14. An Inquiry into the Neural Plasticity Underlying Everyday Actions

    Directory of Open Access Journals (Sweden)

    Garrett Tisdale

    2017-11-01

    Full Text Available How does the brain change with respect to how we live our daily lives? Modern studies on how specific actions affect the anatomy of the brain have shown that different actions shape the way the brain is organized. While individual studies might point towards these effects occurring in daily actions, the concept that morphological changes occur throughout the numerous fields of neuroplasticity based on daily actions has yet to become a well-established and widely discussed phenomenon. The goal of this article is to survey a few fields of neuroplasticity to answer this overarching question and to review brain imaging studies indicating the morphological changes associated with those fields and with everyday actions. To achieve this goal, a systematic approach revolving around scholarly search engines was used to briefly explore each studied field of interest. In this article, the activities of music production, video game play, and sleep are analyzed as indicators of such morphological change. These activities show changes to the respective areas of the brain in which the tasks are processed, with a trend arising from the amount of time spent performing each action. These fields of study show that relating everyday actions to morphological change through neural plasticity holds validity with respect to experimental studies.

  15. Machine Learning Topological Invariants with Neural Networks

    Science.gov (United States)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

    In this Letter we train neural networks in a supervised manner to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results demonstrate, remarkably, that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
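
    The "discrete version of the winding number formula" mentioned in the abstract can be written down directly: sum the wrapped angle increments of the Hamiltonian vector (h_x(k), h_y(k)) as k traverses the Brillouin zone. The SSH-style parametrization and hopping values below are illustrative assumptions, not the paper's training set.

```python
import numpy as np

# Discrete winding number of a 1-D chiral Hamiltonian
# h(k) = h_x(k) sigma_x + h_y(k) sigma_y: the net angle swept by
# (h_x, h_y) around the origin as k runs over the Brillouin zone.
def winding_number(hx, hy):
    theta = np.arctan2(hy, hx)                       # angle of h(k) at each k
    dtheta = np.diff(np.append(theta, theta[0]))     # close the loop in k-space
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi  # wrap increments to [-pi, pi)
    return int(round(dtheta.sum() / (2 * np.pi)))

k = np.linspace(0, 2 * np.pi, 400, endpoint=False)

# SSH chain: h_x = t1 + t2 cos k, h_y = t2 sin k (t1, t2 chosen for illustration)
nu_trivial = winding_number(1.5 + np.cos(k), np.sin(k))      # t1 > t2: h never encircles 0
nu_topological = winding_number(0.5 + np.cos(k), np.sin(k))  # t1 < t2: h winds once
```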

  16. Learning-parameter adjustment in neural networks

    Science.gov (United States)

    Heskes, Tom M.; Kappen, Bert

    1992-06-01

    We present a learning-parameter adjustment algorithm, valid for a large class of learning rules in neural-network literature. The algorithm follows directly from a consideration of the statistics of the weights in the network. The characteristic behavior of the algorithm is calculated, both in a fixed and a changing environment. A simple example, Widrow-Hoff learning for statistical classification, serves as an illustration.
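
    The Widrow-Hoff illustration named in the abstract can be sketched as follows; the learning rate, number of epochs, and toy two-class data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Widrow-Hoff (LMS) learning for statistical classification:
# w <- w + eta * (t - w.x) * x, with targets t in {-1, +1}.
rng = np.random.default_rng(0)

# Two well-separated Gaussian classes in 2-D
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),
               rng.normal(+1.0, 0.5, (50, 2))])
t = np.hstack([-np.ones(50), np.ones(50)])

w = np.zeros(2)
eta = 0.05
for _ in range(20):                       # a few epochs over the data
    for x_i, t_i in zip(X, t):
        w += eta * (t_i - w @ x_i) * x_i  # LMS weight update

accuracy = np.mean(np.sign(X @ w) == t)   # classify by the sign of w.x
```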

  17. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network is introduced as an efficient classifier among its peers. Unless the initial smoothing parameter value is close to the optimal one, however, the generalized classifier neural network suffers from a convergence problem and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimizing this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic-learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used by the generalized classifier neural network, the proposed logarithmic cost function and its derivative take continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost function in the proposed learning method. Owing to this fast convergence, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, the proposed method not only solves the time-requirement problem of the generalized classifier neural network but may also improve classification accuracy. Copyright © 2014 Elsevier Ltd. All rights reserved.
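
    The convergence argument, that a logarithmic cost removes the flat plateaus that slow squared-error gradient descent, can be illustrated on a single sigmoid unit. This generic sketch uses the textbook cross-entropy vs. squared-error comparison, not the paper's actual GCNN cost function; all parameters are illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(loss, w=-6.0, target=1.0, eta=0.5, steps=100):
    """Gradient descent on one sigmoid unit with input x = 1; returns the final output."""
    for _ in range(steps):
        y = sigmoid(w)
        if loss == "squared":          # E = (y - t)^2 / 2: gradient carries sigma'(z),
            grad = (y - target) * y * (1 - y)   # which vanishes on the plateaus
        else:                          # E = -t log y - (1-t) log(1-y): sigma' cancels
            grad = y - target
        w -= eta * grad
    return sigmoid(w)

y_squared = train("squared")  # still stuck near the plateau after 100 steps
y_log = train("log")          # close to the target after the same budget
```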

  18. The Radical Plasticity Thesis: How the Brain Learns to be Conscious.

    Science.gov (United States)

    Cleeremans, Axel

    2011-01-01

    In this paper, I explore the idea that consciousness is something that the brain learns to do rather than an intrinsic property of certain neural states and not others. Starting from the idea that neural activity is inherently unconscious, the question thus becomes: How does the brain learn to be conscious? I suggest that consciousness arises as a result of the brain's continuous attempts at predicting not only the consequences of its actions on the world and on other agents, but also the consequences of activity in one cerebral region on activity in other regions. By this account, the brain continuously and unconsciously learns to redescribe its own activity to itself, so developing systems of meta-representations that characterize and qualify the target first-order representations. Such learned redescriptions, enriched by the emotional value associated with them, form the basis of conscious experience. Learning and plasticity are thus central to consciousness, to the extent that experiences only occur in experiencers that have learned to know they possess certain first-order states and that have learned to care more about certain states than about others. This is what I call the "Radical Plasticity Thesis." In a sense thus, this is the enactive perspective, but turned both inwards and (further) outwards. Consciousness involves "signal detection on the mind"; the conscious mind is the brain's (non-conceptual, implicit) theory about itself. I illustrate these ideas through neural network models that simulate the relationships between performance and awareness in different tasks.

  19. The Radical Plasticity Thesis: How the brain learns to be conscious

    Directory of Open Access Journals (Sweden)

    Axel Cleeremans

    2011-05-01

    Full Text Available In this paper, I explore the idea that consciousness is something that the brain learns to do rather than an intrinsic property of certain neural states and not others. Starting from the idea that neural activity is inherently unconscious, the question thus becomes: How does the brain learn to be conscious? I suggest that consciousness arises as a result of the brain's continuous attempts at predicting not only the consequences of its actions on the world and on other agents, but also the consequences of activity in one cerebral region on activity in other regions. By this account, the brain continuously and unconsciously learns to redescribe its own activity to itself, so developing systems of meta-representations that characterise and qualify the target first-order representations. Such learned redescriptions, enriched by the emotional value associated with them, form the basis of conscious experience. Learning and plasticity are thus central to consciousness, to the extent that experiences only occur in experiencers that have learned to know they possess certain first-order states and that have learned to care more about certain states than about others. This is what I call the Radical Plasticity Thesis. In a sense thus, this is the enactive perspective, but turned both inwards and (further) outwards. Consciousness involves signal detection on the mind; the mind is the brain's (non-conceptual, implicit) theory about itself. I illustrate these ideas through neural network models that simulate the relationships between performance and awareness in different tasks.

  20. Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity.

    Science.gov (United States)

    Pecevski, Dejan; Maass, Wolfgang

    2016-01-01

    Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such a neural network could acquire the necessary information for that from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for that: probabilistic associations between neurons that represent through their firing current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference.

  1. Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity

    Science.gov (United States)

    Pecevski, Dejan

    2016-01-01

    Abstract Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such neural network could acquire the necessary information for that from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for that: probabilistic associations between neurons that represent through their firing current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference. PMID:27419214
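
    A minimal pair-based STDP kernel of the kind underlying such models can be sketched as follows. This is a standard textbook form, not the paper's full model: the amplitudes and time constants are illustrative, and the intrinsic-plasticity component is not modeled here.

```python
import math

# Pair-based STDP: the weight change for one pre/post spike pair depends
# on the timing difference dt = t_post - t_pre.
def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    if dt_ms > 0:    # pre before post: causal pairing -> potentiation
        return a_plus * math.exp(-dt_ms / tau_plus)
    else:            # post before pre: acausal pairing -> depression
        return -a_minus * math.exp(dt_ms / tau_minus)

dw_causal = stdp_dw(10.0)    # pre leads post by 10 ms: weight increases
dw_acausal = stdp_dw(-10.0)  # post leads pre by 10 ms: weight decreases
```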

  2. Cortical plasticity associated with Braille learning.

    Science.gov (United States)

    Hamilton, R H; Pascual-Leone, A

    1998-05-01

    Blind subjects who learn to read Braille must acquire the ability to extract spatial information from subtle tactile stimuli. In order to accomplish this, neuroplastic changes appear to take place. During Braille learning, the sensorimotor cortical area devoted to the representation of the reading finger enlarges. This enlargement follows a two-step process that can be demonstrated with transcranial magnetic stimulation mapping and suggests initial unmasking of existing connections and eventual establishment of more stable structural changes. In addition, Braille learning appears to be associated with the recruitment of parts of the occipital, formerly 'visual', cortex (V1 and V2) for tactile information processing. In blind, proficient Braille readers, the occipital cortex can be shown not only to be associated with tactile Braille reading but also to be critical for reading accuracy. Recent studies suggest the possibility of applying non-invasive neurophysiological techniques to guide and improve functional outcomes of these plastic changes. Such interventions might provide a means of accelerating functional adjustment to blindness.

  3. Online neural monitoring of statistical learning.

    Science.gov (United States)

    Batterink, Laura J; Paller, Ken A

    2017-05-01

    The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the reaction time (RT) task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
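
    The entrainment measure described above can be sketched as a frequency-tagging analysis: compare spectral power at the trisyllabic-word rate with power at the syllable rate. The sampling rate, frequencies, and synthetic signal below are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np

fs = 100.0                       # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)     # 60 s of signal
f_syll, f_word = 3.3, 1.1        # syllables at 3.3 Hz -> trisyllabic words at 1.1 Hz

rng = np.random.default_rng(1)
eeg = (0.5 * np.sin(2 * np.pi * f_syll * t)    # syllable-rate response
       + 1.0 * np.sin(2 * np.pi * f_word * t)  # word-rate response (signature of learning)
       + rng.normal(0, 1.0, t.size))           # background noise

power = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def peak_power(f):
    """Power in the frequency bin closest to f."""
    return power[np.argmin(np.abs(freqs - f))]

# Entrainment index: word-rate power relative to syllable-rate power.
word_syllable_ratio = peak_power(f_word) / peak_power(f_syll)
```

    In the structured stream this ratio should grow with exposure as syllables are bound into word-level composites; in the random stream no word-rate peak emerges.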

  4. Structural Plasticity Denoises Responses and Improves Learning Speed

    Directory of Open Access Journals (Sweden)

    Robin Spiess

    2016-09-01

    Full Text Available Despite an abundance of computational models for learning of synaptic weights, there has been relatively little research on structural plasticity, i.e., the creation and elimination of synapses. In particular, it is not clear how structural plasticity works in concert with spike-timing-dependent plasticity (STDP) and what advantages their combination offers. Here we present a fairly large-scale functional model that uses leaky integrate-and-fire neurons, STDP, homeostasis, recurrent connections, and structural plasticity to learn the input encoding, the relation between inputs, and to infer missing inputs. Using this model, we compare the error and the amount of noise in the network's responses with and without structural plasticity, and the influence of structural plasticity on the learning speed of the network. Using structural plasticity during learning shows good results for learning the representation of input values, i.e., structural plasticity strongly reduces the noise of the response by preventing spikes with a high error. For inferring missing inputs we see similar results, with responses having less noise if the network was trained using structural plasticity. Additionally, using structural plasticity with pruning significantly decreased the time to learn weights suitable for inference. Presumably, this is due to the clearer signal containing fewer spikes that misrepresent the desired value. Therefore, this work shows that structural plasticity is not only able to improve upon the performance of STDP without structural plasticity but also speeds up learning. Additionally, it addresses the practical problem of limited resources for connectivity, which is apparent not only in the mammalian neocortex but also in computer hardware or neuromorphic (brain-inspired) hardware, by efficiently pruning synapses without losing performance.

  5. Neural Plasticity Is Involved in Physiological Sleep, Depressive Sleep Disturbances, and Antidepressant Treatments

    Directory of Open Access Journals (Sweden)

    Meng-Qi Zhang

    2017-01-01

    Full Text Available Depression, which is characterized by a pervasive and persistent low mood and anhedonia, greatly impacts patients, their families, and society. The associated and recurring sleep disturbances further reduce patients' quality of life. However, therapeutic sleep deprivation has been regarded as a rapid and robust antidepressant treatment for several decades, which suggests a complicated role for sleep in the development of depression. Changes in neural plasticity are observed during physiological sleep, therapeutic sleep deprivation, and depression. This correlation might help us better understand the mechanism underlying the development of depression and the role of sleep. In this review, we first introduce the structure of sleep and the neural plasticity facilitated by physiological sleep. Then, we introduce sleep disturbances and changes in plasticity in patients with depression. Finally, the effects and mechanisms of antidepressants and therapeutic sleep deprivation on neural plasticity are discussed.

  6. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  7. A model of human motor sequence learning explains facilitation and interference effects based on spike-timing dependent plasticity.

    Directory of Open Access Journals (Sweden)

    Quan Wang

    2017-08-01

    Full Text Available The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated, as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing-dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments, we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that

  8. [Involvement of aquaporin-4 in synaptic plasticity, learning and memory].

    Science.gov (United States)

    Wu, Xin; Gao, Jian-Feng

    2017-06-25

    Aquaporin-4 (AQP-4) is the predominant water channel in the central nervous system (CNS) and is primarily expressed in astrocytes. Astrocytes have generally been believed to play important roles in regulating synaptic plasticity and information processing. However, the role of AQP-4 in regulating synaptic plasticity, learning and memory, and cognitive function is only beginning to be investigated. It is well known that synaptic plasticity is the prime candidate for mediating learning and memory. Long-term potentiation (LTP) and long-term depression (LTD) are two forms of synaptic plasticity, and they share some but not all properties and mechanisms. The hippocampus is a part of the limbic system that is particularly important in the regulation of learning and memory. This article reviews research progress on the function of AQP-4 in synaptic plasticity, learning, and memory, and proposes a possible role for AQP-4 as a new target in the treatment of cognitive dysfunction.

  9. Learning drifting concepts with neural networks

    NARCIS (Netherlands)

    Biehl, Michael; Schwarze, Holm

    1993-01-01

    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using

  10. Short-term plasticity as a neural mechanism supporting memory and attentional functions.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Andermann, Mark L; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2011-11-08

    Based on behavioral studies, several relatively distinct perceptual and cognitive functions have been defined in cognitive psychology such as sensory memory, short-term memory, and selective attention. Here, we review evidence suggesting that some of these functions may be supported by shared underlying neuronal mechanisms. Specifically, we present, based on an integrative review of the literature, a hypothetical model wherein short-term plasticity, in the form of transient center-excitatory and surround-inhibitory modulations, constitutes a generic processing principle that supports sensory memory, short-term memory, involuntary attention, selective attention, and perceptual learning. In our model, the size and complexity of receptive fields/level of abstraction of neural representations, as well as the length of temporal receptive windows, increase as one steps up the cortical hierarchy. Consequently, the type of input (bottom-up vs. top-down) and the level of cortical hierarchy that the inputs target determine whether short-term plasticity supports purely sensory vs. semantic short-term memory or attentional functions. Furthermore, we suggest that rather than discrete memory systems, there are continuums of memory representations from short-lived sensory ones to more abstract longer-duration representations, such as those tapped by behavioral studies of short-term memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. A learning algorithm for oscillatory cellular neural networks.

    Science.gov (United States)

    Ho, C. Y.; Kurokawa, H.

    1999-07-01

    We present a cellular type oscillatory neural network for temporal segregation of stationary input patterns. The model comprises an array of locally connected neural oscillators with connections limited to a 4-connected neighborhood. The architecture is reminiscent of the well-known cellular neural network that consists of local connection for feature extraction. By means of a novel learning rule and an initialization scheme, global synchronization can be accomplished without incurring any erroneous synchrony among uncorrelated objects. Each oscillator comprises two mutually coupled neurons, and neurons share a piecewise-linear activation function characteristic. The dynamics of traditional oscillatory models is simplified by using only one plastic synapse, and the overall complexity for hardware implementation is reduced. Based on the connectedness of image segments, it is shown that global synchronization and desynchronization can be achieved by means of locally connected synapses, and this opens up a tremendous application potential for the proposed architecture. Furthermore, by using special grouping synapses it is demonstrated that temporal segregation of overlapping gray-level and color segments can also be achieved. Finally, simulation results show that the learning rule proposed circumvents the problem of component mismatches, and hence facilitates a large-scale integration.

  12. A novel analytical characterization for short-term plasticity parameters in spiking neural networks.

    Science.gov (United States)

    O'Brien, Michael J; Thibeault, Corey M; Srinivasa, Narayan

    2014-01-01

    Short-term plasticity (STP) is a phenomenon that widely occurs in the neocortex with implications for learning and memory. Based on a widely used STP model, we develop an analytical characterization of the STP parameter space to determine the nature of each synapse (facilitating, depressing, or both) in a spiking neural network based on presynaptic firing rate and the corresponding STP parameters. We demonstrate consistency with previous work by leveraging the power of our characterization to replicate the functional volumes that are integral for the previous network stabilization results. We then use our characterization to predict the precise transitional point from the facilitating regime to the depressing regime in a simulated synapse, suggesting in vitro experiments to verify the underlying STP model. We conclude the work by integrating our characterization into a framework for finding suitable STP parameters for self-sustaining random, asynchronous activity in a prescribed recurrent spiking neural network. The systematic process resulting from our analytical characterization improves the success rate of finding the requisite parameters for such networks by three orders of magnitude over a random search.
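
    The "widely used STP model" referenced here is commonly the Tsodyks-Markram model. A mean-field sketch of its steady-state synaptic efficacy under a constant presynaptic rate makes the facilitating-to-depressing transition the paper characterizes easy to explore (parameter values are illustrative, not the authors'):

```python
def stp_efficacy(rate, U, tau_f, tau_d):
    """Steady-state efficacy u*x of a Tsodyks-Markram synapse driven at a
    constant presynaptic rate (Hz), in the mean-field approximation.
    u: facilitation variable, x: fraction of available resources."""
    u = U * (1.0 + rate * tau_f) / (1.0 + U * rate * tau_f)  # facilitation
    x = 1.0 / (1.0 + u * rate * tau_d)                       # depression
    return u * x

# A synapse with small U and slow facilitation: efficacy first rises with
# rate (facilitating regime), then falls (depressing regime).
low, mid, high = (stp_efficacy(r, U=0.1, tau_f=1.0, tau_d=0.1)
                  for r in (1.0, 10.0, 100.0))
```

    The rate at which `mid`-like efficacy peaks marks the transitional point between the facilitating and depressing regimes that the paper predicts analytically.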

  13. The super-Turing computational power of plastic recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed in any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  14. On the Role of Neurogenesis and Neural Plasticity in the Evolution of Animal Personalities and Stress Coping Styles

    DEFF Research Database (Denmark)

    Overli, Oyvind; Sorensen, Christina

    2016-01-01

    ...and neurogenesis have received recent attention. This work reveals that brain cell proliferation and neurogenesis are associated with heritable variation in stress coping style, and they are also differentially affected by short- and long-term stress in a biphasic manner. Routine-dependent and inflexible behavior... ...are conserved throughout the vertebrate subphylum, including factors affecting perception, learning, and memory of stimuli and events. Here we review conserved aspects of the contribution of neurogenesis and other aspects of neural plasticity to stress coping. In teleost fish, brain cell proliferation...

  15. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.

    Science.gov (United States)

    Walter, Florian; Röhrbein, Florian; Knoll, Alois

    2015-12-01

    The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview of selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Learning of N-layers neural network

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2005-01-01

    Full Text Available In the last decade we can observe an increasing number of applications based on Artificial Intelligence that are designed to solve problems from different areas of human activity. The reason why there is so much interest in these technologies is that classical solution methods either do not exist or are not sufficiently robust. They are often used in applications like Business Intelligence that make it possible to obtain useful information for high-quality decision-making and to increase competitive advantage. One of the most widespread tools of Artificial Intelligence is the artificial neural network. Its great advantage is relative simplicity and the possibility of self-learning based on a set of pattern situations. The most commonly used algorithm for the learning phase is back-propagation of error (BPE). The basis of BPE is the minimization of an error function representing the sum of squared errors on the outputs of the neural net, over all patterns of the learning set. However, when performing BPE for the first time, one finds that the handling of the learning factor must be completed by a suitable method. The stability of the learning process and the rate of convergence depend on the selected method. In the article two functions are derived: one for managing the learning process when the error function value is relatively large, and a second for when the value of the error function approaches the global minimum. The aim of the article is to introduce the BPE algorithm in compact matrix form for multilayer neural networks, to derive the learning-factor handling method and to present the results.
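
    As a hedged illustration (not the article's exact derivation), BPE in matrix form with a learning-factor management rule can be sketched as below; the halve-on-error-increase heuristic stands in for the two management functions the article derives:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-like patterns

W1, b1 = rng.normal(0.0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 0.5, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

eta, errs = 0.5, []
for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)                    # network outputs
    errs.append(float(np.mean((out - y) ** 2)))   # squared-error criterion
    if len(errs) > 1:                             # learning-factor handling:
        eta = eta * 0.5 if errs[-1] > errs[-2] else min(eta * 1.05, 1.0)
    d_out = (out - y) * out * (1.0 - out)         # output-layer delta
    d_h = (d_out @ W2.T) * h * (1.0 - h)          # back-propagated hidden delta
    W2 -= eta * (h.T @ d_out) / len(X); b2 -= eta * d_out.mean(axis=0)
    W1 -= eta * (X.T @ d_h) / len(X);   b1 -= eta * d_h.mean(axis=0)
```

    Shrinking the learning factor when the error grows, and cautiously expanding it otherwise, is one simple way to keep the learning process stable while retaining a usable rate of convergence.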

  17. Learning-Related Changes in Adolescents' Neural Networks during Hypothesis-Generating and Hypothesis-Understanding Training

    Science.gov (United States)

    Lee, Jun-Ki; Kwon, Yongju

    2012-01-01

    Fourteen science high school students participated in this study, which investigated neural-network plasticity associated with hypothesis-generating and hypothesis-understanding in learning. The students were divided into two groups and participated in either hypothesis-generating or hypothesis-understanding type learning programs, which were…

  18. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.

    Science.gov (United States)

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, rules that have a theoretical basis and yet can be considered biologically relevant are still lacking. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one of which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other one relying on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.
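
    Loosely in the spirit of the INST/FILT contrast (an illustrative reconstruction, not the authors' exact formulation): both rules correlate an error signal (target minus actual output spikes) with a presynaptic trace, but the FILT-like variant first smooths the error with an exponential filter:

```python
import numpy as np

def exp_filter(spikes, tau, dt=1.0):
    """Causal exponential filter of a 0/1 spike train."""
    out, acc = np.empty_like(spikes, dtype=float), 0.0
    decay = np.exp(-dt / tau)
    for i, s in enumerate(spikes):
        acc = acc * decay + s
        out[i] = acc
    return out

rng = np.random.default_rng(1)
T = 200
pre = (rng.random(T) < 0.05).astype(float)    # presynaptic spike train
target = np.zeros(T); target[[50, 120]] = 1.0 # desired output spike times
actual = np.zeros(T); actual[[60, 120]] = 1.0 # network's output spike times

eta, tau = 0.1, 10.0
pre_trace = exp_filter(pre, tau)
# INST-like: the raw, instantaneous spike error drives each update
dw_inst = eta * np.sum((target - actual) * pre_trace)
# FILT-like: the error is smoothed first, giving gentler, graded updates
err_filt = exp_filter(target, tau) - exp_filter(actual, tau)
dw_filt = eta * np.sum(err_filt * pre_trace)
```

    With the filtered error, a near-miss (a spike at 60 instead of 50) contributes a graded correction spread over time rather than two opposing delta-like pulses.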

  19. Temporal-pattern learning in neural models

    CERN Document Server

    Genís, Carme Torras

    1985-01-01

    While the ability of animals to learn rhythms is an unquestionable fact, the underlying neurophysiological mechanisms are still no more than conjectures. This monograph explores the requirements of such mechanisms, reviews those previously proposed and postulates a new one based on a direct electric coding of stimulation frequencies. Experimental support for the option taken is provided both at the single neuron and neural network levels. More specifically, the material presented divides naturally into four parts: a description of the experimental and theoretical framework where this work becomes meaningful (Chapter 2), a detailed specification of the pacemaker neuron model proposed together with its validation through simulation (Chapter 3), an analytic study of the behavior of this model when submitted to rhythmic stimulation (Chapter 4) and a description of the neural network model proposed for learning, together with an analysis of the simulation results obtained when varying several factors r...

  20. Learning in Neural Networks: VLSI Implementation Strategies

    Science.gov (United States)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  1. Multi-layer network utilizing rewarded spike time dependent plasticity to learn a foraging task.

    Directory of Open Access Journals (Sweden)

    Pavel Sanda

    2017-09-01

    Full Text Available Neural networks with a single plastic layer employing reward modulated spike time dependent plasticity (STDP are capable of learning simple foraging tasks. Here we demonstrate advanced pattern discrimination and continuous learning in a network of spiking neurons with multiple plastic layers. The network utilized both reward modulated and non-reward modulated STDP and implemented multiple mechanisms for homeostatic regulation of synaptic efficacy, including heterosynaptic plasticity, gain control, output balancing, activity normalization of rewarded STDP and hard limits on synaptic strength. We found that addition of a hidden layer of neurons employing non-rewarded STDP created neurons that responded to the specific combinations of inputs and thus performed basic classification of the input patterns. When combined with a following layer of neurons implementing rewarded STDP, the network was able to learn, despite the absence of labeled training data, discrimination between rewarding patterns and the patterns designated as punishing. Synaptic noise allowed for trial-and-error learning that helped to identify the goal-oriented strategies which were effective in task solving. The study predicts a critical set of properties of the spiking neuronal network with STDP that was sufficient to solve a complex foraging task involving pattern classification and decision making.
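
    Reward-modulated STDP is commonly implemented with an eligibility trace: STDP events are stored at the synapse and converted into a weight change only when a reward signal arrives. A minimal single-synapse sketch of that mechanism (all constants are illustrative, not the paper's):

```python
import math

def stdp_window(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Antisymmetric STDP: potentiation when pre precedes post (dt > 0 ms)."""
    return a_plus * math.exp(-dt / tau) if dt > 0 else -a_minus * math.exp(dt / tau)

w, elig = 0.5, 0.0
tau_e = 200.0                              # eligibility-trace time constant (ms)
for t in range(1000):                      # simulate in 1 ms steps
    elig *= math.exp(-1.0 / tau_e)         # trace decays between events
    if t % 50 == 0:                        # a pre spike followed 5 ms later by a post spike
        elig += stdp_window(5.0)           # plasticity is stored, not yet applied
    reward = 1.0 if t == 500 else 0.0      # a single delayed reward at t = 500 ms
    w += reward * elig                     # reward gates the stored plasticity
    w = min(max(w, 0.0), 1.0)              # hard limits on synaptic strength
```

    The hard clamp on `w` is one of the homeostatic safeguards the abstract lists; without some such mechanism, repeatedly rewarded synapses grow without bound.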

  2. Motor learning interference is proportional to occlusion of LTP-like plasticity.

    Science.gov (United States)

    Cantarero, Gabriela; Tang, Byron; O'Malley, Rebecca; Salas, Rachel; Celnik, Pablo

    2013-03-13

    Learning interference occurs when learning something new causes forgetting of an older memory (retrograde interference) or when learning a new task disrupts learning of a second subsequent task (anterograde interference). This phenomenon, described in cognitive, sensory, and motor domains, limits our ability to learn multiple tasks in close succession. It has been suggested that the source of interference is competition of neural resources, although the neuronal mechanisms are unknown. Learning induces long-term potentiation (LTP), which can ultimately limit the ability to induce further LTP, a phenomenon known as occlusion. In humans we quantified the magnitude of occlusion of anodal transcranial direct current stimulation-induced increased excitability after learning a skill task as an index of the amount of LTP-like plasticity used. We found that retention of a newly acquired skill, as reflected by performance in the second day of practice, is proportional to the magnitude of occlusion. Moreover, the degree of behavioral interference was correlated with the magnitude of occlusion. Individuals with larger occlusion after learning the first skill were (1) more resilient to retrograde interference and (2) experienced larger anterograde interference when training a second task, as expressed by decreased performance of the learned skill in the second day of practice. This effect was not observed if sufficient time elapsed between training the two skills and LTP-like occlusion was not present. These findings suggest competition of LTP-like plasticity is a factor that limits the ability to remember multiple tasks trained in close succession.

  3. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung

    2018-02-01

    Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  4. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning.

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Kwan Chan, Pak; Tin, Chung

    2018-02-01

    Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  5. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Full Text Available Most of the modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal and background and nowadays can be widely and successfully implemented as part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  6. The Radical Plasticity Thesis: How the Brain Learns to be Conscious

    Science.gov (United States)

    Cleeremans, Axel

    2011-01-01

    In this paper, I explore the idea that consciousness is something that the brain learns to do rather than an intrinsic property of certain neural states and not others. Starting from the idea that neural activity is inherently unconscious, the question thus becomes: How does the brain learn to be conscious? I suggest that consciousness arises as a result of the brain's continuous attempts at predicting not only the consequences of its actions on the world and on other agents, but also the consequences of activity in one cerebral region on activity in other regions. By this account, the brain continuously and unconsciously learns to redescribe its own activity to itself, so developing systems of meta-representations that characterize and qualify the target first-order representations. Such learned redescriptions, enriched by the emotional value associated with them, form the basis of conscious experience. Learning and plasticity are thus central to consciousness, to the extent that experiences only occur in experiencers that have learned to know they possess certain first-order states and that have learned to care more about certain states than about others. This is what I call the “Radical Plasticity Thesis.” In a sense thus, this is the enactive perspective, but turned both inwards and (further) outwards. Consciousness involves “signal detection on the mind”; the conscious mind is the brain's (non-conceptual, implicit) theory about itself. I illustrate these ideas through neural network models that simulate the relationships between performance and awareness in different tasks. PMID:21687455

  7. Enhancing neural activity to drive respiratory plasticity following cervical spinal cord injury

    Science.gov (United States)

    Hormigo, Kristiina M.; Zholudeva, Lyandysha V.; Spruance, Victoria M.; Marchenko, Vitaliy; Cote, Marie-Pascale; Vinit, Stephane; Giszter, Simon; Bezdudnaya, Tatiana; Lane, Michael A.

    2016-01-01

    Cervical spinal cord injury (SCI) results in permanent life-altering sensorimotor deficits, among which impaired breathing is one of the most devastating and life-threatening. While clinical and experimental research has revealed that some spontaneous respiratory improvement (functional plasticity) can occur post-SCI, the extent of the recovery is limited and significant deficits persist. Thus, increasing effort is being made to develop therapies that harness and enhance this neuroplastic potential to optimize long-term recovery of breathing in injured individuals. One strategy with demonstrated therapeutic potential is the use of treatments that increase neural and muscular activity (e.g. locomotor training, neural and muscular stimulation) and promote plasticity. With a focus on respiratory function post-SCI, this review will discuss advances in the use of neural interfacing strategies and activity-based treatments, and highlights some recent results from our own research. PMID:27582085

  8. Optical implementation of neural learning algorithms based on cross-gain modulation in a semiconductor optical amplifier

    Science.gov (United States)

    Li, Qiang; Wang, Zhi; Le, Yansi; Sun, Chonghui; Song, Xiaojia; Wu, Chongqing

    2016-10-01

    Neuromorphic engineering has a wide range of applications in the fields of machine learning, pattern recognition, adaptive control, etc. Photonics, characterized by its high speed, wide bandwidth, low power consumption and massive parallelism, is an ideal way to realize ultrafast spiking neural networks (SNNs). Synaptic plasticity is believed to be critical for learning, memory and development in neural circuits. Experimental results have shown that changes of synapse are highly dependent on the relative timing of pre- and postsynaptic spikes. Synaptic plasticity in which presynaptic spikes preceding postsynaptic spikes result in strengthening, while the opposite timing results in weakening, is called the antisymmetric spike-timing-dependent plasticity (STDP) learning rule. Synaptic plasticity that has the opposite effect under the same conditions is called the antisymmetric anti-STDP learning rule. We proposed and experimentally demonstrated an optical implementation of neural learning algorithms that can achieve both the antisymmetric STDP and anti-STDP learning rules, based on cross-gain modulation (XGM) within a single semiconductor optical amplifier (SOA). The width and height of the potentiation and depression windows can be controlled by adjusting the injection current of the SOA, to mimic the biological antisymmetric STDP and anti-STDP learning rules more realistically. As the injection current increases, the width of the depression and potentiation windows decreases and their height increases, due to the decreasing recovery time and increasing gain under a stronger injection current. Based on the demonstrated optical STDP circuit, ultrafast learning in optical SNNs can be realized.
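
    For reference, the two learning windows being mimicked optically can be written down directly. In this sketch the amplitude `a` and time constant `tau` stand in for the window height and width that the paper controls via the SOA injection current (values are illustrative):

```python
import math

def stdp(dt, a=0.01, tau=20.0):
    """Antisymmetric STDP window: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses; dt in ms."""
    return a * math.exp(-abs(dt) / tau) * (1.0 if dt > 0 else -1.0)

def anti_stdp(dt, a=0.01, tau=20.0):
    """Antisymmetric anti-STDP window: the same shape with the sign flipped."""
    return -stdp(dt, a, tau)
```

    Increasing `a` while decreasing `tau` reproduces, in this toy form, the taller-but-narrower windows the paper observes under stronger injection current.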

  9. Learning Structure of Sensory Inputs with Synaptic Plasticity Leads to Interference

    Directory of Open Access Journals (Sweden)

    Joseph eChrol-Cannon

    2015-08-01

    Full Text Available Synaptic plasticity is often explored as a form of unsupervised adaptation in cortical microcircuits to learn the structure of complex sensory inputs and thereby improve performance of classification and prediction. The question of whether the specific structure of the input patterns is encoded in the structure of neural networks has been largely neglected. Existing studies that have analyzed input-specific structural adaptation have used simplified, synthetic inputs, in contrast to the complex and noisy patterns found in real-world sensory data. In this work, input-specific structural changes are analyzed for three empirically derived models of plasticity applied to three temporal sensory classification tasks that include complex, real-world visual and auditory data. Two forms of spike-timing-dependent plasticity (STDP) and the Bienenstock-Cooper-Munro (BCM) plasticity rule are used to adapt the recurrent network structure during the training process before performance is tested on the pattern recognition tasks. It is shown that synaptic adaptation is highly sensitive to specific classes of input pattern. However, plasticity does not improve the performance on sensory pattern recognition tasks, partly due to synaptic interference between consecutively presented input samples. The changes in synaptic strength produced by one stimulus are reversed by the presentation of another, thus largely preventing input-specific synaptic changes from being retained in the structure of the network. To solve the problem of interference, we suggest that models of plasticity be extended to restrict neural activity and synaptic modification to a subset of the neural circuit, which is increasingly found to be the case in experimental neuroscience.
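
    The BCM rule used alongside STDP here changes a weight in proportion to presynaptic activity times a postsynaptic term that switches sign at a sliding modification threshold. A minimal rate-based sketch (all constants are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.uniform(0.1, 0.5, 10)          # synaptic weights onto one neuron
theta, eta, tau_theta = 1.0, 0.01, 100.0

for step in range(500):
    x = rng.random(10)                 # presynaptic activity vector
    y = float(w @ x)                   # linear postsynaptic response
    w += eta * x * y * (y - theta)     # BCM: LTP above threshold, LTD below
    w = np.clip(w, 0.0, 5.0)           # keep weights in a bounded range
    theta += (y ** 2 - theta) / tau_theta  # sliding threshold tracks <y^2>
```

    Because the threshold `theta` chases the recent response statistics, a new stimulus class can shift it and undo the weight changes induced by the previous class, which is one way to picture the interference effect the paper reports.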

  10. Sensorimotor learning biases choice behavior: a learning neural field model for decision making.

    Directory of Open Access Journals (Sweden)

    Christian Klaes

    Full Text Available According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action
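
    The reward-driven learning at the heart of the model can be illustrated with a stripped-down stimulus-action association learner (this is not the paper's neural field implementation; all names and constants are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n_stim, n_act = 4, 4
W = np.full((n_stim, n_act), 0.25)       # stimulus-to-action association weights
mapping = np.array([2, 0, 3, 1])         # hidden rewarded mapping to be learned
eta = 0.2

for trial in range(600):
    s = trial % n_stim                   # present stimuli cyclically
    p = W[s] / W[s].sum()                # choice probabilities from weights
    a = rng.choice(n_act, p=p)           # sample an action (value-based choice)
    r = 1.0 if a == mapping[s] else 0.0  # reward only the correct association
    W[s, a] += eta * (r - W[s, a])       # reward-driven update of the chosen link
    W[s] = np.clip(W[s], 0.01, None)     # keep a floor so exploration never dies
```

    As the rewarded associations strengthen, choices become biased toward previously reinforced actions, which is the kind of learning-history influence on decision making the model is built to capture.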

  11. Learning about the Types of Plastic Wastes: Effectiveness of Inquiry Learning Strategies

    Science.gov (United States)

    So, Wing-Mui Winnie; Cheng, Nga-Yee Irene; Chow, Cheuk-Fai; Zhan, Ying

    2016-01-01

    This study aims to examine the impacts of the inquiry learning strategies employed in a "Plastic Education Project" on primary students' knowledge, beliefs and intended behaviour in Hong Kong. Student questionnaires and a test on plastic types were adopted for data collection. Results reveal that the inquiry learning strategies…

  12. Temporal entrainment of cognitive functions: musical mnemonics induce brain plasticity and oscillatory synchrony in neural networks underlying memory.

    Science.gov (United States)

    Thaut, Michael H; Peterson, David A; McIntosh, Gerald C

    2005-12-01

    In a series of experiments, we have begun to investigate the effect of music as a mnemonic device on learning and memory and the underlying plasticity of oscillatory neural networks. We used verbal learning and memory tests (standardized word lists, AVLT) in conjunction with electroencephalographic analysis to determine differences between verbal learning in either a spoken or musical (verbal materials as song lyrics) modality. In healthy adults, learning in both the spoken and music condition was associated with significant increases in oscillatory synchrony across all frequency bands. A significant difference between the spoken and music condition emerged in the cortical topography of the learning-related synchronization. When using EEG measures as predictors during learning for subsequent successful memory recall, significantly increased coherence (phase-locked synchronization) within and between oscillatory brain networks emerged for music in alpha and gamma bands. In a similar study with multiple sclerosis patients, superior learning and memory was shown in the music condition when controlled for word order recall, and subjects were instructed to sing back the word lists. Also, the music condition was associated with a significant power increase in the low-alpha band in bilateral frontal networks, indicating increased neuronal synchronization. Musical learning may access compensatory pathways for memory functions during compromised PFC functions associated with learning and recall. Music learning may also confer a neurophysiological advantage through the stronger synchronization of the neuronal cell assemblies underlying verbal learning and memory. Collectively our data provide evidence that melodic-rhythmic templates as temporal structures in music may drive internal rhythm formation in recurrent cortical networks involved in learning and memory.

  13. Emergence of Slow Collective Oscillations in Neural Networks with Spike-Timing Dependent Plasticity

    Science.gov (United States)

    Mikkelsen, Kaare; Imparato, Alberto; Torcini, Alessandro

    2013-05-01

    The collective dynamics of excitatory pulse coupled neurons with spike-timing dependent plasticity is studied. The introduction of spike-timing dependent plasticity induces persistent irregular oscillations between strongly and weakly synchronized states, reminiscent of brain activity during slow-wave sleep. We explain the oscillations by a mechanism, the Sisyphus Effect, caused by a continuous feedback between the synaptic adjustments and the coherence in the neural firing. Due to this effect, the synaptic weights have oscillating equilibrium values, and this prevents the system from relaxing into a stationary macroscopic state.
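For reference, the pair-based spike-timing dependent plasticity window used in models of this kind can be sketched as follows (the amplitudes and time constants are illustrative textbook values, not parameters from this paper):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt_ms = t_post - t_pre.
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0)
    depresses; both decay exponentially with the spike-time difference."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0
```

Applying this rule to every pre/post spike pair couples the synaptic weights to the network's firing coherence, which is the feedback loop behind the "Sisyphus Effect" described above.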

  14. Cognitive-affective neural plasticity following active-controlled mindfulness intervention

    DEFF Research Database (Denmark)

    Allen, Micah Galen

    Mindfulness meditation is a set of attention-based, regulatory and self-inquiry training regimes. Although the impact of mindfulness meditation training (MT) on self-regulation is well established, the neural mechanisms supporting such plasticity are poorly understood. MT is thought to act through...... prefrontal cortex (mPFC), and right anterior insula during negative valence processing. Our findings highlight the importance of active control in MT research, indicate unique neural mechanisms for progressive stages of mindfulness training, and suggest that optimal application of MT may differ depending...

  15. Incremental learning of perceptual and conceptual representations and the puzzle of neural repetition suppression.

    Science.gov (United States)

    Gotts, Stephen J

    2016-08-01

    Incremental learning models of long-term perceptual and conceptual knowledge hold that neural representations are gradually acquired over many individual experiences via Hebbian-like activity-dependent synaptic plasticity across cortical connections of the brain. In such models, variation in task relevance of information, anatomic constraints, and the statistics of sensory inputs and motor outputs lead to qualitative alterations in the nature of representations that are acquired. Here, the proposal that behavioral repetition priming and neural repetition suppression effects are empirical markers of incremental learning in the cortex is discussed, and research results that both support and challenge this position are reviewed. Discussion is focused on a recent fMRI-adaptation study from our laboratory that shows decoupling of experience-dependent changes in neural tuning, priming, and repetition suppression, with representational changes that appear to work counter to the explicit task demands. Finally, critical experiments that may help to clarify and resolve current challenges are outlined.

  16. Upper Limb Immobilisation: A Neural Plasticity Model with Relevance to Poststroke Motor Rehabilitation

    OpenAIRE

    Furlan, Leonardo; Conforto, Adriana Bastos; Cohen, Leonardo G.; Sterr, Annette

    2016-01-01

    Advances in our understanding of the neural plasticity that occurs after hemiparetic stroke have contributed to the formulation of theories of poststroke motor recovery. These theories, in turn, have underpinned contemporary motor rehabilitation strategies for treating motor deficits after stroke, such as upper limb hemiparesis. However, a relative drawback has been that, in general, these strategies are most compatible with the recovery profiles of relatively high-functioning stroke survivor...

  17. Plastics in Our Environment: A Jigsaw Learning Activity

    Science.gov (United States)

    Hampton, Elaine; Wallace, Mary Ann; Lee, Wen-Yee

    2009-01-01

    In this lesson, a ready-to-teach cooperative reading activity, students learn about the effects of plastics in our environment, specifically that certain petrochemicals act as artificial estrogens and impact hormonal activities. Much of the content in this lesson was synthesized from recent medical research about the impact of xenoestrogens and…

  18. Cerebellar motor learning: when is cortical plasticity not enough?

    Directory of Open Access Journals (Sweden)

    John Porrill

    2007-10-01

    Classical Marr-Albus theories of cerebellar learning employ only cortical sites of plasticity. However, tests of these theories using adaptive calibration of the vestibulo-ocular reflex (VOR) have indicated plasticity in both cerebellar cortex and the brainstem. To resolve this long-standing conflict, we attempted to identify the computational role of the brainstem site, by using an adaptive filter version of the cerebellar microcircuit to model VOR calibration for changes in the oculomotor plant. With only cortical plasticity, introducing a realistic delay in the retinal-slip error signal of 100 ms prevented learning at frequencies higher than 2.5 Hz, although the VOR itself is accurate up to at least 25 Hz. However, the introduction of an additional brainstem site of plasticity, driven by the correlation between cerebellar and vestibular inputs, overcame the 2.5 Hz limitation and allowed learning of accurate high-frequency gains. This "cortex-first" learning mechanism is consistent with a wide variety of evidence concerning the role of the flocculus in VOR calibration, and complements rather than replaces the previously proposed "brainstem-first" mechanism that operates when ocular tracking mechanisms are effective. These results (i) describe a process whereby information originally learnt in one area of the brain (cerebellar cortex) can be transferred and expressed in another (brainstem), and (ii) indicate for the first time why a brainstem site of plasticity is actually required by Marr-Albus type models when high-frequency gains must be learned in the presence of error delay.
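A toy simulation, under stated assumptions, shows why a 100 ms error delay caps learning near 2.5 Hz: with a sinusoidal input, the correlation between the delayed error and the current input scales with cos(2·pi·f·delay), which changes sign at f = 1/(4·delay) = 2.5 Hz. This is an LMS-style single-gain sketch, not the authors' full adaptive-filter VOR model; all parameter values are chosen for the demonstration.

```python
import math

def adapt_gain(freq_hz, delay_s=0.1, lr=5e-4, duration_s=20.0, dt=1e-3, target=2.0):
    """Learn a gain w from a retinal-slip-like error delayed by delay_s."""
    w = 0.0
    n_delay = int(delay_s / dt)
    buf = [0.0] * n_delay                       # ring buffer of past errors
    for k in range(int(duration_s / dt)):
        t = k * dt
        x = math.sin(2 * math.pi * freq_hz * t)  # head-velocity-like input
        e = (target - w) * x                     # instantaneous error
        e_delayed = buf[k % n_delay]             # error from delay_s ago
        buf[k % n_delay] = e
        w += lr * e_delayed * x                  # correlate delayed error with input
    return w

low = adapt_gain(1.0)   # below 2.5 Hz: converges toward the target gain
high = adapt_gain(5.0)  # above 2.5 Hz: anti-correlated error drives it away
```

Below the critical frequency the delayed error is still positively correlated with the input and the gain converges; above it, the correlation flips sign and the same update rule actively un-learns, matching the paper's motivation for an additional brainstem site.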

  19. Neural signals of vicarious extinction learning.

    Science.gov (United States)

    Golkar, Armita; Haaker, Jan; Selbing, Ida; Olsson, Andreas

    2016-10-01

    Social transmission of both threat and safety is ubiquitous, but little is known about the neural circuitry underlying vicarious safety learning. This is surprising given that these processes are critical to flexibly adapt to a changeable environment. To address how the expression of previously learned fears can be modified by the transmission of social information, two conditioned stimuli (CS+s) were paired with shock and a third was not. During extinction, we held constant the amount of direct, non-reinforced, exposure to the CSs (i.e. direct extinction), and critically varied whether another individual, acting as a demonstrator, experienced safety (CS+ vic safety) or aversive reinforcement (CS+ vic reinf). During extinction, ventromedial prefrontal cortex (vmPFC) responses to the CS+ vic reinf increased but decreased to the CS+ vic safety. This pattern of vmPFC activity was reversed during a subsequent fear reinstatement test, suggesting a temporal shift in the involvement of the vmPFC. Moreover, only the CS+ vic reinf association recovered. Our data suggest that vicarious extinction prevents the return of conditioned fear responses, and that this efficacy is reflected by diminished vmPFC involvement during extinction learning. The present findings may have important implications for understanding how social information influences the persistence of fear memories in individuals suffering from emotional disorders. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  20. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

    Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks.  Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms to represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. The concepts of meta-cognition equipped with a self-regulated learning have been known to be the best human learning strategy. In this monograph, the principles of meta-cognition have been introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  1. Dynamic neural networking as a basis for plasticity in the control of heart rate.

    Science.gov (United States)

    Kember, G; Armour, J A; Zamir, M

    2013-01-21

    A model is proposed in which the relationship between individual neurons within a neural network is dynamically changing to the effect of providing a measure of "plasticity" in the control of heart rate. The neural network on which the model is based consists of three populations of neurons residing in the central nervous system, the intrathoracic extracardiac nervous system, and the intrinsic cardiac nervous system. This hierarchy of neural centers is used to challenge the classical view that the control of heart rate, a key clinical index, resides entirely in central neuronal command (spinal cord, medulla oblongata, and higher centers). Our results indicate that dynamic networking allows for the possibility of an interplay among the three populations of neurons to the effect of altering the order of control of heart rate among them. This interplay among the three levels of control allows for different neural pathways for the control of heart rate to emerge under different blood flow demands or disease conditions and, as such, it has significant clinical implications because current understanding and treatment of heart rate anomalies are based largely on a single level of control and on neurons acting in unison as a single entity rather than individually within a (plastically) interconnected network. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Inactivity-induced respiratory plasticity: Protecting the drive to breathe in disorders that reduce respiratory neural activity

    Science.gov (United States)

    Strey, K.A.; Baertsch, N.A.; Baker-Herman, T.L.

    2013-01-01

    Multiple forms of plasticity are activated following reduced respiratory neural activity. For example, in ventilated rats, a central neural apnea elicits a rebound increase in phrenic and hypoglossal burst amplitude upon resumption of respiratory neural activity, forms of plasticity called inactivity-induced phrenic and hypoglossal motor facilitation (iPMF and iHMF), respectively. Here, we provide a conceptual framework for plasticity following reduced respiratory neural activity to guide future investigations. We review mechanisms giving rise to iPMF and iHMF, present new data suggesting that inactivity-induced plasticity is observed in inspiratory intercostals (iIMF), and point out gaps in our knowledge. We then survey conditions relevant to human health characterized by reduced respiratory neural activity and discuss evidence that inactivity-induced plasticity is elicited during these conditions. Understanding the physiological impact and circumstances in which inactivity-induced respiratory plasticity is elicited may yield novel insights into the treatment of disorders characterized by reductions in respiratory neural activity. PMID:23816599

  3. Learning and adaptation: neural and behavioural mechanisms behind behaviour change

    Science.gov (United States)

    Lowe, Robert; Sandamirskaya, Yulia

    2018-01-01

    This special issue presents perspectives on learning and adaptation as they apply to a number of cognitive phenomena including pupil dilation in humans and attention in robots, natural language acquisition and production in embodied agents (robots), human-robot game play and social interaction, neural-dynamic modelling of active perception and neural-dynamic modelling of infant development in the Piagetian A-not-B task. The aim of the special issue, through its contributions, is to highlight some of the critical neural-dynamic and behavioural aspects of learning as it grounds adaptive responses in robotic- and neural-dynamic systems.

  4. Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network.

    Science.gov (United States)

    Del Papa, Bruno; Priesemann, Viola; Triesch, Jochen

    2017-01-01

    Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions - matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model's performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN's spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
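One criticality signature discussed in this record is power-law distributed neuronal avalanches. A minimal sketch of how avalanche sizes are typically extracted from binned population activity (thresholding and summing contiguous active bins); this is the standard analysis convention, not the SORN code itself:

```python
def avalanche_sizes(activity, theta=0):
    """Segment a binned population-activity series into avalanches:
    maximal runs of bins with activity above theta; an avalanche's
    size is the summed activity of its bins."""
    sizes, current = [], 0
    for a in activity:
        if a > theta:
            current += a
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return sizes

print(avalanche_sizes([0, 2, 1, 0, 0, 3, 0, 1, 1, 0]))  # → [3, 3, 2]
```

Criticality analyses then ask whether the resulting size distribution follows a power law, and how its slope shifts at the onset of external input, as the record describes.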

  5. Plastic

    International Nuclear Information System (INIS)

    Jeong Gi Hyeon

    1987-04-01

    This book deals with plastic. It includes an introduction to plastic, the chemistry of high polymers, polymerization, the structure and properties of high polymers, molding, thermoplastics such as polyethylene, polyether, polyamide and polyvinyl acetal, thermosetting plastics like phenolic resins, xylene resins, melamine resin, epoxy resin, alkyd resin and polyurethane resin, new plastics like ionomer and PPS resin, synthetic laminated tape and synthetic wood, mixed materials in plastic, reprocessing of waste plastic, polymer blends, test methods for plastic materials and auxiliary materials for plastic.

  6. Filopodia: A Rapid Structural Plasticity Substrate for Fast Learning

    Directory of Open Access Journals (Sweden)

    Ahmet S. Ozcan

    2017-06-01

    Formation of new synapses between neurons is an essential mechanism for learning and encoding memories. The vast majority of excitatory synapses occur on dendritic spines; therefore, the growth dynamics of spines is strongly related to the plasticity timescales. Especially in the early stages of the developing brain, there is an abundance of long, thin and motile protrusions (i.e., filopodia), which develop in timescales of seconds and minutes. Because of their unique morphology and motility, it has been suggested that filopodia can have a dual role in both spinogenesis and environmental sampling of potential axonal partners. I propose that filopodia can lower the threshold and reduce the time to form new dendritic spines and synapses, providing a substrate for fast learning. Based on this proposition, the functional role of filopodia during brain development is discussed in relation to learning and memory. Specifically, it is hypothesized that the postnatal brain starts with a single-stage memory system with filopodia playing a significant role in rapid structural plasticity along with the stability provided by the mushroom-shaped spines. Following the maturation of the hippocampus, this highly-plastic unitary system transitions to a two-stage memory system, which consists of a plastic temporary store and a long-term stable store. In alignment with these architectural changes, it is posited that after brain maturation, filopodia-based structural plasticity will be preserved in specific areas which are involved in fast learning (e.g., the hippocampus in relation to episodic memory). These propositions aim to introduce a unifying framework for a diversity of phenomena in the brain such as synaptogenesis, pruning and memory consolidation.

  7. Association of contextual cues with morphine reward increases neural and synaptic plasticity in the ventral hippocampus of rats

    NARCIS (Netherlands)

    Alvandi, M.S.; Bourmpoula, M.; Homberg, J.R.; Fathollahi, Y.

    2017-01-01

    Drug addiction is associated with aberrant memory and permanent functional changes in neural circuits. It is known that exposure to drugs like morphine is associated with positive emotional states and reward-related memory. However, the underlying mechanisms in terms of neural plasticity in the

  8. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    Science.gov (United States)

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.

  9. Neurometaplasticity: Glucoallostasis control of plasticity of the neural networks of error commission, detection, and correction modulates neuroplasticity to influence task precision

    Science.gov (United States)

    Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.

    2017-12-01

    The term "metaplasticity" is a recent one, which means plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlies many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity of the neural networks that control error commission, detection and correction. Here we review recent work suggesting that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.

  10. Biologically plausible learning in neural networks: a lesson from bacterial chemotaxis.

    Science.gov (United States)

    Shimansky, Yury P

    2009-12-01

    Learning processes in the brain are usually associated with plastic changes made to optimize the strength of connections between neurons. Although many details related to biophysical mechanisms of synaptic plasticity have been discovered, it is unclear how the concurrent performance of adaptive modifications in a huge number of spatial locations is organized to minimize a given objective function. Since direct experimental observation of even a relatively small subset of such changes is not feasible, computational modeling is an indispensable investigation tool for solving this problem. However, the conventional method of error back-propagation (EBP) employed for optimizing synaptic weights in artificial neural networks is not biologically plausible. This study based on computational experiments demonstrated that such optimization can be performed rather efficiently using the same general method that bacteria employ for moving closer to an attractant or away from a repellent. With regard to neural network optimization, this method consists of regulating the probability of an abrupt change in the direction of synaptic weight modification according to the temporal gradient of the objective function. Neural networks utilizing this method (regulation of modification probability, RMP) can be viewed as analogous to swimming in the multidimensional space of their parameters in the flow of biochemical agents carrying information about the optimality criterion. The efficiency of RMP is comparable to that of EBP, while RMP has several important advantages. Since the biological plausibility of RMP is beyond a reasonable doubt, the RMP concept provides a constructive framework for the experimental analysis of learning in natural neural networks.
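The RMP method described above regulates the probability of an abrupt change in the direction of parameter modification according to the temporal trend of the objective, by analogy with bacterial run-and-tumble chemotaxis. A minimal sketch under stated assumptions: the two-level tumble probability and the toy objective are illustrative simplifications of the graded rule in the paper.

```python
import random

def rmp_minimize(f, x, steps=5000, step_size=0.05, seed=1):
    """Run-and-tumble ('RMP'-style) minimization: keep the current random
    direction while the objective improves; raise the probability of an
    abrupt direction change ('tumble') when it worsens."""
    rng = random.Random(seed)
    n = len(x)
    direction = [rng.choice((-1.0, 1.0)) for _ in range(n)]
    prev = f(x)
    best = prev
    for _ in range(steps):
        x = [xi + step_size * di for xi, di in zip(x, direction)]
        cur = f(x)
        # temporal trend of the objective sets the tumble probability
        p_tumble = 0.9 if cur > prev else 0.05
        if rng.random() < p_tumble:
            direction = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        prev = cur
        best = min(best, cur)
    return x, best

sphere = lambda v: sum(t * t for t in v)  # toy objective (illustrative)
x_end, best = rmp_minimize(sphere, [3.0, -2.0])
```

Note that the rule uses only the scalar objective's temporal gradient, never a per-parameter error gradient, which is what makes it biologically plausible in the sense the record argues.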

  11. Boltzmann learning of parameters in cellular neural networks

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    1992-01-01

    The use of Bayesian methods to design cellular neural networks for signal processing tasks and the Boltzmann machine learning rule for parameter estimation is discussed. The learning rule can be used for models with hidden units, or for completely unsupervised learning. The latter is exemplified...
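The record does not spell the rule out, but the classical Boltzmann machine learning rule it refers to matches clamped (data) and free (model) correlations: dW_ij = lr * (<s_i s_j>_clamped - <s_i s_j>_free). A sketch for a small, fully visible network, computing the model expectations exactly by enumeration rather than by Gibbs sampling; the hidden-unit and unsupervised variants mentioned in the record are omitted.

```python
import itertools
import math

def model_corr(W, n):
    """Exact <s_i s_j> under p(s) ∝ exp(-E(s)), E(s) = -0.5 * sum_ij W_ij s_i s_j,
    by enumerating all 2^n spin states."""
    states = list(itertools.product([-1, 1], repeat=n))
    weights = []
    for s in states:
        E = -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
        weights.append(math.exp(-E))
    Z = sum(weights)
    return [[sum(w * s[i] * s[j] for w, s in zip(weights, states)) / Z
             for j in range(n)] for i in range(n)]

def boltzmann_learn(data, n, lr=0.1, epochs=200):
    """Boltzmann rule: move model correlations toward data correlations."""
    W = [[0.0] * n for _ in range(n)]
    clamped = [[sum(s[i] * s[j] for s in data) / len(data)
                for j in range(n)] for i in range(n)]
    for _ in range(epochs):
        free = model_corr(W, n)
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += lr * (clamped[i][j] - free[i][j])
    return W

# Toy data: units 0 and 1 always agree; unit 2 is independent.
data = [(1, 1, 1), (1, 1, -1), (-1, -1, 1), (-1, -1, -1)]
W = boltzmann_learn(data, 3)
corr = model_corr(W, 3)
```

After training, the learned couplings reproduce the data's pairwise statistics: a strong positive correlation between units 0 and 1, and none involving unit 2.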

  12. Music mnemonics aid verbal memory and induce learning-related brain plasticity in multiple sclerosis.

    Science.gov (United States)

    Thaut, Michael H; Peterson, David A; McIntosh, Gerald C; Hoemberg, Volker

    2014-01-01

    Recent research on music and brain function has suggested that the temporal pattern structure in music and rhythm can enhance cognitive functions. To further elucidate this question specifically for memory, we investigated if a musical template can enhance verbal learning in patients with multiple sclerosis (MS) and if music-assisted learning will also influence short-term, system-level brain plasticity. We measured systems-level brain activity with oscillatory network synchronization during music-assisted learning. Specifically, we measured the spectral power of 128-channel electroencephalogram (EEG) in alpha and beta frequency bands in 54 patients with MS. The study sample was randomly divided into two groups, either hearing a spoken or a musical (sung) presentation of Rey's auditory verbal learning test. We defined the "learning-related synchronization" (LRS) as the percent change in EEG spectral power from the first time the word was presented to the average of the subsequent word encoding trials. LRS differed significantly between the music and the spoken conditions in low alpha and upper beta bands. Patients in the music condition showed overall better word memory and better word order memory and stronger bilateral frontal alpha LRS than patients in the spoken condition. The evidence suggests that a musical mnemonic recruits stronger oscillatory network synchronization in prefrontal areas in MS patients during word learning. It is suggested that the temporal structure implicit in musical stimuli enhances "deep encoding" during verbal learning and sharpens the timing of neural dynamics in brain networks degraded by demyelination in MS.
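The LRS measure defined in this record is a simple percent change. Assuming per-trial spectral power values for a given channel and band, it can be computed as:

```python
def lrs_percent(first_trial_power, later_trial_powers):
    """Learning-related synchronization (LRS) as defined in the record:
    percent change in EEG spectral power from the first presentation
    to the mean of the subsequent encoding trials."""
    later_mean = sum(later_trial_powers) / len(later_trial_powers)
    return 100.0 * (later_mean - first_trial_power) / first_trial_power

print(lrs_percent(10.0, [11.0, 12.0, 13.0]))  # → 20.0
```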

  13. Dynamic learning and memory, synaptic plasticity and neurogenesis: an update

    Czech Academy of Sciences Publication Activity Database

    Stuchlík, Aleš

    2014-01-01

    Roč. 8, APR 1 (2014), s. 106 ISSN 1662-5153 R&D Projects: GA ČR(CZ) GA14-03627S Grant - others:Rada Programu interní podpory projektů mezinárodní spolupráce AV ČR(CZ) M200111204 Institutional support: RVO:67985823 Keywords : learning * memory * synaptic plasticity * neurogenesis Subject RIV: FH - Neurology Impact factor: 3.270, year: 2014

  14. Using machine learning, neural networks and statistics to predict bankruptcy

    NARCIS (Netherlands)

    Pompe, P.P.M.; Feelders, A.J.; Feelders, A.J.

    1997-01-01

    Recent literature strongly suggests that machine learning approaches to classification outperform "classical" statistical methods. We make a comparison between the performance of linear discriminant analysis, classification trees, and neural networks in predicting corporate bankruptcy. Linear

  15. Synaptic plasticity, neural circuits, and the emerging role of altered short-term information processing in schizophrenia

    Science.gov (United States)

    Crabtree, Gregg W.; Gogos, Joseph A.

    2014-01-01

    Synaptic plasticity alters the strength of information flow between presynaptic and postsynaptic neurons and thus modifies the likelihood that action potentials in a presynaptic neuron will lead to an action potential in a postsynaptic neuron. As such, synaptic plasticity and pathological changes in synaptic plasticity impact the synaptic computation which controls the information flow through the neural microcircuits responsible for the complex information processing necessary to drive adaptive behaviors. As current theories of neuropsychiatric disease suggest that distinct dysfunctions in neural circuit performance may critically underlie the unique symptoms of these diseases, pathological alterations in synaptic plasticity mechanisms may be fundamental to the disease process. Here we consider mechanisms of both short-term and long-term plasticity of synaptic transmission and their possible roles in information processing by neural microcircuits in both health and disease. As paradigms of neuropsychiatric diseases with strongly implicated risk genes, we discuss the findings in schizophrenia and autism and consider the alterations in synaptic plasticity and network function observed in both human studies and genetic mouse models of these diseases. Together these studies have begun to point toward a likely dominant role of short-term synaptic plasticity alterations in schizophrenia while dysfunction in autism spectrum disorders (ASDs) may be due to a combination of both short-term and long-term synaptic plasticity alterations. PMID:25505409

  16. Neural plasticity in functional and anatomical MRI studies of children with Tourette syndrome

    DEFF Research Database (Denmark)

    Eichele, Heike; Plessen, Kerstin J

    2012-01-01

    Background: Tourette syndrome (TS) is a neuropsychiatric disorder with childhood onset characterized by chronic motor and vocal tics. The typical clinical course of an attenuation of symptoms during adolescence in parallel with the emerging self-regulatory control during development suggests...... that plastic processes may play an important role in the development of tic symptoms. Methods: We conducted a systematic search to identify existing imaging studies (both anatomical and functional magnetic resonance imaging [fMRI]) in young persons under the age of 19 years with TS. Results: The final search...... compensatory pathways in children with TS. Along with alterations in regions putatively representing the origin of tics, deviations in several other regions most likely represent an activity-dependent neural plasticity that help to modulate tic severity, such as the prefrontal cortex, but also in the corpus...

  17. Sparing of descending axons rescues interneuron plasticity in the lumbar cord to allow adaptive learning after thoracic spinal cord injury

    Directory of Open Access Journals (Sweden)

    Christopher Nelson Hansen

    2016-03-01

    This study evaluated the role of spared axons on structural and behavioral neuroplasticity in the lumbar enlargement after a thoracic spinal cord injury (SCI). Previous work has demonstrated that recovery in the presence of spared axons after an incomplete lesion increases behavioral output after a subsequent complete spinal cord transection (TX). This suggests that spared axons direct adaptive changes in below-level neuronal networks of the lumbar cord. In response to spared fibers, we postulate that lumbar neuron networks support behavioral gains by preventing aberrant plasticity. As such, the present study measured histological and functional changes in the isolated lumbar cord after complete TX or incomplete contusion (SCI). To measure functional plasticity in the lumbar cord, we used an established instrumental learning paradigm. In this paradigm, neural circuits within isolated lumbar segments demonstrate learning by an increase in flexion duration that reduces exposure to a noxious leg shock. We employed this model using a proof-of-principle design to evaluate the role of sparing on lumbar learning and plasticity early (7 days) or late (42 days) after midthoracic SCI in a rodent model. Early after SCI or TX at 7 days, spinal learning was unattainable regardless of whether the animal recovered with or without axonal substrate. Failed learning occurred alongside measures of cell soma atrophy and aberrant dendritic spine expression within interneuron populations responsible for sensorimotor integration and learning. Alternatively, exposure of the lumbar cord to a small amount of spared axons for 6 weeks produced near-normal learning late after SCI. This coincided with greater cell soma volume and fewer aberrant dendritic spines on interneurons. Thus, an opportunity to influence activity-based learning in locomotor networks depends on spared axons limiting maladaptive plasticity. Together, this work identifies a time dependent interaction between

  18. A decision-making model based on a spiking neural circuit and synaptic plasticity.

    Science.gov (United States)

    Wei, Hui; Bu, Yijie; Dai, Dawei

    2017-10-01

    To adapt to the environment and survive, most animals can control their behaviors by making decisions. The process of decision-making and responding according to cues in the environment is stable, sustainable, and learnable. Understanding how behaviors are regulated by neural circuits, and the encoding and decoding mechanisms from stimuli to responses, are important goals in neuroscience. From results observed in Drosophila experiments, the underlying decision-making process is discussed, and a neural circuit that implements a two-choice decision-making model is proposed to explain and reproduce the observations. Compared with previous two-choice decision-making models, our model uses synaptic plasticity to explain changes in decision output given the same environment. Moreover, the biological meaning of the parameters of our decision-making model is discussed. In this paper, we explain at the micro-level (i.e., neurons and synapses) how observable decision-making behavior at the macro-level is acquired and achieved.
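The qualitative mechanism this abstract describes, two alternatives competing through plastic input synapses, can be caricatured in a few lines. The sketch below is not the authors' spiking model; the rate dynamics, mutual-inhibition strength, learning rate, noise level, and reward schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.5, 0.5])              # plastic input synapses onto two pools
ETA, INHIB = 0.1, 0.6                 # learning rate, cross-inhibition (assumed)

def decide(cue, w, rng):
    """Settle a two-pool rate model with mutual inhibition; the pool
    receiving the stronger (noisy) drive through its plastic synapse wins."""
    drive = w * cue + 0.05 * rng.random(2)
    r = np.zeros(2)
    for _ in range(100):              # relax to the fixed point
        r = np.clip(drive - INHIB * r[::-1], 0.0, None)
    return int(np.argmax(r))

def trial(w, rewarded_choice, rng):
    """Reward-gated Hebbian update: the chosen pool's input synapse is
    strengthened after reward and weakened otherwise."""
    c = decide(1.0, w, rng)
    delta = ETA if c == rewarded_choice else -ETA
    w[c] = np.clip(w[c] + delta, 0.05, 1.0)

# The environment first rewards choice 0, then reverses to choice 1:
# synaptic plasticity lets the same circuit change its decision output.
for _ in range(50):
    trial(w, 0, rng)
choice_before = decide(1.0, w, rng)
for _ in range(50):
    trial(w, 1, rng)
choice_after = decide(1.0, w, rng)
print(choice_before, choice_after)
```

The reversal phase is the point of the exercise: the same circuit, under the same cue, produces a different decision once the plastic weights have tracked the new reward contingency.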

  19. Embedding responses in spontaneous neural activity shaped through sequential learning.

    Directory of Open Access Journals (Sweden)

    Tomoki Kurikawa

    Full Text Available Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using a simple Hebbian-type learning rule, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through the learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a similar manner to those designed in
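The Hebbian-type storage of input/output mappings at the core of this model can be illustrated with a plain hetero-associative sketch. The paper's rate dynamics and bifurcation structure are much richer; the sizes, the ±1 coding, and the one-step sign readout below are simplifying assumptions.

```python
import numpy as np

# Storing input -> output mappings with a Hebbian-type rule, presented
# sequentially, then recalling each target by applying its input.
rng = np.random.default_rng(0)
N, P = 100, 5                              # neurons, input/output pairs
X = rng.choice([-1.0, 1.0], size=(P, N))   # input patterns
Y = rng.choice([-1.0, 1.0], size=(P, N))   # target output patterns

W = np.zeros((N, N))
for x, y in zip(X, Y):                     # sequential presentation
    W += np.outer(y, x) / N                # Hebbian-type update

def recall(W, x):
    """Applying an input drives the network toward its learned target."""
    return np.sign(W @ x)

overlaps = [float(recall(W, x) @ y) / N for x, y in zip(X, Y)]
print(min(overlaps))                       # near 1: every mapping retrieved
```

At this low loading the cross-talk between pairs is small, so each input reliably evokes its own target, which is the basic capacity the abstract's sequential learning scheme extends.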

  20. Deep Learning Neural Networks in Cybersecurity - Managing Malware with AI

    OpenAIRE

    Rayle, Keith

    2017-01-01

    There’s a lot of talk about the benefits of deep learning (neural networks) and how it’s the new electricity that will power us into the future. Medical diagnosis, computer vision and speech recognition are all examples of use-cases where neural networks are being applied in our everyday business environment. This raises the question: what are the uses of neural-network applications for cyber security? How does the AI process work when applying neural networks to detect malicious software bombar...

  1. Do Convolutional Neural Networks Learn Class Hierarchy?

    Science.gov (United States)

    Bilal, Alsallakh; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2018-01-01

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation to CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it also dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the later layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.
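One way to make the abstract's central observation concrete: a class hierarchy can be recovered from a confusion matrix by greedily merging the most-confused groups. The toy matrix, class names, and average-linkage merge criterion below are invented for illustration; they are not from the paper.

```python
import numpy as np

C = np.array([                     # rows: true class, cols: predicted class
    [90,  8,  1,  1],              # cat  <-> dog often confused
    [ 7, 91,  1,  1],
    [ 1,  1, 88, 10],              # car  <-> truck often confused
    [ 1,  1,  9, 89]], dtype=float)
names = ["cat", "dog", "car", "truck"]

# Symmetric confusion similarity; high when two classes are mixed up.
S = (C + C.T) / 2
np.fill_diagonal(S, 0)

clusters = [[i] for i in range(len(names))]
while len(clusters) > 1:
    # Merge the pair of clusters with the highest mean cross-confusion.
    best, pair = -1.0, None
    for a in range(len(clusters)):
        for b in range(a + 1, len(clusters)):
            score = np.mean([S[i, j] for i in clusters[a] for j in clusters[b]])
            if score > best:
                best, pair = score, (a, b)
    a, b = pair
    merged = clusters[a] + clusters[b]
    print([names[i] for i in merged])          # merge order reveals hierarchy
    clusters = [c for k, c in enumerate(clusters) if k not in pair] + [merged]
```

Here the visually similar pairs merge first, mirroring the paper's point that confusion structure is hierarchical over the classes.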

  2. Neural plasticity in functional and anatomical MRI studies of children with Tourette syndrome.

    Science.gov (United States)

    Eichele, Heike; Plessen, Kerstin J

    2013-01-01

    Tourette syndrome (TS) is a neuropsychiatric disorder with childhood onset characterized by chronic motor and vocal tics. The typical clinical course of an attenuation of symptoms during adolescence, in parallel with the emerging self-regulatory control during development, suggests that plastic processes may play an important role in the development of tic symptoms. We conducted a systematic search to identify existing imaging studies (both anatomical and functional magnetic resonance imaging [fMRI]) in young persons under the age of 19 years with TS. The final search resulted in 13 original studies, which were reviewed with a focus on findings suggesting adaptive processes (using fMRI) and plasticity (using anatomical MRI). Differences in brain activation compared to healthy controls during tasks that require overriding of prepotent responses help to understand compensatory pathways in children with TS. Along with alterations in regions putatively representing the origin of tics, deviations in several other regions most likely represent an activity-dependent neural plasticity that helps to modulate tic severity, such as in the prefrontal cortex, but also in the corpus callosum and the limbic system. Factors that potentially influence the development of adaptive changes in the brain of children with TS are age, comorbidity with other developmental disorders, medication use, and IQ, along with study design, MRI acquisition techniques, and data analysis. The most prominent limitation of all studies is their cross-sectional design. Longitudinal studies extending to younger age groups and to children at risk for developing TS hopefully will confirm findings of neural plasticity in future investigations.

  3. Changed Synaptic Plasticity in Neural Circuits of Depressive-Like and Escitalopram-Treated Rats

    Science.gov (United States)

    Li, Xiao-Li; Yuan, Yong-Gui; Xu, Hua; Wu, Di; Gong, Wei-Gang; Geng, Lei-Yu; Wu, Fang-Fang; Tang, Hao; Xu, Lin

    2015-01-01

    Background: Although progress has been made in the detection and characterization of neural plasticity in depression, the individual synaptic changes in the neural circuits under chronic stress and antidepressant treatment are not fully understood. Methods: Using electron microscopy and Western-blot analyses, the present study quantitatively examined the changes in the Gray’s Type I synaptic ultrastructures and the expression of synapse-associated proteins in the key brain regions of rats’ depression-related neural circuit after chronic unpredicted mild stress and/or escitalopram administration. Meanwhile, their depressive behaviors were also determined by several tests. Results: The Type I synapses underwent considerable remodeling after chronic unpredicted mild stress, which resulted in the changed width of the synaptic cleft, length of the active zone, postsynaptic density thickness, and/or synaptic curvature in the subregions of medial prefrontal cortex and hippocampus, as well as the basolateral amygdaloid nucleus of the amygdala, accompanied by changed expression of several synapse-associated proteins. Chronic escitalopram administration significantly changed the above alterations in the chronic unpredicted mild stress rats but had little effect on normal controls. Also, there was a positive correlation between the locomotor activity and the maximal postsynaptic density thickness in the stratum radiatum of the Cornu Ammonis 1 region, and a negative correlation between the sucrose preference and the length of the active zone in the basolateral amygdaloid nucleus region, in chronic unpredicted mild stress rats. Conclusion: These findings strongly indicate that chronic stress and escitalopram can alter synaptic plasticity in the neural circuits, and the remodeled synaptic ultrastructure was correlated with the rats’ depressive behaviors, suggesting a therapeutic target for further exploration. PMID:25899067

  4. Neural Behavior Chain Learning of Mobile Robot Actions

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2012-01-01

    Full Text Available This paper presents a visual/motor behavior learning approach based on neural networks. We propose the Behavior Chain Model (BCM) in order to create a way of behavior learning. Our behavior-based system evolution task is a mobile robot detecting a target and driving/acting towards it. First, the mapping relations between the image feature domain of the object and the robot action domain are derived. Second, a multilayer neural network for offline learning of the mapping relations is used. This learning structure, through the neural network training process, represents a connection between the visual perceptions and the motor sequence of actions needed to grip a target. Last, using behavior learning through the observed action chain, we can predict mobile robot behavior for a variety of similar tasks in a similar environment. Prediction results suggest that the methodology is adequate and could serve as a basis for designing assistance for a variety of similar mobile robot behaviours.

  5. The neural circuit basis of learning

    Science.gov (United States)

    Kaifosh, Patrick William John

    The astounding capacity for learning ranks among the nervous system's most impressive features. This thesis comprises studies employing varied approaches to improve understanding, at the level of neural circuits, of the brain's capacity for learning. The first part of the thesis contains investigations of hippocampal circuitry -- both theoretical work and experimental work in the mouse Mus musculus -- as a model system for declarative memory. To begin, Chapter 2 presents a theory of hippocampal memory storage and retrieval that reflects nonlinear dendritic processing within hippocampal pyramidal neurons. As a prelude to the experimental work that comprises the remainder of this part, Chapter 3 describes an open source software platform that we have developed for analysis of data acquired with in vivo Ca2+ imaging, the main experimental technique used throughout the remainder of this part of the thesis. As a first application of this technique, Chapter 4 characterizes the content of signaling at synapses between GABAergic neurons of the medial septum and interneurons in stratum oriens of hippocampal area CA1. Chapter 5 then combines these techniques with optogenetic, pharmacogenetic, and pharmacological manipulations to uncover inhibitory circuit mechanisms underlying fear learning. The second part of this thesis focuses on the cerebellum-like electrosensory lobe in the weakly electric mormyrid fish Gnathonemus petersii, as a model system for non-declarative memory. In Chapter 6, we study how short-duration EOD motor commands are recoded into a complex temporal basis in the granule cell layer, which can be used to cancel Purkinje-like cell firing to the longer duration and temporally varying EOD-driven sensory responses. In Chapter 7, we consider not only the temporal aspects of the granule cell code, but also the encoding of body position provided from proprioceptive and efference copy sources. 
Together these studies clarify how the cerebellum-like circuitry of the

  6. The neural basis of implicit learning and memory: a review of neuropsychological and neuroimaging research.

    Science.gov (United States)

    Reber, Paul J

    2013-08-01

    Memory systems research has typically described the different types of long-term memory in the brain as either declarative versus non-declarative or implicit versus explicit. These descriptions reflect the difference between declarative, conscious, and explicit memory that is dependent on the medial temporal lobe (MTL) memory system, and all other expressions of learning and memory. The other type of memory is generally defined by an absence: either the lack of dependence on the MTL memory system (nondeclarative) or the lack of conscious awareness of the information acquired (implicit). However, definition by absence is inherently underspecified and leaves open questions of how this type of memory operates, its neural basis, and how it differs from explicit, declarative memory. Drawing on a variety of studies of implicit learning that have attempted to identify the neural correlates of implicit learning using functional neuroimaging and neuropsychology, a theory of implicit memory is presented that describes it as a form of general plasticity within processing networks that adaptively improve function via experience. Under this model, implicit memory will not appear as a single, coherent, alternative memory system but will instead be manifested as a principle of improvement from experience based on widespread mechanisms of cortical plasticity. The implications of this characterization for understanding the role of implicit learning in complex cognitive processes and the effects of interactions between types of memory will be discussed for examples within and outside the psychology laboratory. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Learning in neural networks based on a generalized fluctuation theorem

    Science.gov (United States)

    Hayakawa, Takashi; Aoyagi, Toshio

    2015-11-01

    Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universal laws describing the fluctuation they exhibit and the information they possess have recently been discovered. These laws are termed fluctuation theorems. In the present study, we formulate a theory of learning in neural networks bidirectionally interacting with environments based on the principle of information maximization. Our formulation begins with the introduction of a generalized fluctuation theorem, employing an interpretation appropriate for the present application, which differs from the original thermodynamic interpretation. We analytically and numerically demonstrate that the learning mechanism presented in our theory allows neural networks to efficiently explore their environments and optimally encode information about them.

  8. Biologically based neural circuit modelling for the study of fear learning and extinction

    Science.gov (United States)

    Nair, Satish S.; Paré, Denis; Vicentic, Aleksandra

    2016-11-01

    The neuronal systems that promote protective defensive behaviours have been studied extensively using Pavlovian conditioning. In this paradigm, an initially neutral conditioned stimulus is paired with an aversive unconditioned stimulus, leading the subjects to display behavioural signs of fear. Decades of research into the neural bases of this simple behavioural paradigm uncovered that the amygdala, a complex structure comprising several interconnected nuclei, is an essential part of the neural circuits required for the acquisition, consolidation and expression of fear memory. However, emerging evidence from the confluence of electrophysiological, tract tracing, imaging, molecular, optogenetic and chemogenetic methodologies reveals that fear learning is mediated by multiple connections between several amygdala nuclei and their distributed targets, dynamical changes in plasticity in local circuit elements, as well as neuromodulatory mechanisms that promote synaptic plasticity. To uncover these complex relations and analyse multi-modal data sets acquired from these studies, we argue that biologically realistic computational modelling, in conjunction with experiments, offers an opportunity to advance our understanding of the neural circuit mechanisms of fear learning and to address how their dysfunction may lead to maladaptive fear responses in mental disorders.

  9. Upper Limb Immobilisation: A Neural Plasticity Model with Relevance to Poststroke Motor Rehabilitation

    Directory of Open Access Journals (Sweden)

    Leonardo Furlan

    2016-01-01

    Full Text Available Advances in our understanding of the neural plasticity that occurs after hemiparetic stroke have contributed to the formulation of theories of poststroke motor recovery. These theories, in turn, have underpinned contemporary motor rehabilitation strategies for treating motor deficits after stroke, such as upper limb hemiparesis. However, a relative drawback has been that, in general, these strategies are most compatible with the recovery profiles of relatively high-functioning stroke survivors and therefore do not easily translate into benefit to those individuals sustaining low-functioning upper limb hemiparesis, who otherwise have poorer residual function. For these individuals, alternative motor rehabilitation strategies are currently needed. In this paper, we will review upper limb immobilisation studies that have been conducted with healthy adult humans and animals. Then, we will discuss how the findings from these studies could inspire the creation of a neural plasticity model that is likely to be of particular relevance to the context of motor rehabilitation after stroke. For instance, as will be elaborated, such a model could contribute to the development of alternative motor rehabilitation strategies for treating poststroke upper limb hemiparesis. The implications of the findings from those immobilisation studies for contemporary motor rehabilitation strategies will also be discussed, and perspectives for future research in this arena will be provided as well.

  10. Pushing the Limits: Cognitive, Affective, & Neural Plasticity Revealed by an Intensive Multifaceted Intervention

    Directory of Open Access Journals (Sweden)

    Michael David Mrazek

    2016-03-01

    Full Text Available Scientific understanding of how much the adult brain can be shaped by experience requires examination of how multiple influences combine to elicit cognitive, affective, and neural plasticity. Using an intensive multifaceted intervention, we discovered that substantial and enduring improvements can occur in parallel across multiple cognitive and neuroimaging measures in healthy young adults. The intervention elicited substantial improvements in physical health, working memory, standardized test performance, mood, self-esteem, self-efficacy, mindfulness, and life satisfaction. Improvements in mindfulness were associated with increased degree centrality of the insula, greater functional connectivity between insula and somatosensory cortex, and reduced functional connectivity between posterior cingulate cortex and somatosensory cortex. Improvements in working memory and reading comprehension were associated with increased degree centrality of a region within the middle temporal gyrus that was extensively and predominantly integrated with the executive control network. The scope and magnitude of the observed improvements represent the most extensive demonstration to date of the considerable human capacity for change. These findings point to higher limits for rapid and concurrent cognitive, affective, and neural plasticity than is widely assumed.

  11. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Alireza Alemi

    2015-08-01

    Full Text Available Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model's simplicity and the locality of its synaptic update rules come at the cost of poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the
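A literal reading of the three-threshold rule described in this abstract can be sketched as follows. The network size, threshold values, learning rate, and afferent-current strength are illustrative assumptions, not the paper's analytically derived values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                           # neurons, binary patterns (low load)
patterns = rng.integers(0, 2, size=(P, N)).astype(float)

THETA_LOW, THETA_MID, THETA_HIGH = -1.0, 3.0, 7.0
ETA, EXT = 0.01, 6.0                    # learning rate; strong afferent current

W = np.zeros((N, N))

def present(W, x):
    """One online presentation: the pattern arrives as a strong afferent
    current (bimodal total input), and only synapses from active inputs are
    updated, as a function of the local field relative to three thresholds."""
    h = W @ x + EXT * x
    dw = np.zeros(N)
    dw[(THETA_MID < h) & (h < THETA_HIGH)] = +ETA   # potentiation window
    dw[(THETA_LOW < h) & (h < THETA_MID)] = -ETA    # depression window
    W += np.outer(dw, x)                # x masks: only active inputs change
    np.fill_diagonal(W, 0.0)
    return W

for _ in range(30):                     # repeated online presentations
    for x in patterns:
        W = present(W, x)

# After learning, each pattern should be a stable state of the recurrent net.
accuracy = [np.mean(((W @ x) > 0).astype(float) == x) for x in patterns]
print(min(accuracy))
```

Note how plasticity stops once a neuron's field leaves the two inner windows, which is what implicitly enforces a margin without any explicit supervisory error signal.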

  12. Neural Correlates of High Performance in Foreign Language Vocabulary Learning

    Science.gov (United States)

    Macedonia, Manuela; Muller, Karsten; Friederici, Angela D.

    2010-01-01

    Learning vocabulary in a foreign language is a laborious task which people perform with varying levels of success. Here, we investigated the neural underpinning of high performance on this task. In a within-subjects paradigm, participants learned 92 vocabulary items under two multimodal conditions: one condition paired novel words with iconic…

  13. Neural Plasticity and Proliferation in the Generation of Antidepressant Effects: Hippocampal Implication

    Directory of Open Access Journals (Sweden)

    Fuencisla Pilar-Cuéllar

    2013-01-01

    Full Text Available It is widely accepted that the changes underlying depression and antidepressant-like effects involve not only alterations in the levels of neurotransmitters such as monoamines and their receptors in the brain, but also structural and functional changes far beyond. During the last two decades, emerging theories are providing new explanations about the neurobiology of depression and the mechanism of action of antidepressant strategies based on cellular changes at the CNS level. The neurotrophic/plasticity hypothesis of depression, proposed more than a decade ago, is now supported by multiple basic and clinical studies focused on the role of intracellular-signalling cascades that govern neural proliferation and plasticity. Herein, we review the state-of-the-art of the changes in these signalling pathways which appear to underlie both depressive disorders and antidepressant actions. We will especially focus on the modulation of hippocampal cellularity and plasticity by serotonin and by trophic factors such as brain-derived neurotrophic factor (BDNF) and vascular endothelial growth factor (VEGF), through intracellular signalling pathways: cAMP, Wnt/β-catenin, and mTOR. Connecting the classic monoaminergic hypothesis with proliferation/neuroplasticity-related evidence is an appealing and comprehensive attempt at improving our knowledge about the neurobiological events leading to depression and associated with antidepressant therapies.

  14. Vicarious neural processing of outcomes during observational learning.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on the ability to map the actions of others onto our own, process outcomes, and combine these sources of information. Here, we combined newly developed experimental tasks and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms that govern such observational learning. Results show that the neural systems involved in individual trial-and-error learning and in action observation and execution both participate in observational learning. In addition, we identified brain areas that specifically activate for others' incorrect outcomes during learning in the posterior medial frontal cortex (pMFC), the anterior insula and the posterior superior temporal sulcus (pSTS).

  15. Vicarious neural processing of outcomes during observational learning.

    Science.gov (United States)

    Monfardini, Elisabetta; Gazzola, Valeria; Boussaoud, Driss; Brovelli, Andrea; Keysers, Christian; Wicker, Bruno

    2013-01-01

    Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on the ability to map the actions of others onto our own, process outcomes, and combine these sources of information. Here, we combined newly developed experimental tasks and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms that govern such observational learning. Results show that the neural systems involved in individual trial-and-error learning and in action observation and execution both participate in observational learning. In addition, we identified brain areas that specifically activate for others' incorrect outcomes during learning in the posterior medial frontal cortex (pMFC), the anterior insula and the posterior superior temporal sulcus (pSTS).

  16. Do neural nets learn statistical laws behind natural language?

    Directory of Open Access Journals (Sweden)

    Shuntaro Takahashi

    Full Text Available The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Specifically, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.
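The two statistical laws this abstract tests for can be measured on any token stream, whether real text or samples from a language model. The diagnostics below are standard; the toy corpus is sampled from an ideal Zipfian unigram distribution purely so they have something law-like to measure.

```python
import numpy as np
from collections import Counter

def zipf_curve(tokens):
    """Rank-frequency curve: type frequencies sorted in descending order."""
    return np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)

def heaps_curve(tokens):
    """Vocabulary growth: number of distinct types after each token."""
    seen, growth = set(), []
    for t in tokens:
        seen.add(t)
        growth.append(len(seen))
    return np.array(growth, dtype=float)

def loglog_slope(y):
    """Least-squares slope of log y against log position (1-indexed)."""
    x = np.arange(1, len(y) + 1, dtype=float)
    return float(np.polyfit(np.log(x), np.log(y), 1)[0])

# Toy corpus: 50k tokens over a 1000-type Zipfian unigram distribution.
rng = np.random.default_rng(0)
V = 1000
p = 1.0 / np.arange(1, V + 1)
p /= p.sum()
tokens = rng.choice(V, size=50_000, p=p)

zipf_slope = loglog_slope(zipf_curve(tokens))    # near -1 for Zipfian text
heaps_slope = loglog_slope(heaps_curve(tokens))  # positive, sublinear growth
print(round(zipf_slope, 2), round(heaps_slope, 2))
```

Applied to text generated by a trained language model, the same two slopes give a quick check of whether the model has picked up these statistical regularities.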

  17. Neural Monkey: An Open-source Tool for Sequence Learning

    Directory of Open Access Journals (Sweden)

    Helcl Jindřich

    2017-04-01

    Full Text Available In this paper, we announce the development of Neural Monkey – an open-source neural machine translation (NMT) and general sequence-to-sequence learning system built over the TensorFlow machine learning library. The system provides a high-level API tailored for fast prototyping of complex architectures with multiple sequence encoders and decoders. Models’ overall architecture is specified in easy-to-read configuration files. The long-term goal of the Neural Monkey project is to create and maintain a growing collection of implementations of recently proposed components or methods, and therefore it is designed to be easily extensible. Trained models can be deployed either for batch data processing or as a web service. In the presented paper, we describe the design of the system and introduce the reader to running experiments using Neural Monkey.

  18. Fastest learning in small-world neural networks

    International Nuclear Information System (INIS)

    Simard, D.; Nadeau, L.; Kroeger, H.

    2005-01-01

    We investigate supervised learning in neural networks. We consider a multi-layered feed-forward network with back propagation. We find that a network with small-world connectivity reduces the learning error and learning time when compared to networks with regular or random connectivity. Our study has potential applications in the domains of data mining, image processing, speech recognition, and pattern recognition.
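The small-world connectivity this study compares against regular and random wiring is conventionally generated by Watts-Strogatz-style rewiring of a ring lattice. A numpy-only sketch of that construction, with arbitrary parameters, showing the characteristic drop in path length that a few shortcuts produce:

```python
import numpy as np
from collections import deque

def ring_lattice(n, k):
    """Adjacency matrix of a ring where each node links to its k nearest
    neighbours on each side (the 'regular' end of the spectrum)."""
    A = np.zeros((n, n), dtype=bool)
    for offset in range(1, k + 1):
        idx = np.arange(n)
        A[idx, (idx + offset) % n] = True
        A[(idx + offset) % n, idx] = True
    return A

def rewire(A, p, rng):
    """Rewire each edge with probability p to a random non-neighbour."""
    A = A.copy()
    for i, j in zip(*np.where(np.triu(A))):
        if rng.random() < p:
            candidates = np.where(~A[i])[0]
            candidates = candidates[candidates != i]
            if len(candidates):
                t = rng.choice(candidates)
                A[i, j] = A[j, i] = False
                A[i, t] = A[t, i] = True
    return A

def mean_path_length(A):
    """Average shortest-path length via BFS from every node."""
    n = len(A)
    total, pairs = 0, 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.where(A[u])[0]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += dist[dist > 0].sum()
        pairs += (dist > 0).sum()
    return total / pairs

rng = np.random.default_rng(1)
A_reg = ring_lattice(100, 3)
A_sw = rewire(A_reg, 0.1, rng)
print(mean_path_length(A_reg), mean_path_length(A_sw))
```

The shortened paths are the usual intuition for why small-world wiring can speed the propagation of error signals relative to a purely regular lattice.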

  19. Development switch in neural circuitry underlying odor-malaise learning.

    Science.gov (United States)

    Shionoya, Kiseko; Moriceau, Stephanie; Lunday, Lauren; Miner, Cathrine; Roth, Tania L; Sullivan, Regina M

    2006-01-01

    Fetal and infant rats can learn to avoid odors paired with illness before development of the brain areas supporting this learning in adults, suggesting an alternate learning circuit. Here we begin to document the transition from the infant to the adult neural circuit underlying odor-malaise avoidance learning using LiCl (0.3 M; 1% of body weight, ip) and a 30-min peppermint-odor exposure. Conditioning groups included: Paired odor-LiCl, Paired odor-LiCl-Nursing, LiCl, and odor-saline. Results showed that Paired LiCl-odor conditioning induced a learned odor aversion in postnatal day (PN) 7, 12, and 23 pups. Odor-LiCl Paired Nursing induced a learned odor preference in PN7 and PN12 pups but blocked learning in PN23 pups. 14C 2-deoxyglucose (2-DG) autoradiography indicated enhanced olfactory bulb activity in PN7 and PN12 pups with odor preference and avoidance learning. The odor aversion in weanling-aged (PN23) pups resulted in enhanced amygdala activity in Paired odor-LiCl pups, but not if they were nursing. Thus, the neural circuit supporting malaise-induced aversions changes over development, indicating that similar infant- and adult-learned behaviors may have distinct neural circuits.

  20. Ischemic long-term-potentiation (iLTP): perspectives to set the threshold of neural plasticity toward therapy

    Directory of Open Access Journals (Sweden)

    Maximilian Lenz

    2015-01-01

    Full Text Available The precise role of neural plasticity under pathological conditions remains not well understood. It appears to be well accepted, however, that changes in the ability of neurons to express plasticity accompany neurological diseases. Here, we discuss recent experimental evidence which suggests that synaptic plasticity induced by a pathological stimulus, i.e., ischemic long-term-potentiation (iLTP) of excitatory synapses, could play an important role in post-stroke recovery by influencing the post-lesional reorganization of surviving neuronal networks.

  1. Reaction-diffusion-like formalism for plastic neural networks reveals dissipative solitons at criticality

    Science.gov (United States)

    Grytskyy, Dmytro; Diesmann, Markus; Helias, Moritz

    2016-06-01

    Self-organized structures in networks with spike-timing dependent synaptic plasticity (STDP) are likely to play a central role for information processing in the brain. In the present study we derive a reaction-diffusion-like formalism for plastic feed-forward networks of nonlinear rate-based model neurons with a correlation sensitive learning rule inspired by and being qualitatively similar to STDP. After obtaining equations that describe the change of the spatial shape of the signal from layer to layer, we derive a criterion for the nonlinearity necessary to obtain stable dynamics for arbitrary input. We classify the possible scenarios of signal evolution and find that close to the transition to the unstable regime metastable solutions appear. The form of these dissipative solitons is determined analytically and the evolution and interaction of several such coexistent objects is investigated.

  2. Exploring the spatio-temporal neural basis of face learning

    Science.gov (United States)

    Yang, Ying; Xu, Yang; Jew, Carol A.; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.

    2017-01-01

    Humans are experts at face individuation. Although previous work has identified a network of face-sensitive regions and some of the temporal signatures of face processing, as yet, we do not have a clear understanding of how such face-sensitive regions support learning at different time points. To study the joint spatio-temporal neural basis of face learning, we trained subjects to categorize two groups of novel faces and recorded their neural responses using magnetoencephalography (MEG) throughout learning. A regression analysis of neural responses in face-sensitive regions against behavioral learning curves revealed significant correlations with learning in the majority of the face-sensitive regions in the face network, mostly between 150 and 250 ms, but also after 300 ms. However, the effect was smaller in nonventral regions (within the superior temporal areas and prefrontal cortex) than in the ventral regions (within the inferior occipital gyri (IOG), midfusiform gyri (mFUS) and anterior temporal lobes). A multivariate discriminant analysis also revealed that IOG and mFUS, which showed strong correlation effects with learning, exhibited significant discriminability between the two face categories at different time points, both between 150 and 250 ms and after 300 ms. In contrast, the nonventral face-sensitive regions, where correlation effects with learning were smaller, did exhibit some significant discriminability, but mainly after 300 ms. In sum, our findings indicate that early and recurring temporal components arising from ventral face-sensitive regions are critically involved in learning new faces. PMID:28570739

  3. Where's the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network.

    Directory of Open Access Journals (Sweden)

    Christoph Hartmann

    2015-12-01

    Full Text Available Even in the absence of sensory stimulation the brain is spontaneously active. This background "noise" seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural
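The combination of mechanisms this abstract describes can be illustrated with a toy model. The sketch below pairs an STDP-like rule with synaptic normalization and homeostatic threshold adaptation in a small network of binary threshold units; all parameter values and variable names are illustrative assumptions, not taken from the SORN paper.

```python
import random

random.seed(0)
N, TARGET_RATE, ETA_STDP, ETA_IP = 20, 0.2, 0.01, 0.005

# Random sparse excitatory weights (no self-connections) and per-unit thresholds.
w = [[random.random() * 0.1 if i != j else 0.0 for j in range(N)] for i in range(N)]
theta = [0.5] * N
x_prev = [random.random() < 0.5 for _ in range(N)]

for t in range(500):
    # Binary threshold-unit update driven by the recurrent input.
    drive = [sum(w[i][j] * x_prev[j] for j in range(N)) for i in range(N)]
    x = [drive[i] > theta[i] for i in range(N)]
    for i in range(N):
        # STDP-like rule: potentiate j->i when j fired just before i, depress the reverse.
        for j in range(N):
            if i != j:
                w[i][j] += ETA_STDP * (x[i] * x_prev[j] - x_prev[i] * x[j])
                w[i][j] = max(w[i][j], 0.0)
        # Synaptic normalization keeps each unit's total incoming weight constant.
        s = sum(w[i])
        if s > 0:
            w[i] = [wij / s for wij in w[i]]
        # Homeostatic (intrinsic) plasticity nudges the threshold toward a target rate.
        theta[i] += ETA_IP * (x[i] - TARGET_RATE)
    x_prev = x

rate = sum(x_prev) / N
```

Despite being fully deterministic once seeded, the interplay of the Hebbian and homeostatic terms keeps activity irregular rather than saturating, which is the qualitative point of the study.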

  4. Deep learning classification in asteroseismology using an improved neural network

    DEFF Research Database (Denmark)

    Hon, Marc; Stello, Dennis; Yu, Jie

    2018-01-01

    Deep learning in the form of 1D convolutional neural networks has previously been shown to be capable of efficiently classifying the evolutionary state of oscillating red giants into red giant branch stars and helium-core burning stars by recognizing visual features in their asteroseismic...... frequency spectra. We elaborate further on the deep learning method by developing an improved convolutional neural network classifier. To make our method useful for current and future space missions such as K2, TESS, and PLATO, we train classifiers that are able to classify the evolutionary states of lower...

  5. The Neural Cell Adhesion Molecule-Derived Peptide FGL Facilitates Long-Term Plasticity in the Dentate Gyrus in Vivo

    Science.gov (United States)

    Dallerac, Glenn; Zerwas, Meike; Novikova, Tatiana; Callu, Delphine; Leblanc-Veyrac, Pascale; Bock, Elisabeth; Berezin, Vladimir; Rampon, Claire; Doyere, Valerie

    2011-01-01

    The neural cell adhesion molecule (NCAM) is known to play a role in developmental and structural processes but also in synaptic plasticity and memory of the adult animal. Recently, FGL, a NCAM mimetic peptide that binds to the Fibroblast Growth Factor Receptor 1 (FGFR-1), has been shown to have a beneficial impact on normal memory functioning, as…

  6. Oscillations, Timing, Plasticity, and Learning in the Cerebellum.

    Science.gov (United States)

    Cheron, G; Márquez-Ruiz, J; Dan, B

    2016-04-01

    The highly stereotyped, crystal-like architecture of the cerebellum has long served as a basis for hypotheses with regard to the function(s) that it subserves. Historically, most clinical observations and experimental work have focused on the involvement of the cerebellum in motor control, with particular emphasis on coordination and learning. Two main models have been suggested to account for cerebellar functioning. According to Llinás's theory, the cerebellum acts as a control machine that uses the rhythmic activity of the inferior olive to synchronize Purkinje cell populations for fine-tuning of coordination. In contrast, the Ito-Marr-Albus theory views the cerebellum as a motor learning machine that heuristically refines synaptic weights of the Purkinje cell based on error signals coming from the inferior olive. Here, we review the role of timing of neuronal events, oscillatory behavior, and synaptic and non-synaptic influences in functional plasticity that can be recorded in awake animals in various physiological and pathological models in a perspective that also includes non-motor aspects of cerebellar function. We discuss organizational levels from genes through intracellular signaling, synaptic network to system and behavior, as well as processes from signal production and processing to memory, delegation, and actual learning. We suggest an integrative concept for control and learning based on articulated oscillation templates.

  7. Music Mnemonics Aid Verbal Memory and Induce Learning-Related Brain Plasticity in Multiple Sclerosis

    Science.gov (United States)

    Thaut, Michael H.; Peterson, David A.; McIntosh, Gerald C.; Hoemberg, Volker

    2014-01-01

    Recent research on music and brain function has suggested that the temporal pattern structure in music and rhythm can enhance cognitive functions. To further elucidate this question specifically for memory, we investigated if a musical template can enhance verbal learning in patients with multiple sclerosis (MS) and if music-assisted learning will also influence short-term, system-level brain plasticity. We measured systems-level brain activity with oscillatory network synchronization during music-assisted learning. Specifically, we measured the spectral power of 128-channel electroencephalogram (EEG) in alpha and beta frequency bands in 54 patients with MS. The study sample was randomly divided into two groups, either hearing a spoken or a musical (sung) presentation of Rey’s auditory verbal learning test. We defined the “learning-related synchronization” (LRS) as the percent change in EEG spectral power from the first time the word was presented to the average of the subsequent word encoding trials. LRS differed significantly between the music and the spoken conditions in low alpha and upper beta bands. Patients in the music condition showed overall better word memory and better word order memory and stronger bilateral frontal alpha LRS than patients in the spoken condition. The evidence suggests that a musical mnemonic recruits stronger oscillatory network synchronization in prefrontal areas in MS patients during word learning. It is suggested that the temporal structure implicit in musical stimuli enhances “deep encoding” during verbal learning and sharpens the timing of neural dynamics in brain networks degraded by demyelination in MS. PMID:24982626
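The "learning-related synchronization" (LRS) measure defined in this abstract is simple to state in code: the percent change in band power from the first encoding trial to the mean of the subsequent trials. The sketch below uses made-up power values purely for illustration.

```python
def learning_related_synchronization(trial_powers):
    """Percent change in spectral power from the first encoding trial
    to the average of the subsequent encoding trials (the LRS measure)."""
    first = trial_powers[0]
    later = sum(trial_powers[1:]) / len(trial_powers[1:])
    return 100.0 * (later - first) / first

# Illustrative alpha-band power values for five encoding trials of one word.
alpha_power = [4.0, 4.4, 4.6, 5.0, 5.0]
lrs = learning_related_synchronization(alpha_power)  # mean of later trials is 4.75 -> +18.75%
```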

  8. Dynamic Hebbian Cross-Correlation Learning Resolves the Spike Timing Dependent Plasticity Conundrum

    Directory of Open Access Journals (Sweden)

    Tjeerd V. olde Scheper

    2018-01-01

    Full Text Available Spike Timing-Dependent Plasticity has been found to assume many different forms. The classic STDP curve, with one potentiating and one depressing window, is only one of many possible curves that describe synaptic learning using the STDP mechanism. It has been shown experimentally that STDP curves may contain multiple LTP and LTD windows of variable width, and even inverted windows. The underlying STDP mechanism that is capable of producing such an extensive, and apparently incompatible, range of learning curves is still under investigation. In this paper, it is shown that STDP originates from a combination of two dynamic Hebbian cross-correlations of local activity at the synapse. The correlation of the presynaptic activity with the local postsynaptic activity is a robust and reliable indicator of the discrepancy between the presynaptic neuron and the postsynaptic neuron's activity. The second correlation is between the local postsynaptic activity with dendritic activity which is a good indicator of matching local synaptic and dendritic activity. We show that this simple time-independent learning rule can give rise to many forms of the STDP learning curve. The rule regulates synaptic strength without the need for spike matching or other supervisory learning mechanisms. Local differences in dendritic activity at the synapse greatly affect the cross-correlation difference which determines the relative contributions of different neural activity sources. Dendritic activity due to nearby synapses, action potentials, both forward and back-propagating, as well as inhibitory synapses will dynamically modify the local activity at the synapse, and the resulting STDP learning rule. The dynamic Hebbian learning rule ensures furthermore, that the resulting synaptic strength is dynamically stable, and that interactions between synapses do not result in local instabilities. The rule clearly demonstrates that synapses function as independent localized

  9. Deep learning with convolutional neural network in radiology.

    Science.gov (United States)

    Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu

    2018-04-01

    Deep learning with a convolutional neural network (CNN) is gaining attention recently for its high performance in image recognition. Images themselves can be utilized in a learning process with this technique, and feature extraction in advance of the learning process is not required. Important features can be automatically learned. Thanks to the development of hardware and software in addition to techniques regarding deep learning, applications of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, are beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual course (collecting data, implementing CNNs, and training and testing phases). Pitfalls regarding this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, results of recent clinical studies, and the future directions of clinical application of deep learning techniques.
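The core operations a CNN stacks (convolution, nonlinearity, pooling) can be shown without any framework. The pure-Python sketch below applies a hand-written vertical-edge kernel to a toy "image"; in a real CNN the kernel weights would be learned from data rather than specified, which is the article's point about automatic feature learning.

```python
def conv2d(image, kernel):
    """Valid 2D convolution (strictly, cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def relu(fmap):
    """Elementwise rectification, the standard CNN nonlinearity."""
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool2(fmap):
    """2x2 max pooling with stride 2, downsampling the feature map."""
    return [[max(fmap[r][c], fmap[r][c + 1], fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, len(fmap[0]) - 1, 2)]
            for r in range(0, len(fmap) - 1, 2)]

# A vertical-edge detector applied to a toy 6x6 image with a bright right half.
image = [[0, 0, 0, 1, 1, 1]] * 6
edge_kernel = [[-1, 0, 1]] * 3
features = max_pool2(relu(conv2d(image, edge_kernel)))  # strong response at the edge
```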

  10. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Zenke, Friedemann; Ganguli, Surya

    2018-04-13

    A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
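The surrogate gradient idea at the heart of this abstract can be sketched on a single threshold unit: the spike nonlinearity has a zero or undefined derivative, so training substitutes a smooth surrogate evaluated at the membrane potential. This is a minimal illustration of the principle, not the paper's three-factor SuperSpike rule; the fast-sigmoid shape and all parameters are assumptions.

```python
def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Smooth stand-in for the derivative of the step-like spike nonlinearity
    (a fast-sigmoid shape, one common choice in surrogate-gradient training)."""
    return 1.0 / (1.0 + beta * abs(v - threshold)) ** 2

# Train one threshold unit to spike for the first pattern and stay silent for the second.
patterns = [([1.0, 0.0, 1.0], 1), ([0.0, 1.0, 0.0], 0)]
w = [0.2, 0.2, 0.2]
lr = 0.5
for _ in range(200):
    for x, target in patterns:
        v = sum(wi * xi for wi, xi in zip(w, x))   # membrane potential
        spike = 1 if v >= 1.0 else 0               # non-differentiable spike output
        err = target - spike
        # Gradient-like step using the surrogate in place of d(spike)/dv.
        w = [wi + lr * err * surrogate_grad(v) * xi for wi, xi in zip(w, x)]

v1 = sum(wi * xi for wi, xi in zip(w, patterns[0][0]))  # drive for the "spike" pattern
v0 = sum(wi * xi for wi, xi in zip(w, patterns[1][0]))  # drive for the "silent" pattern
```

After training, the first pattern drives the unit above threshold while the second stays below it, even though the spike itself never provided a usable gradient.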

  11. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  12. Neural plasticity expressed in central auditory structures with and without tinnitus

    Directory of Open Access Journals (Sweden)

    Larry E Roberts

    2012-05-01

    Full Text Available Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To address this question, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by EEG are similar to those induced in age and hearing loss matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz 40-Hz amplitude-modulated sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR and P2 transient response known to localize to primary and nonprimary auditory cortex, respectively. P2 amplitude increased with training equally in participants with tinnitus and in control subjects, suggesting normal remodeling of nonprimary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls ASSR phase advanced toward the stimulus waveform by about ten degrees over training, in agreement with previous results obtained in young normal hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal hearing subjects did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not nonprimary auditory cortex. Auditory training did not reduce tinnitus loudness although a small effect on the tinnitus spectrum was detected.

  13. Neural-Fitted TD-Leaf Learning for Playing Othello With Structured Neural Networks

    NARCIS (Netherlands)

    van den Dries, Sjoerd; Wiering, Marco A.

    This paper describes a methodology for quickly learning to play games at a strong level. The methodology consists of a novel combination of three techniques, and a variety of experiments on the game of Othello demonstrates their usefulness. First, structures or topologies in neural network

  14. Learning and Generalisation in Neural Networks with Local Preprocessing

    OpenAIRE

    Kutsia, Merab

    2007-01-01

    We study learning and generalisation ability of a specific two-layer feed-forward neural network and compare its properties to that of a simple perceptron. The input patterns are mapped nonlinearly onto a hidden layer, much larger than the input layer, and this mapping is either fixed or may result from an unsupervised learning process. Such preprocessing of initially uncorrelated random patterns results in the correlated patterns in the hidden layer. The hidden-to-output mapping of the net...

  15. Learning and forgetting on asymmetric, diluted neural networks

    International Nuclear Information System (INIS)

    Derrida, B.; Nadal, J.P.

    1987-01-01

    It is possible to construct diluted asymmetric models of neural networks for which the dynamics can be calculated exactly. The authors test several learning schemes, in particular, models for which the values of the synapses remain bounded and depend on the history. Our analytical results on the relative efficiencies of the various learning schemes are qualitatively similar to the corresponding ones obtained numerically on fully connected symmetric networks

  16. Genetic learning in rule-based and neural systems

    Science.gov (United States)

    Smith, Robert E.

    1993-01-01

    The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GAs) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GAs have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GAs. GAs are then related to a class of rule-based machine learning systems called learning classifier systems (LCSs). An LCS implements a low-level production system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GAs in ecological search problems that arise in neural and fuzzy systems.
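The GA mechanics the abstract refers to (selection, crossover, mutation) fit in a few lines. The sketch below maximizes the classic "one-max" toy objective rather than a network design problem; population size, tournament size, and mutation rate are illustrative choices, not values from the presentation.

```python
import random

random.seed(1)

def fitness(bits):
    """Toy objective: number of ones in the bitstring (the one-max problem)."""
    return sum(bits)

def tournament(pop, k=3):
    """Selection: the fittest of k randomly drawn individuals reproduces."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """Single-point crossover of two parent bitstrings."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    """Flip each bit independently with a small probability."""
    return [1 - b if random.random() < rate else b for b in bits]

L, POP, GENS = 30, 40, 60
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]
best = max(pop, key=fitness)
```

Note that none of the operators use gradient or continuity information about the objective, which is the computational-leverage argument made above.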

  17. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    Science.gov (United States)

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the building blocks of deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of those models need a user-defined constant to control the sparsity of representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by optimizing two objectives, reconstruction error and the sparsity of hidden units, simultaneously to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and that the proposed multiobjective model can learn useful sparse features.

  18. E-I balance emerges naturally from continuous Hebbian learning in autonomous neural networks.

    Science.gov (United States)

    Trapp, Philip; Echeveste, Rodrigo; Gros, Claudius

    2018-06-12

    Spontaneous brain activity is characterized in part by a balanced asynchronous chaotic state. Cortical recordings show that excitatory (E) and inhibitory (I) drivings in the E-I balanced state are substantially larger than the overall input. We show that such a state arises naturally in fully adapting networks which are deterministic, autonomously active and not subject to stochastic external or internal drivings. Temporary imbalances between excitatory and inhibitory inputs lead to large but short-lived activity bursts that stabilize irregular dynamics. We simulate autonomous networks of rate-encoding neurons for which all synaptic weights are plastic and subject to a Hebbian plasticity rule, the flux rule, that can be derived from the stationarity principle of statistical learning. Moreover, the average firing rate is regulated individually via a standard homeostatic adaptation of the bias of each neuron's input-output non-linear function. Additionally, networks with and without short-term plasticity are considered. E-I balance may arise only when the mean excitatory and inhibitory weights are themselves balanced, modulo the overall activity level. We show that synaptic weight balance, which has been considered hitherto as given, naturally arises in autonomous neural networks when the self-limiting Hebbian synaptic plasticity rule considered here is continuously active.
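The two ingredients named here, a self-limiting Hebbian weight rule plus homeostatic adaptation of each neuron's bias, can be sketched on a single rate neuron. Oja's rule stands in below for the paper's flux rule as a generic self-limiting Hebbian update; the target rate, learning rates, and input statistics are all illustrative assumptions.

```python
import math
import random

random.seed(2)

def rate(v, bias):
    """Nonlinear input-output function with an adaptable bias (offset)."""
    return 1.0 / (1.0 + math.exp(-(v - bias)))

TARGET, ETA_W, ETA_B = 0.3, 0.05, 0.05
w = [random.uniform(0.0, 1.0) for _ in range(4)]
bias = 0.0
rates = []
for _ in range(3000):
    x = [random.random() for _ in range(4)]          # stand-in presynaptic rates
    y = rate(sum(wi * xi for wi, xi in zip(w, x)), bias)
    # Oja-style self-limiting Hebbian update: the -y*w term bounds weight growth
    # without an explicit cap (a generic substitute for the flux rule).
    w = [wi + ETA_W * y * (xi - y * wi) for wi, xi in zip(w, x)]
    # Homeostatic adaptation of the bias regulates the mean firing rate.
    bias += ETA_B * (y - TARGET)
    rates.append(y)

mean_rate = sum(rates[-1000:]) / 1000
```

The weights stay bounded and the time-averaged rate settles near the homeostatic target, illustrating how plasticity and homeostasis can coexist stably.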

  19. Thermodynamic efficiency of learning a rule in neural networks

    Science.gov (United States)

    Goldt, Sebastian; Seifert, Udo

    2017-11-01

    Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.
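The teacher-student setup analysed here is easy to reproduce: a student network sees labelled examples generated by a fixed teacher and must generalise to new inputs. The sketch below implements online Perceptron learning, one of the three algorithms the paper analyses; the dimension, example count, and update scaling are illustrative, and no thermodynamic quantities are computed.

```python
import random

random.seed(3)
N = 20
teacher = [random.choice([-1.0, 1.0]) for _ in range(N)]  # the rule to be learned
student = [0.0] * N

def label(w, x):
    """Binary classification by the sign of the weighted sum."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# Online Perceptron learning: update only on examples the student misclassifies.
for _ in range(3000):
    x = [random.gauss(0.0, 1.0) for _ in range(N)]
    y = label(teacher, x)
    if label(student, x) != y:
        student = [wi + y * xi / N for wi, xi in zip(student, x)]

# Generalisation: agreement with the teacher on fresh, previously unseen inputs.
test_set = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(500)]
accuracy = sum(label(student, x) == label(teacher, x) for x in test_set) / len(test_set)
```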

  20. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    Science.gov (United States)

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  1. Perceptron learning rule derived from spike-frequency adaptation and spike-time-dependent plasticity.

    Science.gov (United States)

    D'Souza, Prashanth; Liu, Shih-Chii; Hahnloser, Richard H R

    2010-03-09

    It is widely believed that sensory and motor processing in the brain is based on simple computational primitives rooted in cellular and synaptic physiology. However, many gaps remain in our understanding of the connections between neural computations and biophysical properties of neurons. Here, we show that synaptic spike-time-dependent plasticity (STDP) combined with spike-frequency adaptation (SFA) in a single neuron together approximate the well-known perceptron learning rule. Our calculations and integrate-and-fire simulations reveal that delayed inputs to a neuron endowed with STDP and SFA precisely instruct neural responses to earlier arriving inputs. We demonstrate this mechanism on a developmental example of auditory map formation guided by visual inputs, as observed in the external nucleus of the inferior colliculus (ICX) of barn owls. The interplay of SFA and STDP in model ICX neurons precisely transfers the tuning curve from the visual modality onto the auditory modality, demonstrating a useful computation for multimodal and sensory-guided processing.

  2. Unsupervised Learning of Digit Recognition Using Spike-Timing-Dependent Plasticity

    Directory of Open Access Journals (Sweden)

    Peter U. Diehl

    2015-08-01

    Full Text Available In order to understand how the mammalian neocortex is performing computations, two things are necessary: we need to have a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there has been increasing interest in how spiking neural networks (SNN) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to a SNN. We present a SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e. conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks.
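The unsupervised weight updates in such networks are typically pair-based STDP: causal pre-before-post spike pairs strengthen a synapse, anti-causal pairs weaken it. The sketch below shows a generic soft-bounded pair rule, not the specific time-dependent variant used in this paper; time constants and amplitudes are illustrative.

```python
import math

TAU_PRE, TAU_POST = 20.0, 20.0     # trace time constants (ms)
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
W_MAX = 1.0

def stdp_weight_change(spike_pairs, w=0.5):
    """Apply pair-based STDP for a list of (t_pre, t_post) spike-time pairs."""
    for t_pre, t_post in spike_pairs:
        dt = t_post - t_pre
        if dt >= 0:   # pre before post: potentiate, soft-bounded toward W_MAX
            w += A_PLUS * math.exp(-dt / TAU_PRE) * (W_MAX - w)
        else:         # post before pre: depress, soft-bounded toward zero
            w -= A_MINUS * math.exp(dt / TAU_POST) * w
        w = min(max(w, 0.0), W_MAX)
    return w

# Repeated causal pairings strengthen the synapse; anti-causal pairings weaken it.
w_causal = stdp_weight_change([(0.0, 10.0)] * 20)
w_anticausal = stdp_weight_change([(10.0, 0.0)] * 20)
```

No label or teaching signal appears anywhere in the update, which is what makes the learning scheme described above unsupervised.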

  3. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2016-09-14

    The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including a statistical method (autoregressive integrated moving average), feedforward neural networks (time delay neural network and online sequential extreme learning machine), and recurrent neural networks (long short-term memory and echo-state networks), on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.
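Two of the properties highlighted here, continuous online learning and maintaining multiple predictions for branching sequences, can be demonstrated with a much simpler stand-in than HTM itself. The toy predictor below counts continuations of fixed-length contexts; it is an illustration of those properties, not the HTM algorithm, and the class and method names are invented for this sketch.

```python
from collections import defaultdict

class OnlineSequencePredictor:
    """Counts continuations of fixed-length contexts, learning online from a
    stream. predict() returns ALL plausible next symbols, mirroring the
    'multiple predictions' property discussed above (a toy stand-in for HTM)."""

    def __init__(self, order=2):
        self.order = order
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = []

    def observe(self, symbol):
        """Online update: record which symbol followed the current context."""
        if len(self.history) >= self.order:
            ctx = tuple(self.history[-self.order:])
            self.counts[ctx][symbol] += 1
        self.history.append(symbol)

    def predict(self, context=None):
        """Return every symbol ever seen after the given (or current) context."""
        ctx = tuple(context if context is not None else self.history[-self.order:])
        return set(self.counts.get(ctx, {}))

p = OnlineSequencePredictor(order=2)
# Two branching sequences share the prefix A, B: one continues with C, the other with D.
for seq in ["ABC", "ABD"] * 10:
    for s in seq:
        p.observe(s)
predictions = p.predict(["A", "B"])  # both branches remain predicted
```

Only once a disambiguating symbol actually arrives would a downstream consumer of `predictions` commit to one branch, which is the behaviour the abstract attributes to HTM's sparse temporal codes.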

  4. Is there a digital generation gap for e-learning in plastic surgery?

    Science.gov (United States)

    Stevens, Roger J G; Hamilton, Neil M

    2012-01-01

    Some authors have claimed that those plastic surgeons born between 1965 and 1979 (generation X, or Gen-X) are more technologically able than those born between 1946 and 1964 (Baby Boomers, or BB). Those born after 1980, who comprise generation Y (Gen-Y), might be the most technologically able and the most demanding for electronic learning (e-learning) to support their education and training in plastic surgery. These differences might represent a "digital generation gap" and would have practical and financial implications for the development of e-learning. The aim of this study was to survey plastic surgeons on their experience and preferences in e-learning in plastic surgery and to establish whether there was a difference between generations. An online survey (e-survey) of plastic surgeons within the UK and Ireland was used for this study. In all, 624 plastic surgeons were invited by e-mail to complete an e-survey anonymously on their experience of e-learning in plastic surgery, whether they would like access to e-learning and, if so, whether this should be provided nationally, locally, or not at all. By stratifying plastic surgeons into three generations (BB, Gen-X, and Gen-Y), the responses between generations were compared using the χ²-test for linear trend. A p value of less than 0.05 was considered statistically significant; no significant differences between the generations were found. These findings refute the claim that there are differences in the experience of e-learning of plastic surgeons by generation. Furthermore, there is no evidence that there are differences in whether there should be access to e-learning and how e-learning should be provided for different generations of plastic surgeons. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  5. Reversal of long-term potentiation-like plasticity processes after motor learning disrupts skill retention.

    Science.gov (United States)

    Cantarero, Gabriela; Lloyd, Ashley; Celnik, Pablo

    2013-07-31

    Plasticity of synaptic connections in the primary motor cortex (M1) is thought to play an essential role in learning and memory. Human and animal studies have shown that motor learning results in long-term potentiation (LTP)-like plasticity processes, namely potentiation of M1 and a temporary occlusion of additional LTP-like plasticity. Moreover, biochemical processes essential for LTP are also crucial for certain types of motor learning and memory. Thus, it has been speculated that the occlusion of LTP-like plasticity after learning, indicative of how much LTP was used to learn, is essential for retention. Here we provide supporting evidence for this in humans. Induction of LTP-like plasticity can be abolished using a depotentiation protocol (DePo) consisting of brief continuous theta burst stimulation. We used transcranial magnetic stimulation to assess whether application of DePo over M1 after motor learning affected (1) occlusion of LTP-like plasticity and (2) retention of motor skill learning. We found that the magnitude of motor memory retention is proportional to the magnitude of occlusion of LTP-like plasticity. Moreover, DePo stimulation over M1, but not over a control site, reversed the occlusion of LTP-like plasticity induced by motor learning and disrupted skill retention relative to control subjects. Altogether, these results provide evidence of a link between occlusion of LTP-like plasticity and retention and suggest that this measure could be used as a biomarker to predict retention. Importantly, attempts to reverse the occlusion of LTP-like plasticity after motor learning come at the cost of reducing retention of motor learning.

  6. Finite time convergent learning law for continuous neural networks.

    Science.gov (United States)

    Chairez, Isaac

    2014-02-01

    This paper addresses the design of a discontinuous finite time convergent learning law for neural networks with continuous dynamics. The neural network was used here to obtain a non-parametric model for uncertain systems described by a set of ordinary differential equations. The source of uncertainties was the presence of some external perturbations and poor knowledge of the nonlinear function describing the system dynamics. A new adaptive algorithm based on discontinuous algorithms was used to adjust the weights of the neural network. The adaptive algorithm was derived by means of a non-standard Lyapunov function that is lower semi-continuous and differentiable in almost the whole space. A compensator term was included in the identifier to reject some specific perturbations using a nonlinear robust algorithm. Two numerical examples demonstrated the improvements achieved by the learning algorithm introduced in this paper compared to classical schemes with continuous learning methods. The first one dealt with a benchmark problem used in the paper to explain how the discontinuous learning law works. The second one used the methane production model to show the benefits in engineering applications of the learning law proposed in this paper. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Learning-induced pattern classification in a chaotic neural network

    International Nuclear Information System (INIS)

    Li, Yang; Zhu, Ping; Xie, Xiaoping; He, Guoguang; Aihara, Kazuyuki

    2012-01-01

    In this Letter, we propose a Hebbian learning rule with passive forgetting (HLRPF) for use in a chaotic neural network (CNN). We then define indices based on the Euclidean distance to investigate the evolution of the weights in a simplified way. Numerical simulations demonstrate that, under suitable external stimulation, the CNN with the proposed HLRPF acts as a fuzzy-like pattern classifier that performs much better than an ordinary CNN. The results imply a relationship between learning and recognition. -- Highlights: ► Proposing a Hebbian learning rule with passive forgetting (HLRPF). ► Defining indices to investigate the evolution of the weights in a simplified way. ► The chaotic neural network with HLRPF acts as a fuzzy-like pattern classifier. ► The pattern-classification ability of the network is much improved.
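
    As a rough sketch of this family of rules (generic Hebbian growth plus passive decay, not the exact HLRPF of the paper; the constants are illustrative), the weight update can be written as Δw = η·y·xᵀ − λ·w, where the decay term is the "passive forgetting":

    ```python
    import numpy as np

    def hebbian_forget_step(w, x, y, eta=0.1, lam=0.01):
        """Hebbian growth with passive forgetting: weights increase with the
        pre/post-synaptic correlation y*x and decay toward zero at rate lam,
        which keeps pure Hebbian growth from running away."""
        return w + eta * np.outer(y, x) - lam * w

    x = np.array([1.0, 0.0, 1.0])        # presynaptic pattern
    y = np.array([1.0, 1.0])             # postsynaptic activity
    w = np.zeros((2, 3))
    for _ in range(1000):
        w = hebbian_forget_step(w, x, y)
    # with fixed x and y, the iteration converges geometrically to the
    # fixed point w* = (eta / lam) * outer(y, x)
    ```

    The decay term bounds the weights: without it, repeated pairing of the same pattern would grow `w` without limit.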

  8. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    Science.gov (United States)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Which input parameters have the greatest impact on the model's prediction is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
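
    One common concrete choice for the two-dimensional mapping the abstract alludes to is PCA via the singular value decomposition (a sketch of one such technique; the project itself may use others, e.g. t-SNE):

    ```python
    import numpy as np

    def project_2d(X):
        """Reduce an (n_samples, n_features) dataset to two dimensions with
        PCA: center the data, take the top two right singular vectors, and
        project onto them for plotting."""
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:2].T

    rng = np.random.default_rng(1)
    # 200 points that really live on a noisy 2-D plane inside 5-D space
    latent = rng.standard_normal((200, 2))
    X = latent @ rng.standard_normal((2, 5)) + 0.01 * rng.standard_normal((200, 5))
    Y = project_2d(X)            # shape (200, 2), ready for a scatter plot
    ```

    The two projected coordinates are uncorrelated by construction, which is what makes the resulting scatter plot easy to read.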

  9. Biologically-inspired Learning in Pulsed Neural Networks

    DEFF Research Database (Denmark)

    Lehmann, Torsten; Woodburn, Robin

    1999-01-01

    Self-learning chips to implement many popular ANN (artificial neural network) algorithms are very difficult to design. We explain why this is so and say what lessons previous work teaches us in the design of self-learning systems. We offer a contribution to the `biologically-inspired' approach, explaining what we mean by this term and providing an example of a robust, self-learning design that can solve simple classical-conditioning tasks. We give details of the design of individual circuits to perform component functions, which can then be combined into a network to solve the task. We argue…

  10. Epigenetic learning in non-neural organisms

    Indian Academy of Sciences (India)

    Prakash

    2008-09-19

    … neurobiology and psychology directly implies latency and learning. However … The notion of cell memory is important in studies of cell biology and … Paramecium following induction of new phenotypes by various physical …

  11. The neural cell adhesion molecule-derived peptide FGL facilitates long-term plasticity in the dentate gyrus in vivo

    DEFF Research Database (Denmark)

    Dallérac, Glenn; Zerwas, Meike; Novikova, Tatiana

    2011-01-01

    The neural cell adhesion molecule (NCAM) is known to play a role in developmental and structural processes but also in synaptic plasticity and memory of the adult animal. Recently, FGL, an NCAM mimetic peptide that binds to the Fibroblast Growth Factor Receptor 1 (FGFR-1), has been shown to have… a beneficial impact on normal memory functioning, as well as to rescue some pathological cognitive impairments. Whether its facilitating impact may be mediated through promoting neuronal plasticity is not known. The present study was therefore designed to test whether FGL modulates the induction…

  12. Neural Correlates of Morphology Acquisition through a Statistical Learning Paradigm.

    Science.gov (United States)

    Sandoval, Michelle; Patterson, Dianne; Dai, Huanping; Vance, Christopher J; Plante, Elena

    2017-01-01

    The neural basis of statistical learning as it occurs over time was explored with stimuli drawn from a natural language (Russian nouns). The input reflected the "rules" for marking categories of gendered nouns, without making participants explicitly aware of the nature of what they were to learn. Participants were scanned while listening to a series of gender-marked nouns during four sequential scans, and were tested for their learning immediately after each scan. Although participants were not told the nature of the learning task, they exhibited learning after their initial exposure to the stimuli. Independent component analysis of the brain data revealed five task-related sub-networks. Unlike prior statistical learning studies of word segmentation, this morphological learning task robustly activated the inferior frontal gyrus during the learning period. This region was represented in multiple independent components, suggesting it functions as a network hub for this type of learning. Moreover, the results suggest that subnetworks activated by statistical learning are driven by the nature of the input, rather than reflecting a general statistical learning system.

  13. Pannexin1 stabilizes synaptic plasticity and is needed for learning.

    Directory of Open Access Journals (Sweden)

    Nora Prochnow

    Full Text Available Pannexin 1 (Panx1) represents a class of vertebrate membrane channels, bearing significant sequence homology with the invertebrate gap junction proteins, the innexins, and more distant similarities in membrane topology and pharmacological sensitivity with gap junction proteins of the connexin family. In the nervous system, cooperation among pannexin channels, adenosine receptors, and K(ATP) channels modulating neuronal excitability via ATP and adenosine has been recognized, but little is known about its significance in vivo. However, the localization of Panx1 at postsynaptic sites in hippocampal neurons and astrocytes in close proximity, together with the fundamental role of ATP and adenosine in CNS metabolism and cell signaling, underscores the potential relevance of this channel to synaptic plasticity and higher brain functions. Here, we report increased excitability and potently enhanced early and persistent LTP responses in the CA1 region of acute slice preparations from adult Panx1(-/-) mice. Adenosine application and N-methyl-D-aspartate receptor (NMDAR) blockade normalized this phenotype, suggesting that absence of Panx1 causes chronic extracellular ATP/adenosine depletion, thus facilitating postsynaptic NMDAR activation. Compensatory transcriptional up-regulation of metabotropic glutamate receptor 4 (grm4) accompanies these adaptive changes. The physiological modification, promoted by loss of Panx1, led to distinct behavioral alterations, enhancing anxiety and impairing object recognition and spatial learning in Panx1(-/-) mice. We conclude that ATP release through Panx1 channels plays a critical role in maintaining synaptic strength and plasticity in CA1 neurons of the adult hippocampus. This result provides the rationale for in-depth analysis of Panx1 function and adenosine-based therapies in CNS disorders.

  14. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Directory of Open Access Journals (Sweden)

    Yoonsik Shim

    2016-10-01

    Full Text Available We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  16. Differential theory of learning for efficient neural network pattern recognition

    Science.gov (United States)

    Hampshire, John B., II; Vijaya Kumar, Bhagavatula

    1993-09-01

    We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.

  17. Executive functions in mild cognitive impairment: emergence and breakdown of neural plasticity.

    Science.gov (United States)

    Clément, Francis; Gauthier, Serge; Belleville, Sylvie

    2013-05-01

    Our goal was to test the effect of disease severity on the brain activation associated with two executive processes: manipulation and divided attention. This was achieved by administering a manipulation task and a divided attention task using functional magnetic resonance imaging to 24 individuals with mild cognitive impairment (MCI) and 14 healthy controls matched for age, sex and education. The Mattis Dementia Rating Scale was used to divide persons with MCI into those with better and worse cognitive performance. Both tasks were associated with more brain activation in the MCI group with higher cognition than in healthy controls, particularly in the left frontal areas. Correlational analyses indicated that greater activation in a frontostriatal network hyperactivated by the higher-cognition group was related to better task performance, suggesting that these activations may support functional reorganization of a compensatory nature. By contrast, the lower-cognition group failed to show greater cerebral hyperactivation than controls during the divided attention task and, during the manipulation task, showed less brain activation than controls in the left ventrolateral cortex, a region commonly hypoactivated in patients with Alzheimer's disease. These findings indicate that, during the early phase of MCI, executive functioning benefits from neural reorganization, but that a breakdown of this brain plasticity characterizes the late stages of MCI. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Media Multitasking and Cognitive, Psychological, Neural, and Learning Differences.

    Science.gov (United States)

    Uncapher, Melina R; Lin, Lin; Rosen, Larry D; Kirkorian, Heather L; Baron, Naomi S; Bailey, Kira; Cantor, Joanne; Strayer, David L; Parsons, Thomas D; Wagner, Anthony D

    2017-11-01

    American youth spend more time with media than any other waking activity: an average of 7.5 hours per day, every day. On average, 29% of that time is spent juggling multiple media streams simultaneously (ie, media multitasking). This phenomenon is not limited to American youth but is paralleled across the globe. Given that a large number of media multitaskers (MMTs) are children and young adults whose brains are still developing, there is great urgency to understand the neurocognitive profiles of MMTs. It is critical to understand the relation between the relevant cognitive domains and underlying neural structure and function. Of equal importance is understanding the types of information processing that are necessary in 21st century learning environments. The present review surveys the growing body of evidence demonstrating that heavy MMTs show differences in cognition (eg, poorer memory), psychosocial behavior (eg, increased impulsivity), and neural structure (eg, reduced volume in anterior cingulate cortex). Furthermore, research indicates that multitasking with media during learning (in class or at home) can negatively affect academic outcomes. Until the direction of causality is understood (whether media multitasking causes such behavioral and neural differences or whether individuals with such differences tend to multitask with media more often), the data suggest that engagement with concurrent media streams should be thoughtfully considered. Findings from such research promise to inform policy and practice on an increasingly urgent societal issue while significantly advancing our understanding of the intersections between cognitive, psychosocial, neural, and academic factors. Copyright © 2017 by the American Academy of Pediatrics.

  19. Bio-Inspired Neural Model for Learning Dynamic Models

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  20. Modeling gravity-dependent plasticity of the angular vestibuloocular reflex with a physiologically based neural network.

    Science.gov (United States)

    Xiang, Yongqing; Yakushin, Sergei B; Cohen, Bernard; Raphan, Theodore

    2006-12-01

    A neural network model was developed to explain the gravity-dependent properties of gain adaptation of the angular vestibuloocular reflex (aVOR). Gain changes are maximal at the head orientation where the gain is adapted and decrease as the head is tilted away from that position and can be described by the sum of gravity-independent and gravity-dependent components. The adaptation process was modeled by modifying the weights and bias values of a three-dimensional physiologically based neural network of canal-otolith-convergent neurons that drive the aVOR. Model parameters were trained using experimental vertical aVOR gain values. The learning rule aimed to reduce the error between eye velocities obtained from experimental gain values and model output in the position of adaptation. Although the model was trained only at specific head positions, the model predicted the experimental data at all head positions in three dimensions. Altering the relative learning rates of the weights and bias improved the model-data fits. Model predictions in three dimensions compared favorably with those of a double-sinusoid function, which is a fit that minimized the mean square error at every head position and served as the standard by which we compared the model predictions. The model supports the hypothesis that gravity-dependent adaptation of the aVOR is realized in three dimensions by a direct otolith input to canal-otolith neurons, whose canal sensitivities are adapted by the visual-vestibular mismatch. The adaptation is tuned by how the weights from otolith input to the canal-otolith-convergent neurons are adapted for a given head orientation.

  1. Neural correlates of learning to attend

    Directory of Open Access Journals (Sweden)

    Todd A Kelley

    2010-11-01

    Full Text Available Recent work has shown that training can improve attentional focus. Little is known, however, about how training in attention and multitasking affects the brain. We used functional magnetic resonance imaging (fMRI to measure changes in cortical responses to distracting stimuli during training on a visual categorization task. Training led to a reduction in behavioural distraction effects, and these improvements in performance generalized to untrained conditions. Although large regions of early visual and posterior parietal cortices responded to the presence of distractors, these regions did not exhibit significant changes in their response following training. In contrast, middle frontal gyrus did exhibit decreased distractor-related responses with practice, showing the same trend as behaviour for previously observed distractor locations. However, the neural response in this region diverged from behaviour for novel distractor locations, showing greater activity. We conclude that training did not change the robustness of the initial sensory response, but led to increased efficiency in late-stage filtering in the trained conditions.

  2. Self-teaching neural network learns difficult reactor control problem

    International Nuclear Information System (INIS)

    Jouse, W.C.

    1989-01-01

    A self-teaching neural network used as an adaptive controller quickly learns to control an unstable reactor configuration. The network models the behavior of a human operator. It is trained by allowing it to operate the reactivity control impulsively. It is punished whenever either the power or fuel temperature stray outside technical limits. Using a simple paradigm, the network constructs an internal representation of the punishment and of the reactor system. The reactor is constrained to small power orbits
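
    The punish-on-limit-violation idea can be caricatured in a few lines (a generic illustration of punishment-driven gain adaptation, not the paper's network; the scalar plant and all constants are invented):

    ```python
    # Unstable scalar "plant" x' = a*x + u, steered by a learned gain k.
    # Whenever the state strays outside the allowed band (the technical
    # limit), the controller is "punished" and strengthens its response,
    # until the closed loop x' = (a - k) * x becomes stable.
    a, dt, limit = 0.5, 0.1, 1.0
    k, x = 0.0, 0.5
    for _ in range(2000):
        u = -k * x                        # proportional control action
        x += dt * (a * x + u)             # Euler step of the plant
        if abs(x) >= limit:               # limit violated: punishment signal
            k += 0.05                     # learn a stronger correction
            x = max(-limit, min(limit, x))
    stable = k > a                        # loop now decays toward x = 0
    ```

    The punishment signal alone, with no model of the plant, is enough to drive the gain past the instability threshold, after which the state settles back inside the limits.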

  3. Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks

    OpenAIRE

    Shen, Li; Lin, Zhouchen; Huang, Qingming

    2015-01-01

    Learning deeper convolutional neural networks has become a trend in recent years. However, much empirical evidence suggests that performance improvement cannot be gained by simply stacking more layers. In this paper, we consider the issue from an information-theoretical perspective and propose a novel method, Relay Backpropagation, that encourages the propagation of effective information through the network in the training stage. By virtue of the method, we achieved the first place in ILSVRC 2015...

  4. Neural network representation and learning of mappings and their derivatives

    Science.gov (United States)

    White, Halbert; Hornik, Kurt; Stinchcombe, Maxwell; Gallant, A. Ronald

    1991-01-01

    Discussed here are recent theorems proving that artificial neural networks are capable of approximating an arbitrary mapping and its derivatives as accurately as desired. This fact forms the basis for further results establishing the learnability of the desired approximations, using results from non-parametric statistics. These results have potential applications in robotics, chaotic dynamics, control, and sensitivity analysis. An example involving learning the transfer function and its derivatives for a chaotic map is discussed.
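
    For a one-hidden-layer tanh network the derivative is available in closed form, which is what makes "approximating a mapping and its derivatives" concrete: if f(x) = Σᵢ vᵢ tanh(wᵢx + bᵢ), then f'(x) = Σᵢ vᵢwᵢ(1 − tanh²(wᵢx + bᵢ)). The weights below are hand-picked for illustration, not a trained approximation:

    ```python
    import numpy as np

    # One-hidden-layer tanh network with three hidden units.
    w = np.array([1.0, -2.0, 0.5])   # input-to-hidden weights
    b = np.array([0.0, 0.3, -0.1])   # hidden biases
    v = np.array([0.7, 0.2, -1.1])   # hidden-to-output weights

    def f(x):
        return v @ np.tanh(w * x + b)

    def f_prime(x):
        # exact derivative: d/dx tanh(z) = 1 - tanh(z)**2
        return (v * w) @ (1.0 - np.tanh(w * x + b) ** 2)

    # sanity check against a central finite difference
    h = 1e-5
    fd = (f(0.4 + h) - f(0.4 - h)) / (2 * h)
    ```

    Because the derivative is exact, a network fitted to a mapping can be differentiated analytically rather than by finite differences, which is what applications like sensitivity analysis exploit.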

  5. Self-learning Monte Carlo with deep neural networks

    Science.gov (United States)

    Shen, Huitao; Liu, Junwei; Fu, Liang

    2018-05-01

    The self-learning Monte Carlo (SLMC) method is a general algorithm to speed up MC simulations. Its efficiency has been demonstrated in various systems by introducing an effective model to propose global moves in the configuration space. In this paper, we show that deep neural networks can be naturally incorporated into SLMC and, without any prior knowledge, can learn the original model accurately and efficiently. Demonstrated in quantum impurity models, we reduce the complexity of a local update from O(β²) in the Hirsch-Fye algorithm to O(β ln β), which is a significant speedup, especially for systems at low temperatures.

  6. Asymmetric Variate Generation via a Parameterless Dual Neural Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Simone Fiori

    2008-01-01

    Full Text Available In a previous work (S. Fiori, 2006), we proposed a random number generator based on a tunable non-linear neural system, whose learning rule is designed on the basis of a cardinal equation from statistics and whose implementation is based on look-up tables (LUTs). The aim of the present manuscript is to improve the above-mentioned random number generation method by changing the learning principle, while retaining the efficient LUT-based implementation. The new method proposed here proves easier to implement and relaxes some previous limitations.

  7. Learning speaker-specific characteristics with a deep neural architecture.

    Science.gov (United States)

    Chen, Ke; Salman, Ahmad

    2011-11-01

    Speech signals convey various yet mixed information ranging from linguistic to speaker-specific information. However, most acoustic representations characterize all these kinds of information as a whole, which could hinder either a speech or a speaker recognition (SR) system from achieving better performance. In this paper, we propose a novel deep neural architecture (DNA) especially for learning speaker-specific characteristics from mel-frequency cepstral coefficients, an acoustic representation commonly used in both speech recognition and SR, which results in a speaker-specific overcomplete representation. In order to learn intrinsic speaker-specific characteristics, we come up with an objective function consisting of contrastive losses in terms of speaker similarity/dissimilarity and data reconstruction losses used as regularization to normalize the interference of non-speaker-related information. Moreover, we employ a hybrid learning strategy for learning the parameters of the deep neural networks: local yet greedy layerwise unsupervised pretraining for initialization and global supervised learning for the ultimate discriminative goal. With four Linguistic Data Consortium (LDC) benchmarks and two non-English corpora, we demonstrate that our overcomplete representation is robust in characterizing various speakers, no matter whether their utterances have been used in training our DNA, and highly insensitive to text and languages spoken. Extensive comparative studies suggest that our approach yields favorable results in speaker verification and segmentation. Finally, we discuss several issues concerning our proposed approach.

  8. Neural Basis of Reinforcement Learning and Decision Making

    Science.gov (United States)

    Lee, Daeyeol; Seo, Hyojung; Jung, Min Whan

    2012-01-01

    Reinforcement learning is an adaptive process in which an animal utilizes its previous experience to improve the outcomes of future choices. Computational theories of reinforcement learning play a central role in the newly emerging areas of neuroeconomics and decision neuroscience. In this framework, actions are chosen according to their value functions, which describe how much future reward is expected from each action. Value functions can be adjusted not only through reward and penalty, but also by the animal’s knowledge of its current environment. Studies have revealed that a large proportion of the brain is involved in representing and updating value functions and using them to choose an action. However, how the nature of a behavioral task affects the neural mechanisms of reinforcement learning remains incompletely understood. Future studies should uncover the principles by which different computational elements of reinforcement learning are dynamically coordinated across the entire brain. PMID:22462543
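
    The value-function adjustment through reward and penalty described above is concretely realized in temporal-difference rules such as Q-learning (a standard textbook sketch, not code from the reviewed studies):

    ```python
    import numpy as np

    def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
        """One Q-learning step: move the value of the chosen action toward
        the received reward plus the discounted value of the best action
        available in the next state."""
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        return Q

    Q = np.zeros((2, 2))                      # 2 states x 2 actions
    Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)   # one rewarded transition
    ```

    Repeating such updates over experienced transitions propagates reward information backward through the state space, which is how the animal's "knowledge of its current environment" can reshape value functions beyond the immediate reward.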

  9. Outsmarting neural networks: an alternative paradigm for machine learning

    Energy Technology Data Exchange (ETDEWEB)

    Protopopescu, V.; Rao, N.S.V.

    1996-10-01

    We address three problems in machine learning, namely: (i) function learning, (ii) regression estimation, and (iii) sensor fusion, in the Probably Approximately Correct (PAC) framework. We show that, under certain conditions, one can reduce the three problems above to regression estimation. The latter is usually tackled with artificial neural networks (ANNs) that satisfy the PAC criteria but have high computational complexity. We propose several computationally efficient PAC alternatives to ANNs for solving the regression estimation problem. Thereby we also provide efficient PAC solutions to the function learning and sensor fusion problems. The approach is based on cross-fertilizing concepts and methods from statistical estimation, nonlinear algorithms, and the theory of computational complexity, and is designed as part of a new, coherent paradigm for machine learning.

  10. Stochastic sensitivity analysis and Langevin simulation for neural network learning

    International Nuclear Information System (INIS)

    Koda, Masato

    1997-01-01

    A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method.
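    The central idea, recovering a learning signal from the correlation between injected noise and the change in the performance functional with no adjoint (back-propagation) pass, can be sketched as follows. The quadratic loss and all constants are illustrative assumptions; averaging many noisy estimates recovers the true gradient in expectation:

```python
import random

def noise_gradient_estimate(w, loss, sigma=0.1):
    """Correlate injected Gaussian noise with the resulting change in the
    performance functional; no adjoint/back-propagation pass is needed."""
    noise = [random.gauss(0.0, sigma) for _ in w]
    delta = loss([wi + ni for wi, ni in zip(w, noise)]) - loss(w)
    return [delta / sigma**2 * ni for ni in noise]

random.seed(1)
w = [2.0, -1.0]
loss = lambda v: sum(x * x for x in v)  # true gradient at w: (4, -2)
est = [0.0, 0.0]
n = 20000
for _ in range(n):
    g = noise_gradient_estimate(w, loss)
    est = [e + gi / n for e, gi in zip(est, g)]
print(est)  # ≈ [4, -2]
```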

  11. Image Classification, Deep Learning and Convolutional Neural Networks : A Comparative Study of Machine Learning Frameworks

    OpenAIRE

    Airola, Rasmus; Hager, Kristoffer

    2017-01-01

    The use of machine learning and specifically neural networks is a growing trend in software development, and has grown immensely in the last couple of years in the light of an increasing need to handle big data and large information flows. Machine learning has a broad area of application, such as human-computer interaction, predicting stock prices, real-time translation, and self-driving vehicles. Large companies such as Microsoft and Google have already implemented machine learning in some o...

  12. Functional consequences of experience-dependent plasticity on tactile perception following perceptual learning.

    Science.gov (United States)

    Trzcinski, Natalie K; Gomez-Ramirez, Manuel; Hsiao, Steven S

    2016-09-01

    Continuous training enhances perceptual discrimination and promotes neural changes in areas encoding the experienced stimuli. This type of experience-dependent plasticity has been demonstrated in several sensory and motor systems. Particularly, non-human primates trained to detect consecutive tactile bar indentations across multiple digits showed expanded excitatory receptive fields (RFs) in somatosensory cortex. However, the perceptual implications of these anatomical changes remain undetermined. Here, we trained human participants for 9 days on a tactile task that promoted expansion of multi-digit RFs. Participants were required to detect consecutive indentations of bar stimuli spanning multiple digits. Throughout the training regime we tracked participants' discrimination thresholds on spatial (grating orientation) and temporal tasks on the trained and untrained hands in separate sessions. We hypothesized that training on the multi-digit task would decrease perceptual thresholds on tasks that require stimulus processing across multiple digits, while also increasing thresholds on tasks requiring discrimination on single digits. We observed an increase in orientation thresholds on a single digit. Importantly, this effect was selective for the stimulus orientation and hand used during multi-digit training. We also found that temporal acuity between digits improved across trained digits, suggesting that discriminating the temporal order of multi-digit stimuli can transfer to temporal discrimination of other tactile stimuli. These results suggest that experience-dependent plasticity following perceptual learning improves and interferes with tactile abilities in manners predictive of the task and stimulus features used during training. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Transfer functions for protein signal transduction: application to a model of striatal neural plasticity.

    Directory of Open Access Journals (Sweden)

    Gabriele Scheler

    Full Text Available We present a novel formulation for biochemical reaction networks in the context of protein signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of "source" species, which are interpreted as input signals. Signals are transmitted to all other species in the system (the "target" species with a specific delay and with a specific transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and even recalled to build dynamical models on the basis of state changes. By separating the temporal and the magnitudinal domain we can greatly simplify the computational model, circumventing typical problems of complex dynamical systems. The transfer function transformation of biochemical reaction systems can be applied to mass-action kinetic models of signal transduction. The paper shows that this approach yields significant novel insights while remaining a fully testable and executable dynamical model for signal transduction. In particular we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modularizations that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. Remarkably, we found that overall interconnectedness depends on the magnitude of inputs, with higher connectivity at low input concentrations and significant modularization at moderate to high input concentrations. This general result, which directly follows from the properties of
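    The paper's transfer functions assign each source-to-target pair a delay (the time to reach a new stable equilibrium) and a transmission strength (the equilibrium concentration change). A toy version for a single hypothetical one-step reaction, not the striatal model itself and with illustrative rate constants, reads both quantities off a simple integration:

```python
def transfer_function(s, k_on=1.0, k_off=0.5, dt=0.01, tol=0.01, t_max=100.0):
    """For the toy kinetics dx/dt = k_on*s - k_off*x, return
    (transmission strength, delay): the equilibrium shift of the target
    species and the time to get within `tol` of it."""
    x, t = 0.0, 0.0
    eq = k_on * s / k_off  # stable equilibrium of the target
    while abs(x - eq) > tol * eq and t < t_max:
        x += (k_on * s - k_off * x) * dt  # forward-Euler step
        t += dt
    return eq, round(t, 2)

print(transfer_function(2.0))  # strength 4.0, delay ≈ ln(1/tol)/k_off ≈ 9.2
```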

  14. Association of contextual cues with morphine reward increases neural and synaptic plasticity in the ventral hippocampus of rats.

    Science.gov (United States)

    Alvandi, Mina Sadighi; Bourmpoula, Maria; Homberg, Judith R; Fathollahi, Yaghoub

    2017-11-01

    Drug addiction is associated with aberrant memory and permanent functional changes in neural circuits. It is known that exposure to drugs like morphine is associated with positive emotional states and reward-related memory. However, the underlying mechanisms in terms of neural plasticity in the ventral hippocampus, a region involved in associative memory and emotional behaviors, are not fully understood. Therefore, we measured adult neurogenesis, dendritic spine density and brain-derived neurotrophic factor (BDNF) and TrkB mRNA expression as parameters for synaptic plasticity in the ventral hippocampus. Male Sprague Dawley rats were subjected to the CPP (conditioned place preference) paradigm and received 10 mg/kg morphine. Half of the rats were used to evaluate neurogenesis by immunohistochemical markers Ki67 and doublecortin (DCX). The other half was used for Golgi staining to measure spine density and real-time quantitative reverse transcription-polymerase chain reaction to assess BDNF/TrkB expression levels. We found that morphine-treated rats exhibited more place conditioning as compared with saline-treated rats and animals that were exposed to the CPP without any injections. Locomotor activity did not change significantly. Morphine-induced CPP significantly increased the number of Ki67 and DCX-labeled cells in the ventral dentate gyrus. Additionally, we found increased dendritic spine density in both CA1 and dentate gyrus and an enhancement of BDNF/TrkB mRNA levels in the whole ventral hippocampus. Ki67, DCX and spine density were significantly correlated with CPP scores. In conclusion, we show that morphine-induced reward-related memory is associated with neural and synaptic plasticity changes in the ventral hippocampus. Such neural changes could underlie context-induced drug relapse. © 2017 Society for the Study of Addiction.

  15. A saturation hypothesis to explain both enhanced and impaired learning with enhanced plasticity

    Science.gov (United States)

    Nguyen-Vu, TD Barbara; Zhao, Grace Q; Lahiri, Subhaneil; Kimpo, Rhea R; Lee, Hanmi; Ganguli, Surya; Shatz, Carla J; Raymond, Jennifer L

    2017-01-01

    Across many studies, animals with enhanced synaptic plasticity exhibit either enhanced or impaired learning, raising a conceptual puzzle: how can enhanced plasticity yield opposite learning outcomes? Here, we show that the recent history of experience can determine whether mice with enhanced plasticity exhibit enhanced or impaired learning in response to the same training. Mice with enhanced cerebellar LTD, due to double knockout (DKO) of MHCI H2-Kb/H2-Db (KbDb−/−), exhibited oculomotor learning deficits. However, the same mice exhibited enhanced learning after appropriate pre-training. Theoretical analysis revealed that synapses with history-dependent learning rules could recapitulate the data, and suggested that saturation may be a key factor limiting the ability of enhanced plasticity to enhance learning. Optogenetic stimulation designed to saturate LTD produced the same impairment in WT as observed in DKO mice. Overall, our results suggest that the recent history of activity and the threshold for synaptic plasticity conspire to effect divergent learning outcomes. DOI: http://dx.doi.org/10.7554/eLife.20147.001 PMID:28234229

  16. A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning.

    Science.gov (United States)

    Kappel, David; Legenstein, Robert; Habenschuss, Stefan; Hsieh, Michael; Maass, Wolfgang

    2018-01-01

    Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After good computational performance is reached, these ongoing processes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.

  17. Behavioral and neural plasticity caused by early social experiences: the case of the honeybee

    Directory of Open Access Journals (Sweden)

    Andrés Arenas

    2013-08-01

    Full Text Available Cognitive experiences during the early stages of life play an important role in shaping future behavior. Behavioral and neural long-term changes after early sensory and associative experiences have been recently reported in the honeybee. This invertebrate is an excellent model for assessing the role of precocious experiences on later behavior due to its extraordinarily tuned division of labor based on age polyethism. These studies are mainly focused on the role and importance of experiences that occur during the first days of the adult lifespan, their impact on foraging decisions and their contribution to coordinating food gathering. Odor-rewarded experiences during the first days of honeybee adulthood alter the responsiveness to sucrose, making young hive bees more sensitive in assessing gustatory features of the nectar brought back to the hive, and affecting the dynamics of food transfers and the propagation of food-related information within the colony as well. Early olfactory experiences lead to stable and long-term associative memories that can be successfully recalled after many days, even at foraging ages. They also improve the memorization of new associative learning events later in life. The establishment of early memories promotes stable reorganization of the olfactory circuits, inducing structural and functional changes in the antennal lobe. Early rewarded experiences have relevant consequences at the social level too, biasing dance and trophallaxis partner choice and affecting recruitment. Here, we review recent results on bees' physiology, behavior and sociobiology to depict how early experiences affect their cognitive abilities and the related neural circuits.

  18. 2D co-ordinate transformation based on a spike timing-dependent plasticity learning mechanism.

    Science.gov (United States)

    Wu, QingXiang; McGinnity, Thomas Martin; Maguire, Liam; Belatreche, Ammar; Glackin, Brendan

    2008-11-01

    In order to plan accurate motor actions, the brain needs to build an integrated spatial representation associated with visual stimuli and haptic stimuli. Since visual stimuli are represented in retina-centered co-ordinates and haptic stimuli are represented in body-centered co-ordinates, co-ordinate transformations must occur between the retina-centered co-ordinates and body-centered co-ordinates. A spiking neural network (SNN) model, which is trained with spike-timing-dependent-plasticity (STDP), is proposed to perform a 2D co-ordinate transformation of the polar representation of an arm position to a Cartesian representation, to create a virtual image map of a haptic input. Through the visual pathway, a position signal corresponding to the haptic input is used to train the SNN with STDP synapses such that after learning the SNN can perform the co-ordinate transformation to generate a representation of the haptic input with the same co-ordinates as a visual image. The model can be applied to explain co-ordinate transformation in spiking neuron based systems. The principle can be used in artificial intelligent systems to process complex co-ordinate transformations represented by biological stimuli.
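    The STDP rule that trains such synapses is typically an exponentially decaying function of the pre/post spike-time difference: pre-before-post spiking potentiates the synapse and post-before-pre depresses it. A minimal sketch (the amplitudes and time constant are common textbook values, not the paper's parameters):

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair.
    dt = t_post - t_pre in ms: positive dt potentiates, negative depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # LTP branch
    return -a_minus * math.exp(dt / tau)      # LTD branch

print(round(stdp_dw(10.0), 4))   # → 0.0607  (potentiation)
print(round(stdp_dw(-10.0), 4))  # → -0.0728 (depression)
```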

  19. Neural Correlates of Threat Perception: Neural Equivalence of Conspecific and Heterospecific Mobbing Calls Is Learned

    Science.gov (United States)

    Avey, Marc T.; Hoeschele, Marisa; Moscicki, Michele K.; Bloomfield, Laurie L.; Sturdy, Christopher B.

    2011-01-01

    Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise [1]–[2]. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators [3]. Mobbing calls produced in response to smaller, higher-threat predators contain more “D” notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators [4]. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls or heterospecific predator calls that were the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas, that threat can be conveyed by different species' signals, and that these signals must be learned. PMID:21909363

  2. Deep learning for steganalysis via convolutional neural networks

    Science.gov (United States)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis: learning features automatically via deep learning models. We propose a novel customized Convolutional Neural Network for steganalysis that can capture the complex dependencies useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.
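    The convolutional front end of such a steganalysis network can be illustrated with a single high-pass residual filter, which suppresses smooth image content so that embedding noise dominates. The pure-Python convolution and the 3×3 kernel below are an illustrative sketch, not the paper's architecture:

```python
def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation) in pure Python,
    for small nested-list images and kernels."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(img[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# High-pass residual kernel (coefficients sum to zero), so uniform
# image regions map to zero and only fine-grained deviations survive.
hp = [[-1,  2, -1],
      [ 2, -4,  2],
      [-1,  2, -1]]
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(conv2d(flat, hp))  # → [[0]] : flat content gives zero residual
```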

  3. Unified pre- and postsynaptic long-term plasticity enables reliable and flexible learning.

    Science.gov (United States)

    Costa, Rui Ponte; Froemke, Robert C; Sjöström, P Jesper; van Rossum, Mark Cw

    2015-08-26

    Although it is well known that long-term synaptic plasticity can be expressed both pre- and postsynaptically, the functional consequences of this arrangement have remained elusive. We show that spike-timing-dependent plasticity with both pre- and postsynaptic expression develops receptive fields with reduced variability and improved discriminability compared to postsynaptic plasticity alone. These long-term modifications in receptive field statistics match recent sensory perception experiments. Moreover, learning with this form of plasticity leaves a hidden postsynaptic memory trace that enables fast relearning of previously stored information, providing a cellular substrate for memory savings. Our results reveal essential roles for presynaptic plasticity that are missed when only postsynaptic expression of long-term plasticity is considered, and suggest an experience-dependent distribution of pre- and postsynaptic strength changes.

  4. Maladaptive spinal plasticity opposes spinal learning and recovery in spinal cord injury

    Science.gov (United States)

    Ferguson, Adam R.; Huie, J. Russell; Crown, Eric D.; Baumbauer, Kyle M.; Hook, Michelle A.; Garraway, Sandra M.; Lee, Kuan H.; Hoy, Kevin C.; Grau, James W.

    2012-01-01

    Synaptic plasticity within the spinal cord has great potential to facilitate recovery of function after spinal cord injury (SCI). Spinal plasticity can be induced in an activity-dependent manner even without input from the brain after complete SCI. A mechanistic basis for these effects is provided by research demonstrating that spinal synapses have many of the same plasticity mechanisms that are known to underlie learning and memory in the brain. In addition, the lumbar spinal cord can sustain several forms of learning and memory, including limb-position training. However, not all spinal plasticity promotes recovery of function. Central sensitization of nociceptive (pain) pathways in the spinal cord may emerge in response to various noxious inputs, demonstrating that plasticity within the spinal cord may contribute to maladaptive pain states. In this review we discuss interactions between adaptive and maladaptive forms of activity-dependent plasticity in the spinal cord below the level of SCI. The literature demonstrates that activity-dependent plasticity within the spinal cord must be carefully tuned to promote adaptive spinal training. Prior work from our group has shown that stimulation that is delivered in a limb position-dependent manner or on a fixed interval can induce adaptive plasticity that promotes future spinal cord learning and reduces nociceptive hyper-reactivity. On the other hand, stimulation that is delivered in an unsynchronized fashion, such as randomized electrical stimulation or peripheral skin injuries, can generate maladaptive spinal plasticity that undermines future spinal cord learning, reduces recovery of locomotor function, and promotes nociceptive hyper-reactivity after SCI. We review these basic phenomena, how these findings relate to the broader spinal plasticity literature, discuss the cellular and molecular mechanisms, and finally discuss implications of these and other findings for improved rehabilitative therapies after SCI. 

  5. A stochastic learning algorithm for layered neural networks

    International Nuclear Information System (INIS)

    Bartlett, E.B.; Uhrig, R.E.

    1992-01-01

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architectures before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.
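    A bare-bones version of the underlying random optimization step, without the paper's dynamic OPDF search, stratified sampling, or dynamic node architecture, draws a Gaussian search vector and accepts it only when it lowers the loss; the quadratic loss and fixed spread below are illustrative assumptions:

```python
import random

def random_optimize(loss, w, steps=500, sigma=1.0):
    """Basic random optimization: propose a Gaussian search vector around
    the current weights and keep it only if it improves the loss."""
    best = loss(w)
    for _ in range(steps):
        cand = [wi + random.gauss(0.0, sigma) for wi in w]
        c = loss(cand)
        if c < best:
            w, best = cand, c
    return w, best

random.seed(0)
w, best = random_optimize(lambda v: sum(x * x for x in v), [3.0, -4.0])
print(round(best, 3))  # well below the starting loss of 25.0
```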

  6. Relabeling exchange method (REM) for learning in neural networks

    Science.gov (United States)

    Wu, Wen; Mammone, Richard J.

    1994-02-01

    The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when 'optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation of the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.

  7. Comparison between extreme learning machine and wavelet neural networks in data classification

    Science.gov (United States)

    Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2017-03-01

    Extreme Learning Machine is a well-known algorithm in the field of machine learning. It is a feed-forward neural network with a single hidden layer, and an extremely fast learning algorithm with good generalization performance. In this paper, we compare the Extreme Learning Machine with wavelet neural networks, a widely used technique. We used six benchmark data sets to evaluate each method: Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition and Iris Plant. Experimental results show that both Extreme Learning Machines and wavelet neural networks achieve good results.
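    The speed of an Extreme Learning Machine comes from its structure: the hidden layer is random and never trained, and only the output weights are fit, in one shot, by least squares. A self-contained sketch (hidden-layer size, ridge term, and the XOR demo are illustrative choices, not the paper's setup):

```python
import math, random

def elm_train(X, y, hidden=8, ridge=1e-6, seed=0):
    """ELM sketch: random untrained hidden layer; only the output weights
    are fit, via regularized normal equations (no iterative training)."""
    rnd = random.Random(seed)
    d = len(X[0])
    W = [[rnd.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    b = [rnd.uniform(-1, 1) for _ in range(hidden)]
    H = [[math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bi)
          for w, bi in zip(W, b)] for x in X]
    # Solve (H^T H + ridge*I) beta = H^T y; the matrix is symmetric
    # positive definite, so plain Gaussian elimination is safe.
    A = [[sum(H[k][i] * H[k][j] for k in range(len(H))) + (ridge if i == j else 0.0)
          for j in range(hidden)] for i in range(hidden)]
    rhs = [sum(H[k][i] * y[k] for k in range(len(H))) for i in range(hidden)]
    for i in range(hidden):
        for j in range(i + 1, hidden):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * ai for a, ai in zip(A[j], A[i])]
            rhs[j] -= f * rhs[i]
    beta = [0.0] * hidden
    for i in range(hidden - 1, -1, -1):
        beta[i] = (rhs[i] - sum(A[i][j] * beta[j]
                                for j in range(i + 1, hidden))) / A[i][i]
    return W, b, beta

def elm_predict(model, x):
    W, b, beta = model
    return sum(bk * math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bi)
               for w, bi, bk in zip(W, b, beta))

# XOR demo: 4 samples, more hidden units than samples, so the random
# features can fit the targets closely.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0.0, 1.0, 1.0, 0.0]
model = elm_train(X, y)
preds = [elm_predict(model, x) for x in X]
print([round(p, 2) for p in preds])
```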

  8. Odor experiences during preimaginal stages cause behavioral and neural plasticity in adult honeybees

    Directory of Open Access Journals (Sweden)

    Gabriela Ramirez

    2016-06-01

    Full Text Available In eusocial insects, experiences acquired during development have long-term consequences on mature behavior. In the honeybee, which undergoes profound changes associated with metamorphosis, the effect of odor experiences at larval instars on the subsequent physiological and behavioral response is still unclear. To address the impact of preimaginal experiences on the adult honeybee, colonies containing larvae were fed scented food. The effect of the preimaginal experiences with the food odor was assessed in learning performance, memory retention and generalization in 3-5- and 17-19-day-old bees, in the regulation of their expression of synaptic-related genes, and in the perception and morphology of their antennae. Three- to five-day-old bees that experienced 1-hexanol (1-HEX) as food scent responded more to the presentation of the odor during 1-HEX conditioning than control bees (i.e., bees reared in colonies fed unscented food). Higher levels of PER to 1-HEX in this group also extended to HEXA, the most perceptually similar odor to the experienced one that we tested. These results were not observed in the group tested at older ages. In the brain of young adults, larval experiences triggered similar levels of neurexin and neuroligin expression, two proteins that have been implicated in synaptic formation after associative learning. At the sensory periphery, the experience did not alter the number of olfactory sensilla placodea, but did reduce the electrical response of the antennae to the experienced and novel odors. Our study provides new insight into the effects of preimaginal experiences in the honeybee and the mechanisms underlying olfactory plasticity at the larval stage of holometabolous insects.

  9. Neural Correlates of Success and Failure Signals During Neurofeedback Learning.

    Science.gov (United States)

    Radua, Joaquim; Stoica, Teodora; Scheinost, Dustin; Pittenger, Christopher; Hampson, Michelle

    2018-05-15

    Feedback-driven learning, observed across phylogeny and of clear adaptive value, is frequently operationalized in simple operant conditioning paradigms, but it can be much more complex, driven by abstract representations of success and failure. This study investigates the neural processes involved in processing success and failure during feedback learning, which are not well understood. Data analyzed were acquired during a multisession neurofeedback experiment in which ten participants were presented with, and instructed to modulate, the activity of their orbitofrontal cortex with the aim of decreasing their anxiety. We assessed the regional blood-oxygenation-level-dependent response to the individualized neurofeedback signals of success and failure across twelve functional runs acquired in two different magnetic resonance sessions in each of ten individuals. Neurofeedback signals of failure correlated early during learning with deactivation in the precuneus/posterior cingulate and neurofeedback signals of success correlated later during learning with deactivation in the medial prefrontal/anterior cingulate cortex. The intensity of the latter deactivations predicted the efficacy of the neurofeedback intervention in the reduction of anxiety. These findings indicate a role for regulation of the default mode network during feedback learning, and suggest a higher sensitivity to signals of failure during the early feedback learning and to signals of success subsequently. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  10. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    Science.gov (United States)

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in the process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to a history of sufficient length in the past. Apart from the inherent limitations due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters themselves being learned continuously by a neural network. The backpropagation used in training the weights is limited to a certain memory length (in the examples we consider, 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.
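The scheme described above, a stochastic model whose parameters are re-estimated continuously from a short memory window, can be caricatured without any neural network machinery. The sketch below is my own illustrative simplification, not the authors' algorithm: a single drift parameter of a plain random-walk model is updated by gradient steps over only the last 10 returns, mimicking the paper's limited memory length.

```python
def rolling_drift_forecast(prices, window=10, lr=0.5):
    """Toy sketch: re-estimate the drift of a random-walk model online,
    using only the last `window` returns (cf. the 10-day memory above).
    Returns the sequence of next-day forecasts."""
    returns = [b - a for a, b in zip(prices, prices[1:])]
    mu = 0.0  # time-dependent drift parameter, learned online
    forecasts = []
    for t in range(len(returns)):
        forecasts.append(prices[t + 1] + mu)  # forecast for day t+2
        recent = returns[max(0, t - window + 1): t + 1]
        # one gradient step on the squared forecast error over the window
        grad = sum(mu - r for r in recent) / len(recent)
        mu -= lr * grad
    return forecasts
```

On a trending series the drift estimate converges and the forecasts track the trend; on a genuinely random series it hovers near the windowed mean return.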

  11. Noise-driven manifestation of learning in mature neural networks

    International Nuclear Information System (INIS)

    Monterola, Christopher; Saloma, Caesar

    2002-01-01

    We show that the generalization capability of a mature thresholding neural network to process above-threshold disturbances in a noise-free environment is extended to subthreshold disturbances by ambient noise, without retraining. The ability to benefit from noise is intrinsic and does not have to be learned separately. The nonlinear dependence of sensitivity on noise strength is significantly narrower than in individual threshold systems. Noise has a minimal effect on network performance for above-threshold signals. We resolve two seemingly contradictory responses of trained networks to noise--their ability to benefit from its presence and their robustness against noisy strong disturbances.
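The noise benefit described here is the classic stochastic-resonance effect: ambient noise occasionally lifts a subthreshold input over the firing threshold. A minimal single-unit demonstration follows; the threshold, signal level, and noise amplitude are illustrative choices, not values from the paper.

```python
import random

def detection_rate(signal, threshold=1.0, noise_sigma=0.0,
                   trials=2000, seed=42):
    """Fraction of trials in which a fixed input crosses the threshold.
    With zero noise a subthreshold signal is never detected; moderate
    ambient noise lifts it over threshold on some trials."""
    rng = random.Random(seed)
    hits = sum((signal + rng.gauss(0.0, noise_sigma)) >= threshold
               for _ in range(trials))
    return hits / trials
```

For a subthreshold signal of 0.8 against a threshold of 1.0, the detection rate is exactly zero without noise and clearly positive with moderate noise, while very strong noise would degrade discrimination again.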

  12. Supervised learning of probability distributions by neural networks

    Science.gov (United States)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
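The key modification, following the gradient of the log-likelihood instead of the squared error when outputs are read as probabilities, can be shown for a single sigmoid unit. This is a generic sketch, not the authors' code: under the log-likelihood (cross-entropy) objective the y(1 - y) factor of the squared-error delta cancels, so the error signal is simply y - t and learning does not stall when the unit saturates.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gradients(w, x, t):
    """Per-weight gradients for a single sigmoid unit with target
    probability t, under squared error vs. log-likelihood."""
    y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    delta_sq = (y - t) * y * (1.0 - y)  # squared-error delta
    delta_ll = (y - t)                  # log-likelihood delta: y(1-y) cancels
    return ([delta_sq * xi for xi in x],
            [delta_ll * xi for xi in x])
```

For a confidently wrong unit (large pre-activation, wrong target), the log-likelihood gradient stays large while the squared-error gradient is crushed by the vanishing y(1 - y) factor.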

  13. Maladaptive spinal plasticity opposes spinal learning and recovery in spinal cord injury

    Directory of Open Access Journals (Sweden)

    Adam R Ferguson

    2012-10-01

    Full Text Available Synaptic plasticity within the spinal cord has great potential to facilitate recovery of function after spinal cord injury (SCI. Spinal plasticity can be induced in an activity-dependent manner even without input from the brain after complete SCI. The mechanistic basis for these effects is provided by research demonstrating that spinal synapses have many of the same plasticity mechanisms that are known to underlie learning and memory in the brain. In addition, the lumbar spinal cord can sustain several forms of learning and memory, including limb-position training. However, not all spinal plasticity promotes recovery of function. Central sensitization of nociceptive (pain pathways in the spinal cord may emerge with certain patterns of activity, demonstrating that plasticity within the spinal cord may contribute to maladaptive pain states. In this review we discuss interactions between adaptive and maladaptive forms of activity-dependent plasticity in the spinal cord. The literature demonstrates that activity-dependent plasticity within the spinal cord must be carefully tuned to promote adaptive spinal training. Stimulation that is delivered in a limb position-dependent manner or on a fixed interval can induce adaptive plasticity that promotes future spinal cord learning and reduces nociceptive hyper-reactivity. On the other hand, stimulation that is delivered in an unsynchronized fashion, such as randomized electrical stimulation or peripheral skin injuries, can generate maladaptive spinal plasticity that undermines future spinal cord learning, reduces recovery of locomotor function, and promotes nociceptive hyper-reactivity after spinal cord injury. We review these basic phenomena, discuss the cellular and molecular mechanisms, and discuss implications of these findings for improved rehabilitative therapies after spinal cord injury.

  14. Learning of spiking networks with different forms of long-term synaptic plasticity

    International Nuclear Information System (INIS)

    Vlasov, D.S.; Sboev, A.G.; Serenko, A.V.; Rybka, R.B.; Moloshnikov, I.A.

    2016-01-01

    The possibility of modeling the learning process based on different forms of spike-timing-dependent plasticity (STDP) has been studied. It has been shown that learnability depends on the choice of the spike-pairing scheme in the STDP rule and on the type of input signal used during learning.
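The spike-pairing schemes in question build on the standard pair-based STDP window, which can be sketched as follows. Parameter values are common illustrative defaults, not those used in the study.

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: weight change as a function of the spike-time
    difference delta_t = t_post - t_pre (in ms). A presynaptic spike
    shortly before a postsynaptic one potentiates; the reverse order
    depresses, with exponentially decaying magnitude."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    if delta_t < 0:
        return -a_minus * math.exp(delta_t / tau)
    return 0.0
```

Different pairing schemes (all-to-all vs. nearest-neighbor) then differ in which pre/post spike pairs this window is summed over, which is exactly the choice the abstract reports as decisive for learnability.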

  15. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    Science.gov (United States)

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  16. A novel Bayesian learning method for information aggregation in modular neural networks

    DEFF Research Database (Denmark)

    Wang, Pan; Xu, Lida; Zhou, Shang-Ming

    2010-01-01

    Modular neural network is a popular neural network model which has many successful applications. In this paper, a sequential Bayesian learning (SBL) is proposed for modular neural networks aiming at efficiently aggregating the outputs of members of the ensemble. The experimental results on eight...... benchmark problems have demonstrated that the proposed method can perform information aggregation efficiently in data modeling....

  17. Learning Orthographic Structure With Sequential Generative Neural Networks.

    Science.gov (United States)

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.
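As a point of comparison, the non-connectionist n-gram baseline mentioned above can be sketched as a character-bigram model that learns letter-transition statistics from a word list and autonomously generates pseudowords. This is a toy illustration of the baseline, not the paper's implementation.

```python
import random
from collections import defaultdict

def train_bigrams(words):
    """Character-bigram model: counts of the next letter given the
    previous one. '^' and '$' mark word start and end."""
    counts = defaultdict(lambda: defaultdict(int))
    for w in words:
        for a, b in zip('^' + w, w + '$'):
            counts[a][b] += 1
    return counts

def generate(counts, rng, max_len=10):
    """Sample a pseudoword letter by letter from the bigram statistics."""
    out, prev = [], '^'
    while len(out) < max_len:
        letters, weights = zip(*counts[prev].items())
        nxt = rng.choices(letters, weights)[0]
        if nxt == '$':
            break
        out.append(nxt)
        prev = nxt
    return ''.join(out)
```

Trained on a real lexicon, such a model produces graphotactically plausible pseudowords; the sequential RBM in the abstract plays the same generative role but with distributed internal representations instead of explicit transition counts.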

  18. Construction of Neural Networks for Realization of Localized Deep Learning

    Directory of Open Access Journals (Sweden)

    Charles K. Chui

    2018-05-01

    Full Text Available The subject of deep learning has recently attracted users of machine learning from various disciplines, including: medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines. However, theoretical development of deep learning is still in its infancy. The objective of this paper is to introduce a deep neural network (also called deep-net) approach to localized manifold learning, with each hidden layer endowed with a specific learning task. For the purpose of illustration, we only focus on deep-nets with three hidden layers, with the first layer for dimensionality reduction, the second layer for bias reduction, and the third layer for variance reduction. A feedback component is also designed to deal with outliers. The main theoretical result in this paper is the order O(m^(-2s/(2s+d))) of approximation of the regression function with regularity s, in terms of the number m of sample points, where the (unknown) manifold dimension d replaces the dimension D of the sampling (Euclidean) space for shallow nets.

  19. Transfer Learning with Convolutional Neural Networks for SAR Ship Recognition

    Science.gov (United States)

    Zhang, Di; Liu, Jia; Heng, Wang; Ren, Kaijun; Song, Junqiang

    2018-03-01

    Ship recognition is the backbone of marine surveillance systems. Recent deep learning methods, e.g. Convolutional Neural Networks (CNNs), have shown high performance on optical images. Learning CNNs, however, requires a large number of annotated samples to estimate their numerous model parameters, which prevents their application to Synthetic Aperture Radar (SAR) images, where annotated training samples are limited. Transfer learning has been a promising technique for applications with limited data. To this end, a novel SAR ship recognition method based on CNNs with transfer learning has been developed. In this work, we first start with a CNN model that has been trained in advance on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database. Next, based on the knowledge gained from this image recognition task, we fine-tune the CNN on a new task to recognize three types of ships in the OpenSARShip database. The experimental results show that the proposed approach markedly increases the recognition rate compared with merely applying CNNs without transfer learning. In addition, compared to existing methods, the proposed method proves very competitive, learning discriminative features directly from training data instead of requiring manual pre-specification or pre-selection of features.
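The fine-tuning recipe, reusing a frozen pretrained feature extractor and retraining only a new classification head on the small target dataset, can be sketched framework-free; a real SAR pipeline would of course use a CNN library. In the sketch below a fixed nonlinear projection stands in for the frozen convolutional layers, and all names, data, and parameters are hypothetical.

```python
import math

def pretrained_features(x, proj):
    """Stand-in for frozen pretrained layers: a fixed nonlinear projection."""
    return [math.tanh(sum(p * xi for p, xi in zip(row, x))) for row in proj]

def fine_tune_head(data, proj, lr=0.5, epochs=200):
    """Train only a new logistic 'head' on top of frozen features,
    the essence of fine-tuning when labelled samples are scarce."""
    w = [0.0] * len(proj)
    b = 0.0
    for _ in range(epochs):
        for x, t in data:
            f = pretrained_features(x, proj)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            y = 1.0 / (1.0 + math.exp(-z))
            g = y - t  # cross-entropy gradient at the output
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b
```

Because only the head's few parameters are trained, a handful of labelled target-domain samples suffices, mirroring why transfer learning helps with the limited OpenSARShip annotations.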

  20. Switched-capacitor realization of presynaptic short-term-plasticity and stop-learning synapses in 28 nm CMOS.

    Science.gov (United States)

    Noack, Marko; Partzsch, Johannes; Mayr, Christian G; Hänzsche, Stefan; Scholze, Stefan; Höppner, Sebastian; Ellguth, Georg; Schüffny, Rene

    2015-01-01

    Synaptic dynamics, such as long- and short-term plasticity, play an important role in the complexity and biological realism achievable when running neural networks on a neuromorphic IC. For example, they endow the IC with an ability to adapt and learn from its environment. In order to achieve the millisecond to second time constants required for these synaptic dynamics, analog subthreshold circuits are usually employed. However, due to process variation and leakage problems, it is almost impossible to port these types of circuits to modern sub-100nm technologies. In contrast, we present a neuromorphic system in a 28 nm CMOS process that employs switched capacitor (SC) circuits to implement 128 short term plasticity presynapses as well as 8192 stop-learning synapses. The neuromorphic system consumes an area of 0.36 mm(2) and runs at a power consumption of 1.9 mW. The circuit makes use of a technique for minimizing leakage effects allowing for real-time operation with time constants up to several seconds. Since we rely on SC techniques for all calculations, the system is composed of only generic mixed-signal building blocks. These generic building blocks make the system easy to port between technologies and the large digital circuit part inherent in an SC system benefits fully from technology scaling.

  1. Switched-Capacitor Realization of Presynaptic Short-Term Plasticity and Stop-Learning Synapses in 28 nm CMOS

    Directory of Open Access Journals (Sweden)

    Marko eNoack

    2015-02-01

    Full Text Available Synaptic dynamics, such as long- and short-term plasticity, play an important role in the complexity and biological realism achievable when running neural networks on a neuromorphic IC. For example, they endow the IC with an ability to adapt and learn from its environment. In order to achieve the millisecond to second time constants required for these synaptic dynamics, analog subthreshold circuits are usually employed. However, due to process variation and leakage problems, it is almost impossible to port these types of circuits to modern sub-100nm technologies. In contrast, we present a neuromorphic system in a 28 nm CMOS process that employs switched capacitor (SC) circuits to implement 128 short-term plasticity presynapses as well as 8192 stop-learning synapses. The neuromorphic system consumes an area of 0.36 mm² and runs at a power consumption of 1.9 mW. The circuit makes use of a technique for minimizing leakage effects allowing for real-time operation with time constants up to several seconds. Since we rely on SC techniques for all calculations, the system is composed of only generic mixed-signal building blocks. These generic building blocks make the system easy to port between technologies and the large digital circuit part inherent in an SC system benefits fully from technology scaling.

  2. A neural fuzzy controller learning by fuzzy error propagation

    Science.gov (United States)

    Nauck, Detlef; Kruse, Rudolf

    1992-01-01

    In this paper, we describe a procedure for integrating techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment using neural network learning principles; this is an extension of our previous work. We solve the problem by defining a fuzzy error that is propagated back through the architecture of the fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its share of the error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task knowing only the global state and the fuzzy error.

  3. A role for calcium-permeable AMPA receptors in synaptic plasticity and learning.

    Directory of Open Access Journals (Sweden)

    Brian J Wiltgen

    2010-09-01

    Full Text Available A central concept in the field of learning and memory is that NMDARs are essential for synaptic plasticity and memory formation. Surprisingly then, multiple studies have found that behavioral experience can reduce or eliminate the contribution of these receptors to learning. The cellular mechanisms that mediate learning in the absence of NMDAR activation are currently unknown. To address this issue, we examined the contribution of Ca(2+)-permeable AMPARs to learning and plasticity in the hippocampus. Mutant mice were engineered with a conditional genetic deletion of GluR2 in the CA1 region of the hippocampus (GluR2-cKO mice). Electrophysiology experiments in these animals revealed a novel form of long-term potentiation (LTP) that was independent of NMDARs and mediated by GluR2-lacking Ca(2+)-permeable AMPARs. Behavioral analyses found that GluR2-cKO mice were impaired on multiple hippocampus-dependent learning tasks that required NMDAR activation. This suggests that AMPAR-mediated LTP interferes with NMDAR-dependent plasticity. In contrast, NMDAR-independent learning was normal in knockout mice and required the activation of Ca(2+)-permeable AMPARs. These results suggest that GluR2-lacking AMPARs play a functional and previously unidentified role in learning; they appear to mediate changes in synaptic strength that occur after plasticity has been established by NMDARs.

  4. Neural dynamics of learning sound-action associations.

    Directory of Open Access Journals (Sweden)

    Adam McNamara

    Full Text Available A motor component is a prerequisite to any communicative act, as one must inherently move to communicate. To learn to make a communicative act, the brain must be able to dynamically associate arbitrary percepts with the neural substrate underlying the prerequisite motor activity. We aimed to investigate whether brain regions involved in complex gestures (ventral premotor cortex, Brodmann Area 44) were involved in mediating associations between novel abstract auditory stimuli and novel gestural movements. In a functional magnetic resonance imaging (fMRI) study we asked participants to learn associations between previously unrelated novel sounds and meaningless gestures inside the scanner. We used functional connectivity analysis to eliminate the often-present confound of 'strategic covert naming' when dealing with BA44 and to rule out effects of non-specific reductions in signal. Brodmann Area 44, a region incorporating Broca's region, showed strong, bilateral, negative correlation of the BOLD (blood oxygen level dependent) response with learning of sound-action associations during data acquisition. The left inferior parietal lobule (l-IPL) and bilateral loci in and around visual area V5, right orbital frontal gyrus, right hippocampus, left parahippocampus, right head of caudate, right insula and left lingual gyrus also showed decreases in BOLD response with learning. Concurrent with these decreases in BOLD response, increasing connectivity between areas of the imaged network, as well as with the right middle frontal gyrus, with rising learning performance was revealed by a psychophysiological interaction (PPI) analysis. The increasing connectivity therefore occurs within an increasingly energy-efficient network as learning proceeds. The strongest learning-related connectivity between regions was found when analysing BA44 and l-IPL seeds.
The results clearly show that BA44 and l-IPL are dynamically involved in linking gesture and sound and therefore provide evidence that one of

  5. Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity.

    Directory of Open Access Journals (Sweden)

    Christian Albers

    Full Text Available Precise spatio-temporal patterns of neuronal action potentials underlie, e.g., sensory representations and control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike timing dependent plasticity for inhibitory synapses as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of the MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns.

  6. Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity.

    Science.gov (United States)

    Albers, Christian; Westkott, Maren; Pawelzik, Klaus

    2016-01-01

    Precise spatio-temporal patterns of neuronal action potentials underlie, e.g., sensory representations and control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike timing dependent plasticity for inhibitory synapses as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of the MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns.
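The core MPDP idea, synaptic change driven by the postsynaptic membrane potential's deviation from a balanced range rather than by spike times, can be caricatured as a simple homeostatic update. The piecewise-linear form and the threshold values below are my own illustrative simplification, not the rule's published formulation.

```python
def mpdp_dw(u, psp, theta_low=-0.5, theta_high=0.5, lr=0.01):
    """Sketch of Membrane Potential Dependent Plasticity (MPDP):
    if the membrane potential u strays above (below) a balanced range,
    synapses with nonzero presynaptic contribution psp are depressed
    (potentiated), pushing u back toward balance."""
    if u > theta_high:
        return -lr * (u - theta_high) * psp  # depress: potential too high
    if u < theta_low:
        return lr * (theta_low - u) * psp    # potentiate: potential too low
    return 0.0  # within the balanced range: no change
```

Because the update depends on the subthreshold voltage, strong (teacher) depolarizations and weak (spurious) ones produce different amounts of change, which is the property the abstract exploits to compare actual and target activity.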

  7. Learning sequential control in a Neural Blackboard Architecture for in situ concept reasoning

    NARCIS (Netherlands)

    van der Velde, Frank; Besold, Tarek R.; Lamb, Luis; Serafini, Luciano; Tabor, Whitney

    2016-01-01

    Simulations are presented and discussed of learning sequential control in a Neural Blackboard Architecture (NBA) for in situ concept-based reasoning. Sequential control is learned in a reservoir network, consisting of columns with neural circuits. This allows the reservoir to control the dynamics of

  8. Spaced Learning Enhances Subsequent Recognition Memory by Reducing Neural Repetition Suppression

    Science.gov (United States)

    Xue, Gui; Mei, Leilei; Chen, Chuansheng; Lu, Zhong-Lin; Poldrack, Russell; Dong, Qi

    2011-01-01

    Spaced learning usually leads to better recognition memory as compared with massed learning, yet the underlying neural mechanisms remain elusive. One open question is whether the spacing effect is achieved by reducing neural repetition suppression. In this fMRI study, participants were scanned while intentionally memorizing 120 novel faces, half…

  9. Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies

    Science.gov (United States)

    Peng, Yefei

    2010-01-01

    An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the…

  10. The teaching-learning process of plastic expression in students with Down syndrome

    Directory of Open Access Journals (Sweden)

    Julio Antonio Conill Armenteros

    2018-03-01

    Full Text Available Drawing is a means through which the child expresses the level of physical, mental, emotional and creative development achieved, and it plays an important role in plastic expression. The study identified needs in the teaching-learning process of plastic expression in students with Down syndrome, for which a didactic strategy was designed that contains teaching actions and establishes interdisciplinary links between the different subjects of the curriculum. The investigative process was conducted on a dialectical-materialist basis, and methods at the theoretical, empirical and statistical-mathematical levels were used, such as documentary analysis, interviews and drawing techniques. Five students with Down syndrome from the special school "28 de Enero" of Pinar del Río participated in the study, together with the instructor who directs the Plastic Arts creation workshops. The investigation allowed us to determine the regularities that distinguish the teaching-learning process of plastic expression in these students, as well as the needs of the Plastic Arts instructor in conducting this process. The didactic strategy allowed the teaching-learning process of plastic expression to encourage creativity and the development of motor skills, through actions that contribute to the diagnosis and treatment of this process in order to achieve the maximum possible integral development and preparation for the independent adult life of the student with Down syndrome.

  11. Neural plasticity in hypocretin neurons: the basis of hypocretinergic regulation of physiological and behavioral functions in animals

    Directory of Open Access Journals (Sweden)

    Xiao-Bing eGao

    2015-10-01

    Full Text Available The neuronal system that resides in the perifornical and lateral hypothalamus (Pf/LH) and synthesizes the neuropeptide hypocretin/orexin participates in critical brain functions across species from fish to human. The hypocretin system regulates neural activity responsible for daily functions (such as sleep/wake homeostasis, energy balance, appetite, etc.) and long-term behavioral changes (such as reward seeking and addiction, stress response, etc.) in animals. The most recent evidence suggests that the hypocretin system undergoes substantial plastic changes in response to both daily fluctuations (such as food intake and sleep-wake regulation) and long-term changes (such as cocaine seeking) in neuronal activity in the brain. The understanding of these changes in the hypocretin system is essential in addressing the role of the hypocretin system in normal physiological functions and pathological conditions in animals and humans. In this review, the evidence demonstrating that neural plasticity occurs in hypocretin-containing neurons in the Pf/LH will be presented and possible physiological, behavioral, and mental health implications of these findings will be discussed.

  12. Neural plasticity in hypocretin neurons: the basis of hypocretinergic regulation of physiological and behavioral functions in animals

    Science.gov (United States)

    Gao, Xiao-Bing; Hermes, Gretchen

    2015-01-01

    The neuronal system that resides in the perifornical and lateral hypothalamus (Pf/LH) and synthesizes the neuropeptide hypocretin/orexin participates in critical brain functions across species from fish to human. The hypocretin system regulates neural activity responsible for daily functions (such as sleep/wake homeostasis, energy balance, appetite, etc.) and long-term behavioral changes (such as reward seeking and addiction, stress response, etc.) in animals. The most recent evidence suggests that the hypocretin system undergoes substantial plastic changes in response to both daily fluctuations (such as food intake and sleep-wake regulation) and long-term changes (such as cocaine seeking) in neuronal activity in the brain. The understanding of these changes in the hypocretin system is essential in addressing the role of the hypocretin system in normal physiological functions and pathological conditions in animals and humans. In this review, the evidence demonstrating that neural plasticity occurs in hypocretin-containing neurons in the Pf/LH will be presented and possible physiological, behavioral, and mental health implications of these findings will be discussed. PMID:26539086

  13. Neural correlates of contextual cueing are modulated by explicit learning.

    Science.gov (United States)

    Westerberg, Carmen E; Miller, Brennan B; Reber, Paul J; Cohen, Neal J; Paller, Ken A

    2011-10-01

    Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer's knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be impaired following damage to the medial temporal lobe. Here we investigated neural correlates of contextual cueing and explicit scene memory in two participant groups. Only one group was explicitly instructed about scene repetition. Participants viewed a sequence of complex scenes that depicted a landscape with five abstract geometric objects. Superimposed on each object was a letter T or L rotated left or right by 90°. Participants responded according to the target letter (T) orientation. Responses were highly accurate for all scenes. Response speeds were faster for repeated versus novel scenes. The magnitude of this contextual cueing did not differ between the two groups. Also, in both groups repeated scenes yielded reduced hemodynamic activation compared with novel scenes in several regions involved in visual perception and attention, and reductions in some of these areas were correlated with response-time facilitation. In the group given instructions about scene repetition, recognition memory for scenes was superior and was accompanied by medial temporal and more anterior activation. Thus, strategic factors can promote explicit memorization of visual scene information, which appears to engage additional neural processing beyond what is required for implicit learning of object configurations and target locations in a scene. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Synaptic plasticity, memory and the hippocampus: a neural network approach to causality.

    Science.gov (United States)

    Neves, Guilherme; Cooke, Sam F; Bliss, Tim V P

    2008-01-01

    Two facts about the hippocampus have been common currency among neuroscientists for several decades. First, lesions of the hippocampus in humans prevent the acquisition of new episodic memories; second, activity-dependent synaptic plasticity is a prominent feature of hippocampal synapses. Given this background, the hypothesis that hippocampus-dependent memory is mediated, at least in part, by hippocampal synaptic plasticity has seemed as cogent in theory as it has been difficult to prove in practice. Here we argue that the recent development of transgenic molecular devices will encourage a shift from mechanistic investigations of synaptic plasticity in single neurons towards an analysis of how networks of neurons encode and represent memory, and we suggest ways in which this might be achieved. In the process, the hypothesis that synaptic plasticity is necessary and sufficient for information storage in the brain may finally be validated.

  15. The frog vestibular system as a model for lesion-induced plasticity: basic neural principles and implications for posture control

    Directory of Open Access Journals (Sweden)

    Francois M Lambert

    2012-04-01

    Full Text Available Studies of behavioral consequences after unilateral labyrinthectomy have a long tradition in the quest of determining rules and limitations of the CNS to exert plastic changes that assist the recuperation from the loss of sensory inputs. Frogs were among the first animal models to illustrate general principles of regenerative capacity and reorganizational neural flexibility after a vestibular lesion. The continued successful use of these animals is in part based on the easy access and identifiability of nerve branches to inner ear organs for surgical intervention, the possibility to employ whole brain preparations for in vitro studies and the limited degree of freedom of postural reflexes for quantification of behavioral impairments and subsequent improvements. Major discoveries that increased the knowledge of post-lesional reactive mechanisms in the central nervous system include alterations in vestibular commissural signal processing and activation of cooperative changes in excitatory and inhibitory inputs to disfacilitated neurons. Moreover, the observed increase of synaptic efficacy in propriospinal circuits illustrates the importance of limb proprioceptive inputs for postural recovery. Accumulated evidence suggests that the lesion-induced neural plasticity is not a goal-directed process that aims towards a meaningful restoration of vestibular reflexes but rather promotes the survival of those neurons that have lost their excitatory inputs. Accordingly, the reaction mechanism causes an improvement of some components but also a deterioration of other aspects as seen in spatio-temporally inappropriate vestibulo-motor responses, similar to the consequences of plasticity processes in various sensory systems and species. The generality of the findings indicates that frogs continue to form a highly amenable vertebrate model system for exploring molecular and physiological events during cellular and network reorganization after a loss of

  16. Neuroticism and conscientiousness respectively constrain and facilitate short-term plasticity within the working memory neural network.

    Science.gov (United States)

    Dima, Danai; Friston, Karl J; Stephan, Klaas E; Frangou, Sophia

    2015-10-01

    Individual differences in cognitive efficiency, particularly in relation to working memory (WM), have been associated both with personality dimensions that reflect enduring regularities in brain configuration, and with short-term neural plasticity, which reflects task-related changes in brain connectivity. To elucidate the relationship between these two divergent mechanisms, we tested the hypothesis that personality dimensions, which reflect enduring aspects of brain configuration, inform about the neurobiological framework within which short-term, task-related plasticity, as measured by effective connectivity, can be facilitated or constrained. As WM consistently engages the dorsolateral prefrontal (DLPFC), parietal (PAR), and anterior cingulate cortex (ACC), we specified a WM network model with bidirectional, ipsilateral, and contralateral connections between these regions from a functional magnetic resonance imaging dataset obtained from 40 healthy adults while performing the 3-back WM task. Task-related effective connectivity changes within this network were estimated using Dynamic Causal Modelling. Personality was evaluated along the major dimensions of Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness. Only two dimensions were relevant to task-dependent effective connectivity. Neuroticism and Conscientiousness respectively constrained and facilitated neuroplastic responses within the WM network. These results suggest that individual differences in cognitive efficiency arise from the interplay between enduring and short-term plasticity in brain configuration. © 2015 Wiley Periodicals, Inc.

  17. Neural modularity helps organisms evolve to learn new skills without forgetting old skills.

    Science.gov (United States)

    Ellefsen, Kai Olav; Mouret, Jean-Baptiste; Clune, Jeff

    2015-04-01

    A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to

  18. Neural modularity helps organisms evolve to learn new skills without forgetting old skills.

    Directory of Open Access Journals (Sweden)

    Kai Olav Ellefsen

    2015-04-01

    Full Text Available A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals

  19. Neural Modularity Helps Organisms Evolve to Learn New Skills without Forgetting Old Skills

    Science.gov (United States)

    Ellefsen, Kai Olav; Mouret, Jean-Baptiste; Clune, Jeff

    2015-01-01

    A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to

  20. Altered synaptic plasticity in Tourette's syndrome and its relationship to motor skill learning.

    Directory of Open Access Journals (Sweden)

    Valerie Cathérine Brandt

    Full Text Available Gilles de la Tourette syndrome is a neuropsychiatric disorder characterized by motor and phonic tics that can be considered motor responses to preceding inner urges. It has been shown that Tourette patients have inferior performance in some motor learning tasks and reduced synaptic plasticity induced by transcranial magnetic stimulation. However, it has not been investigated whether altered synaptic plasticity is directly linked to impaired motor skill acquisition in Tourette patients. In this study, cortical plasticity was assessed by measuring motor-evoked potentials before and after paired associative stimulation in 14 Tourette patients (13 male; age 18-39) and 15 healthy controls (12 male; age 18-33). Tic and urge severity were assessed using the Yale Global Tic Severity Scale and the Premonitory Urges for Tics Scale. Motor learning was assessed 45 minutes after inducing synaptic plasticity and 9 months later, using the rotary pursuit task. On average, long-term potentiation-like effects in response to the paired associative stimulation were present in healthy controls but not in patients. In Tourette patients, long-term potentiation-like effects were associated with more and long-term depression-like effects with less severe urges and tics. While motor learning did not differ between patients and healthy controls 45 minutes after inducing synaptic plasticity, the learning curve of the healthy controls started at a significantly higher level than the Tourette patients' 9 months later. Induced synaptic plasticity correlated positively with motor skills in healthy controls 9 months later. The present study confirms previously found long-term improvement in motor performance after paired associative stimulation in healthy controls but not in Tourette patients. Tourette patients did not show long-term potentiation in response to PAS and also showed reduced levels of motor skill consolidation after 9 months compared to healthy controls. Moreover

  1. A Computational Model of the Temporal Dynamics of Plasticity in Procedural Learning: Sensitivity to Feedback Timing

    Directory of Open Access Journals (Sweden)

    Vivian V. Valentin

    2014-07-01

    Full Text Available The evidence is now good that different memory systems mediate the learning of different types of category structures. In particular, declarative memory dominates rule-based (RB) category learning and procedural memory dominates information-integration (II) category learning. For example, several studies have reported that feedback timing is critical for II category learning, but not for RB category learning – results that have broad support within the memory systems literature. Specifically, II category learning has been shown to be best with feedback delays of 500 ms compared to delays of 0 and 1000 ms, and highly impaired with delays of 2.5 seconds or longer. In contrast, RB learning is unaffected by any feedback delay up to 10 seconds. We propose a neurobiologically detailed theory of procedural learning that is sensitive to different feedback delays. The theory assumes that procedural learning is mediated by plasticity at cortical-striatal synapses that are modified by dopamine-mediated reinforcement learning. The model captures the time-course of the biochemical events in the striatum that cause synaptic plasticity, and thereby accounts for the empirical effects of various feedback delays on II category learning.
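
    The delay sensitivity described here is easy to illustrate with a toy eligibility-trace model: a synaptic trace rises after the response and then decays, so a dopamine signal arriving around 500 ms catches the trace near its peak, while immediate or very late feedback catches little of it. The time constants below are invented for illustration and are not the paper's biochemical parameters.

```python
import numpy as np

# Hypothetical eligibility trace at a cortical-striatal synapse: it rises
# after the response and then decays; the effective learning signal is the
# trace value at the moment the dopamine (feedback) signal arrives.
def trace(delay_ms, tau_rise=200.0, tau_decay=800.0):
    return (1.0 - np.exp(-delay_ms / tau_rise)) * np.exp(-delay_ms / tau_decay)

delays = [0, 500, 1000, 2500]
signal = {d: trace(d) for d in delays}  # learning signal per feedback delay
```

    With these (invented) constants the signal peaks near a 500 ms delay, reproducing the qualitative ordering reported for II category learning.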

  2. Learning, memory, and the role of neural network architecture.

    Directory of Open Access Journals (Sweden)

    Ann M Hermundstad

    2011-06-01

    Full Text Available The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.

  3. Oscillations, neural computations and learning during wake and sleep.

    Science.gov (United States)

    Penagos, Hector; Varela, Carmen; Wilson, Matthew A

    2017-06-01

    Learning and memory theories consider sleep and the reactivation of waking hippocampal neural patterns to be crucial for the long-term consolidation of memories. Here we propose that precisely coordinated representations across brain regions allow the inference and evaluation of causal relationships to train an internal generative model of the world. This training starts during wakefulness and strongly benefits from sleep because its recurring nested oscillations may reflect compositional operations that facilitate a hierarchical processing of information, potentially including behavioral policy evaluations. This suggests that an important function of sleep activity is to provide conditions conducive to general inference, prediction and insight, which contribute to a more robust internal model that underlies generalization and adaptive behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Neural architecture design based on extreme learning machine.

    Science.gov (United States)

    Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis

    2013-12-01

    Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons, and the corresponding interconnection weights. This problem has been widely studied, but existing solutions usually involve excessive computational cost and do not provide a unique solution. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides a high generalization capability and a unique solution for the architecture design. Moreover, the selected final network only retains those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.
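
    The core of the ELM algorithm is simple enough to sketch: hidden-layer weights are drawn at random and left fixed, and only the output weights are solved for by least squares, which is what yields a unique solution without iterative training. The network size and toy data below are illustrative, and this sketch omits the paper's pruning of irrelevant input connections.

```python
import numpy as np

def elm_train(X, y, n_hidden=20, seed=0):
    """ELM training: random fixed hidden weights, output weights by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input->hidden weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights (pseudo-inverse)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy XOR-like classification problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
W, b, beta = elm_train(X, y)
pred = elm_predict(X, W, b, beta)
```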

  5. Supervised learning in spiking neural networks with FORCE training.

    Science.gov (United States)

    Nicola, Wilten; Clopath, Claudia

    2017-12-20

    Populations of neurons display an extraordinary diversity in the behaviors they affect and display. Machine learning techniques have recently emerged that allow us to create networks of model neurons that display behaviors of similar complexity. Here we demonstrate the direct applicability of one such technique, the FORCE method, to spiking neural networks. We train these networks to mimic dynamical systems, classify inputs, and store discrete sequences that correspond to the notes of a song. Finally, we use FORCE training to create two biologically motivated model circuits. One is inspired by the zebra finch and successfully reproduces songbird singing. The second network is motivated by the hippocampus and is trained to store and replay a movie scene. FORCE trained networks reproduce behaviors comparable in complexity to their inspired circuits and yield information not easily obtainable with other techniques, such as behavioral responses to pharmacological manipulations and spike timing statistics.
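
    The FORCE method itself is easiest to see in a rate-network sketch (the paper's contribution is extending it to spiking neurons): a linear readout is trained online by recursive least squares while its output is fed back into a chaotic recurrent network. The network size, gain, time constant, and sine target below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

N, dt, g, tau = 200, 1e-3, 1.5, 0.01
J = g * rng.normal(size=(N, N)) / np.sqrt(N)  # chaotic recurrent weights
eta = rng.uniform(-1, 1, N)                   # feedback weights for the readout
w = np.zeros(N)                               # readout weights, trained by RLS
P = np.eye(N)                                 # running inverse-correlation estimate

x = 0.5 * rng.normal(size=N)
errs = []
for t in range(4000):
    target = np.sin(2 * np.pi * 5 * t * dt)   # 5 Hz sine wave to mimic
    r = np.tanh(x)
    z = w @ r                                 # readout before the update
    x += dt / tau * (-x + J @ r + eta * z)    # rate dynamics with output feedback
    # recursive-least-squares (FORCE) update of the readout
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= (z - target) * k
    errs.append(abs(z - target))
```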

  6. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    Science.gov (United States)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is typically also necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.
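
    A stripped-down version of the idea: treat the network weights as random variables, score them with a log-posterior (likelihood plus prior), and draw samples with a Metropolis random walk. To keep the sketch easy to check, the "network" here is a single linear neuron, and a plain Gaussian prior stands in for the paper's modified Jeffreys prior; data, step size, and chain length are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data from a noisy linear rule; the "network" is a single linear neuron
X = rng.normal(size=(40, 2))
w_true = np.array([1.5, -2.0])
sigma = 0.1
y = X @ w_true + sigma * rng.normal(size=40)

def log_post(w):
    """Gaussian likelihood plus a Gaussian prior on the weights."""
    resid = y - X @ w
    return -0.5 * np.sum(resid ** 2) / sigma ** 2 - 0.5 * np.sum(w ** 2)

# Metropolis random walk over the weight posterior
w, lp = np.zeros(2), log_post(np.zeros(2))
samples = []
for step in range(6000):
    prop = w + 0.02 * rng.normal(size=2)      # small random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        w, lp = prop, lp_prop
    if step >= 2000:                          # discard burn-in
        samples.append(w)
samples = np.asarray(samples)
w_mean = samples.mean(axis=0)                 # posterior mean, near w_true
```

    The spread of `samples` is exactly the residual weight uncertainty the abstract refers to; pushing the samples through the network gives a predictive distribution rather than a point estimate.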

  7. Brain Plasticity and the Art of Teaching to Learn

    Science.gov (United States)

    Martinez, Margaret

    2005-01-01

    "Everyone thinks of changing the world, but no one thinks of changing himself," wrote Leo Tolstoy. Have you ever thought about how learning changes your brain? If yes, this paper may help you explore the research that will change our learning landscape in the next few years! Recent developments in the neurosciences and education research…

  8. Searching for learning-dependent changes in the antennal lobe: simultaneous recording of neural activity and aversive olfactory learning in honeybees

    Directory of Open Access Journals (Sweden)

    Edith Roussel

    2010-09-01

    Full Text Available Plasticity in the honeybee brain has been studied using the appetitive olfactory conditioning of the proboscis extension reflex, in which a bee learns the association between an odor and a sucrose reward. In this framework, coupling behavioral measurements of proboscis extension and invasive recordings of neural activity has been difficult because proboscis movements usually introduce brain movements that affect physiological preparations. Here we took advantage of a new conditioning protocol, the aversive olfactory conditioning of the sting extension reflex, which does not generate this problem. We achieved the first simultaneous recordings of conditioned sting extension responses and calcium imaging of antennal lobe activity, thus revealing on-line processing of olfactory information during conditioning trials. Based on behavioral output we distinguished learners and non-learners and analyzed possible learning-dependent changes in antennal lobe activity. We did not find differences between glomerular responses to the CS+ and the CS- in learners. Unexpectedly, we found that during conditioning trials non-learners exhibited a progressive decrease in physiological responses to odors, irrespective of their valence. This effect could neither be attributed to a fitness problem nor to abnormal dye bleaching. We discuss the absence of learning-induced changes in the antennal lobe of learners and the decrease in calcium responses found in non-learners. Further studies will have to extend the search for functional plasticity related to aversive learning to other brain areas and to look at a broader range of temporal scales

  9. Spiking Neural Networks with Unsupervised Learning Based on STDP Using Resistive Synaptic Devices and Analog CMOS Neuron Circuit.

    Science.gov (United States)

    Kwon, Min-Woo; Baek, Myung-Hyun; Hwang, Sungmin; Kim, Sungjun; Park, Byung-Gook

    2018-09-01

    We designed a CMOS analog integrate-and-fire (I&F) neuron circuit that can drive resistive synaptic devices. The neuron circuit consists of a current mirror for spatial integration, a capacitor for temporal integration, an asymmetric negative and positive pulse generation part, a refractory part, and finally a back-propagation pulse generation part for learning of the synaptic devices. The resistive synaptic devices were fabricated using an HfOx switching layer deposited by atomic layer deposition (ALD). The resistive synaptic device had gradual set and reset characteristics, and its conductance was adjusted by the spike-timing-dependent-plasticity (STDP) learning rule. We carried out circuit simulations of the synaptic device and the CMOS neuron circuit. And we have developed unsupervised spiking neural networks (SNNs) for 5 × 5 pattern recognition and classification using the neuron circuit and synaptic devices. The hardware-based SNNs can autonomously and efficiently control the weight updates of the synapses between neurons, without the aid of software calculations.
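
    The STDP rule used to adjust the device conductance can be sketched as a pair-based update: causal spike pairs (pre before post) potentiate the synapse and anti-causal pairs depress it, with an exponential dependence on the spike-time difference. The amplitudes, time constant, and spike times below are illustrative values, not the paper's device parameters.

```python
import numpy as np

# Pair-based STDP: potentiate when pre fires before post, depress otherwise.
A_PLUS, A_MINUS = 0.05, 0.025  # LTP/LTD amplitudes (illustrative)
TAU = 20.0                     # STDP time constant in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # causal pair: pre before post -> LTP
        return A_PLUS * np.exp(-dt / TAU)
    else:        # anti-causal pair -> LTD
        return -A_MINUS * np.exp(dt / TAU)

# apply to one synapse, clipping the weight to [0, 1] to mimic the
# bounded conductance range of a resistive device
w = 0.5
for t_pre, t_post in [(10, 15), (40, 42), (70, 60)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
```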

  10. Random neural Q-learning for obstacle avoidance of a mobile robot in unknown environments

    Directory of Open Access Journals (Sweden)

    Jing Yang

    2016-07-01

    Full Text Available The article presents a random neural Q-learning strategy for the obstacle avoidance problem of an autonomous mobile robot in unknown environments. In the proposed strategy, two independent modules, namely, avoidance without considering the target and goal-seeking without considering obstacles, are first trained using the proposed random neural Q-learning algorithm to obtain their best control policies. Then, the two trained modules are combined based on a switching function to realize obstacle avoidance in unknown environments. For the proposed random neural Q-learning algorithm, a single-hidden layer feedforward network is used to approximate the Q-function to estimate the Q-value. The parameters of the single-hidden layer feedforward network are modified using the recently proposed neural algorithm named the online sequential version of extreme learning machine, where the parameters of the hidden nodes are assigned randomly and the sample data can arrive one by one. However, different from the original online sequential version of extreme learning machine algorithm, the initial output weights are estimated subject to a quadratic inequality constraint to improve the convergence speed. Finally, the simulation results demonstrate that the proposed random neural Q-learning strategy can successfully solve the obstacle avoidance problem. Also, the proposed random neural Q-learning algorithm achieves higher learning efficiency and better generalization ability than Q-learning based on the back-propagation method.
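
    A minimal sketch of the "random neural Q-learning" idea: the Q-function is approximated by a network whose hidden-layer parameters are assigned randomly (as in ELM) and only the output weights are learned. The paper trains separate avoidance and goal-seeking modules with an OS-ELM update under a quadratic constraint; here, as a simplification, a single module is trained on an invented toy corridor world with a plain TD gradient step on the output weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy corridor: states 0..4, goal at state 4; actions 0=left, 1=right
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

# random fixed hidden layer (ELM-style); only the output weights are trained
N_HIDDEN = 30
W_in = rng.normal(size=(N_STATES + N_ACTIONS, N_HIDDEN))

def features(s, a):
    x = np.zeros(N_STATES + N_ACTIONS)  # one-hot state and action
    x[s] = 1.0
    x[N_STATES + a] = 1.0
    return np.tanh(x @ W_in) / np.sqrt(N_HIDDEN)

beta = np.zeros(N_HIDDEN)  # trainable output weights
gamma, alpha, eps = 0.9, 0.5, 0.3

def q(s, a):
    return features(s, a) @ beta

for episode in range(300):
    s, done = 0, False
    while not done:
        if rng.uniform() < eps:  # epsilon-greedy exploration
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(np.argmax([q(s, b) for b in range(N_ACTIONS)]))
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(q(s2, b) for b in range(N_ACTIONS))
        beta += alpha * (target - q(s, a)) * features(s, a)  # TD step on output weights
        s = s2
```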

  11. Three-terminal ferroelectric synapse device with concurrent learning function for artificial neural networks

    International Nuclear Information System (INIS)

    Nishitani, Y.; Kaneko, Y.; Ueda, M.; Fujii, E.; Morie, T.

    2012-01-01

    Spike-timing-dependent synaptic plasticity (STDP) is demonstrated in a synapse device based on a ferroelectric-gate field-effect transistor (FeFET). STDP is a key feature of the learning functions observed in human brains, where the synaptic weight changes only depending on the spike timing of the pre- and post-neurons. The FeFET is composed of stacked oxide materials: ZnO/Pr(Zr,Ti)O3 (PZT)/SrRuO3. In the FeFET, the channel conductance can be altered depending on the density of electrons induced by the polarization of the PZT film, which can be controlled by applying the gate voltage in a non-volatile manner. Applying a pulse gate voltage enables multi-valued modulation of the conductance, which is expected to be caused by a change in PZT polarization. This variation depends on the height and the duration of the pulse gate voltage. Utilizing these characteristics, symmetric and asymmetric STDP learning functions are successfully implemented in the FeFET-based synapse device by applying a non-linear pulse gate voltage generated from a set of two pulses in a sampling circuit, in which the two pulses correspond to the spikes from the pre- and post-neurons. The three-terminal structure of the synapse device enables concurrent learning, in which the weight update can be performed without canceling signal transmission among neurons, whereas neural networks using the previously reported two-terminal synapse devices need to stop signal transmission for learning.

  12. Music mnemonics aid Verbal Memory and Induce Learning-Related Brain Plasticity in Multiple Sclerosis

    OpenAIRE

    Thaut, Michael H.; Peterson, David A.; McIntosh, Gerald C.; Hoemberg, Volker

    2014-01-01

    Recent research on music and brain function has suggested that the temporal pattern structure in music and rhythm can enhance cognitive functions. To further elucidate this question specifically for memory, we investigated if a musical template can enhance verbal learning in patients with multiple sclerosis (MS) and if music-assisted learning will also influence short-term, system-level brain plasticity. We measured systems-level brain activity with oscillatory network synchronization during ...

  13. Learning and Memory, Part II: Molecular Mechanisms of Synaptic Plasticity

    Science.gov (United States)

    Lombroso, Paul; Ogren, Marilee

    2009-01-01

    The molecular events that are responsible for strengthening synaptic connections and how these are linked to memory and learning are discussed. The laboratory preparations that allow the investigation of these events are also described.

  14. Emergence of slow collective oscillations in neural networks with spike-timing dependent plasticity

    DEFF Research Database (Denmark)

    Mikkelsen, Kaare; Imparato, Alberto; Torcini, Alessandro

    2013-01-01

    The collective dynamics of excitatory pulse coupled neurons with spike timing dependent plasticity (STDP) is studied. The introduction of STDP induces persistent irregular oscillations between strongly and weakly synchronized states, reminiscent of brain activity during slow-wave sleep. We explain...

  15. Neural Plasticity in Functional and Anatomical MRI Studies of Children with Tourette Syndrome

    Directory of Open Access Journals (Sweden)

    Heike Eichele

    2013-01-01

    Full Text Available Background: Tourette syndrome (TS) is a neuropsychiatric disorder with childhood onset characterized by chronic motor and vocal tics. The typical clinical course of an attenuation of symptoms during adolescence in parallel with the emerging self-regulatory control during development suggests that plastic processes may play an important role in the development of tic symptoms.

  16. Future health care applications resulting from progress in the neurosciences: The significance of neural plasticity research

    NARCIS (Netherlands)

    Gelijns, A.C.; Graaff, P.J.; Lopes da Silva, F.A.; Gispen, W.H.

    1987-01-01

    Neurological, communicative and behavioral disorders afflict a significant part of the population in industrialized countries, and these disorders can be expected to gain in importance in the coming decades. In a considerable number of these disorders impairments in plasticity, i.e. deficiencies in

  17. Unsupervised learning in neural networks with short range synapses

    Science.gov (United States)

    Brunnet, L. G.; Agnes, E. J.; Mizusaki, B. E. P.; Erichsen, R., Jr.

    2013-01-01

Different areas of the brain are involved in specific aspects of the information being processed, both in learning and in memory formation. For example, the hippocampus is important in the consolidation of information from short-term memory to long-term memory, while emotional memory seems to be handled by the amygdala. On the microscopic scale, the underlying structures in these areas differ in the kind of neurons involved, in their connectivity, or in their clustering degree but, at this level, learning and memory are attributed to neuronal synapses mediated by long-term potentiation and long-term depression. In this work we explore the properties of a short-range synaptic connection network, a nearest-neighbor lattice composed mostly of excitatory neurons and a fraction of inhibitory ones. The mechanism of synaptic modification responsible for the emergence of memory is Spike-Timing-Dependent Plasticity (STDP), a Hebbian-like rule, where a synapse between two neurons is potentiated or depressed according to whether their spikes occur in causal or non-causal order. The system is intended to store and recognize memories associated with spatial external inputs presented as simple geometrical forms. The synaptic modifications are continuously applied to excitatory connections, including a homeostasis rule and STDP. In this work we explore the different scenarios under which a network with short-range connections can accomplish the task of storing and recognizing simple connected patterns.
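
The pair-based STDP rule described in this abstract can be sketched in a few lines. The amplitudes and time constant below are illustrative assumptions, not the parameters used in this study:

```python
import numpy as np

# Minimal pair-based STDP sketch. The amplitudes and time constant are
# illustrative assumptions, not the parameters used in this study.
def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:                              # causal: pre fires before post
        return a_plus * np.exp(-dt / tau)   # -> potentiation
    else:                                   # non-causal: post fires before pre
        return -a_minus * np.exp(dt / tau)  # -> depression

# Accumulate changes at one synapse over a few pre/post pairings,
# keeping the weight bounded as a simple stand-in for homeostasis.
w = 0.5
for dt in [5.0, -3.0, 12.0]:
    w = float(np.clip(w + stdp_dw(dt), 0.0, 1.0))
```

The slight asymmetry (a_minus > a_plus) is a common choice that keeps uncorrelated spiking from inflating weights.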

  18. Biomimetic Hybrid Feedback Feedforward Neural-Network Learning Control.

    Science.gov (United States)

    Pan, Yongping; Yu, Haoyong

    2017-06-01

    This brief presents a biomimetic hybrid feedback feedforward neural-network learning control (NNLC) strategy inspired by the human motor learning control mechanism for a class of uncertain nonlinear systems. The control structure includes a proportional-derivative controller acting as a feedback servo machine and a radial-basis-function (RBF) NN acting as a feedforward predictive machine. Under the sufficient constraints on control parameters, the closed-loop system achieves semiglobal practical exponential stability, such that an accurate NN approximation is guaranteed in a local region along recurrent reference trajectories. Compared with the existing NNLC methods, the novelties of the proposed method include: 1) the implementation of an adaptive NN control to guarantee plant states being recurrent is not needed, since recurrent reference signals rather than plant states are utilized as NN inputs, which greatly simplifies the analysis and synthesis of the NNLC and 2) the domain of NN approximation can be determined a priori by the given reference signals, which leads to an easy construction of the RBF-NNs. Simulation results have verified the effectiveness of this approach.
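
The feedback-feedforward split this abstract describes can be sketched schematically. The PD gains, RBF centers, and width below are illustrative assumptions, not the authors' tuned design, and the adaptive update law for the weights is omitted:

```python
import numpy as np

# Schematic of the hybrid structure: a PD feedback servo plus an RBF-NN
# feedforward term driven by the reference signal (not the plant state).
def rbf_features(r, centers, width=0.5):
    """Gaussian radial-basis features of the reference signal r."""
    return np.exp(-((r - centers) ** 2) / (2 * width ** 2))

centers = np.linspace(-1.0, 1.0, 9)   # RBF centers spanning the reference range
W = np.zeros_like(centers)            # feedforward weights (adapted online)

def control(e, e_dot, r, kp=10.0, kd=2.0):
    """PD feedback plus RBF-NN feedforward; W is adapted elsewhere."""
    u_fb = kp * e + kd * e_dot               # feedback: corrects tracking error
    u_ff = W @ rbf_features(r, centers)      # feedforward: predicts needed input
    return u_fb + u_ff
```

Using the reference signal rather than plant states as the NN input is the point the abstract highlights: the approximation domain is known a priori from the given reference trajectories.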

  19. A Telescopic Binary Learning Machine for Training Neural Networks.

    Science.gov (United States)

    Brunato, Mauro; Battiti, Roberto

    2017-03-01

    This paper proposes a new algorithm based on multiscale stochastic local search with binary representation for training neural networks [binary learning machine (BLM)]. We study the effects of neighborhood evaluation strategies, the effect of the number of bits per weight and that of the maximum weight range used for mapping binary strings to real values. Following this preliminary investigation, we propose a telescopic multiscale version of local search, where the number of bits is increased in an adaptive manner, leading to a faster search and to local minima of better quality. An analysis related to adapting the number of bits in a dynamic way is presented. The control on the number of bits, which happens in a natural manner in the proposed method, is effective to increase the generalization performance. The learning dynamics are discussed and validated on a highly nonlinear artificial problem and on real-world tasks in many application domains; BLM is finally applied to a problem requiring either feedforward or recurrent architectures for feedback control.
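
The bits-per-weight mapping that this paper studies can be illustrated with a small sketch. The bit ordering (most significant bit first) and the symmetric weight range are assumptions for illustration:

```python
# Sketch of mapping a binary string to a real-valued weight, the encoding a
# binary learning machine searches over. Bit ordering (most significant bit
# first) and the symmetric range are assumptions for illustration.
def bits_to_weight(bits, w_max=5.0):
    """Map a bit list to a weight in [-w_max, +w_max] over 2**len(bits) levels."""
    n = len(bits)
    value = sum(b << i for i, b in enumerate(reversed(bits)))   # unsigned integer
    return -w_max + 2.0 * w_max * value / (2 ** n - 1)

# The telescopic scheme refines the same range with more bits per weight:
w_coarse = bits_to_weight([1, 0])        # one of 2**2 = 4 levels
w_fine = bits_to_weight([1, 0, 0, 1])    # one of 2**4 = 16 levels
```

Increasing the bit count keeps every previously reachable weight value available while adding finer levels between them, which is what makes the adaptive multiscale search possible.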

  20. Uncovering the neural mechanisms underlying learning from tests.

    Directory of Open Access Journals (Sweden)

    Xiaonan L Liu

Full Text Available People learn better when re-study opportunities are replaced with tests. While researchers have begun to speculate on why testing is superior to study, few studies have directly examined the neural underpinnings of this effect. In this fMRI study, participants engaged in a study phase to learn arbitrary word pairs, followed by a cued recall test (recall the second half of a pair when cued with its first word), re-study of each pair, and finally another cycle of cued recall tests. Brain activation patterns during the first test (recall of the studied pairs) predict performance on the second test. Importantly, while subsequent memory analyses of encoding trials also predict later accuracy, the brain regions involved in predicting later memory success are more extensive for activity during retrieval (testing) than during encoding (study). Those additional regions that predict subsequent memory based on their activation at test but not at encoding may be key to understanding the basis of the testing effect.

  1. Memory and learning in a class of neural network models

    International Nuclear Information System (INIS)

    Wallace, D.J.

    1986-01-01

The author discusses memory and learning properties of the neural network model now identified with Hopfield's work. The model, how it attempts to abstract some key features of the nervous system, and the sense in which learning and memory are identified in the model are described. A brief report is presented on the important role of phase transitions in the model and their implications for memory capacity. The results of numerical simulations obtained using the ICL Distributed Array Processors at Edinburgh are presented. A summary is presented on how the fraction of images which are perfectly stored depends on the number of nodes and the number of nominal images which one attempts to store using the prescription in Hopfield's paper. Results are presented on the second phase transition in the model, which corresponds to almost total loss of storage capacity as the number of nominal images is increased. Results are given on the performance of a new iterative algorithm for exact storage of up to N images in an N-node model.
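
The storage prescription from Hopfield's paper mentioned above (the Hebbian outer-product rule) can be sketched in a few lines. Network size, pattern count, and the noisy cue below are arbitrary choices for illustration:

```python
import numpy as np

# Hopfield-style storage and recall using the Hebbian outer-product
# prescription from Hopfield's paper.
rng = np.random.default_rng(0)
N, P = 100, 5                                  # nodes and nominal images
patterns = rng.choice([-1, 1], size=(P, N))

# W_ij = (1/N) * sum_mu x_i^mu x_j^mu, with no self-coupling
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    """Iterate sign-threshold updates; the state settles into an attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1                  # break ties consistently
    return state

cue = patterns[0].copy()
cue[:10] *= -1                                 # corrupt 10 of 100 bits
overlap = np.mean(recall(cue) == patterns[0])  # fraction of recovered bits
```

With only 5 nominal images in a 100-node network the load is far below the capacity phase transition the abstract describes, so the corrupted cue settles back onto the stored image.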

  2. Convolutional neural network with transfer learning for rice type classification

    Science.gov (United States)

    Patel, Vaibhav Amit; Joshi, Manjunath V.

    2018-04-01

Presently, rice type is identified manually by humans, which is time-consuming and error-prone. Therefore, there is a need to do this by machine, which makes it faster and more accurate. This paper proposes a deep learning based method for classification of rice types. We propose two methods to classify the rice types. In the first method, we train a deep convolutional neural network (CNN) using the given segmented rice images. In the second method, we train a combination of a pretrained VGG16 network and the proposed method, using transfer learning, in which the weights of a pretrained network are used to achieve better accuracy. Our approach can also be used for classification of rice grain as broken or fine. We train a 5-class model for classifying rice types using 4000 training images and another 2-class model for the classification of broken and normal rice using 1600 training images. We observe that despite having distinct rice images, our architecture, pretrained on ImageNet data, boosts classification accuracy significantly.
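
The transfer-learning idea here (reuse pretrained weights, train only a new classifier head) can be illustrated with a toy numpy sketch. The sizes and the "pretrained" weights below are stand-ins, not the paper's VGG16 architecture:

```python
import numpy as np

# Toy illustration of transfer learning: "pretrained" feature weights are
# frozen and only a new classifier head is trained on the target task.
rng = np.random.default_rng(1)
W_feat = np.abs(rng.normal(size=(8, 4)))   # stands in for pretrained layers (frozen)
W_head = np.zeros((4, 2))                  # new head for a 2-class task (trained)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(x, y_onehot, lr=0.1):
    """One gradient step on the head only; the feature extractor never changes."""
    global W_head
    h = np.maximum(0.0, x @ W_feat)            # frozen feature activations
    p = softmax(h @ W_head)
    W_head -= lr * np.outer(h, p - y_onehot)   # cross-entropy gradient on the head
    return p

x = np.ones(8)                 # dummy input standing in for an image's features
y = np.array([1.0, 0.0])       # one-hot label for class 0
for _ in range(20):
    p = train_step(x, y)
```

Freezing the feature extractor is what lets a small target dataset (here, a few thousand rice images) benefit from representations learned on a much larger one like ImageNet.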

  3. Biosignals learning and synthesis using deep neural networks.

    Science.gov (United States)

    Belo, David; Rodrigues, João; Vaz, João R; Pezarat-Correia, Pedro; Gamboa, Hugo

    2017-09-25

Modeling physiological signals is a complex task, both for understanding and for synthesizing biomedical signals. We propose a deep neural network model that learns and synthesizes biosignals, validated by the morphological equivalence to the original ones. This research could lead to the creation of novel algorithms for signal reconstruction in heavily noisy data and for source detection in the biomedical engineering field. The present work explores gated recurrent units (GRU) in the training of respiration (RESP), electromyogram (EMG) and electrocardiogram (ECG) signals. Each signal is pre-processed, segmented and quantized into a specific number of classes corresponding to the amplitude of each sample, and fed to the model, which is composed of an embedding matrix, three GRU blocks and a softmax function. This network is trained by adjusting its internal parameters, acquiring a representation of the abstract notion of the next value based on the previous ones. The simulated signal is generated by forecasting a random value and re-feeding the output to the model. The resulting generated signals are similar in morphological expression to the originals. During the learning process, after a set of iterations, the model starts to grasp the basic morphological characteristics of the signal and later its cyclic characteristics. After training, the models' predictions are closer to the signals that trained them, especially for RESP and ECG. This synthesis mechanism has shown relevant results that inspire its use to characterize signals from other physiological sources.
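
The amplitude quantization step described above (each sample mapped to one of a fixed number of classes) can be sketched as follows. The class count and the sine stand-in signal are illustrative assumptions:

```python
import numpy as np

# Amplitude quantization: every sample becomes one of n_classes discrete
# symbols, which an autoregressive model like a GRU can then predict.
def quantize(signal, n_classes=64):
    """Map samples to the nearest of n_classes evenly spaced amplitude levels."""
    lo, hi = signal.min(), signal.max()
    idx = np.floor((signal - lo) / (hi - lo) * (n_classes - 1) + 0.5)
    return np.clip(idx.astype(int), 0, n_classes - 1)

def dequantize(idx, lo, hi, n_classes=64):
    """Invert the mapping back to amplitudes (up to quantization error)."""
    return lo + idx / (n_classes - 1) * (hi - lo)

t = np.linspace(0, 1, 500)
ecg_like = np.sin(2 * np.pi * 3 * t)      # stand-in for a real ECG trace
codes = quantize(ecg_like)                # integer symbols fed to the model
recon = dequantize(codes, ecg_like.min(), ecg_like.max())
```

Turning amplitudes into discrete classes is what allows the network to end in a softmax over the next sample, the same trick used in character-level language models.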

  4. Neuromorphic adaptive plastic scalable electronics: analog learning systems.

    Science.gov (United States)

    Srinivasa, Narayan; Cruz-Albrecht, Jose

    2012-01-01

Decades of research to build programmable intelligent machines have demonstrated limited utility in complex, real-world environments. Comparing their performance with biological systems, these machines are less efficient by a factor of one million to one billion in complex, real-world environments. The Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program is a multifaceted Defense Advanced Research Projects Agency (DARPA) project that seeks to break the programmable machine paradigm and define a new path for creating useful, intelligent machines. Since real-world systems exhibit infinite combinatorial complexity, electronic neuromorphic machine technology would be preferable in a host of applications, but useful and practical implementations still do not exist. HRL Laboratories LLC has embarked on addressing these challenges, and, in this article, we provide an overview of our project and progress made thus far.

  5. Learning-Dependent Plasticity of the Barrel Cortex Is Impaired by Restricting GABA-Ergic Transmission.

    Science.gov (United States)

    Posluszny, Anna; Liguz-Lecznar, Monika; Turzynska, Danuta; Zakrzewska, Renata; Bielecki, Maksymilian; Kossut, Malgorzata

    2015-01-01

    Experience-induced plastic changes in the cerebral cortex are accompanied by alterations in excitatory and inhibitory transmission. Increased excitatory drive, necessary for plasticity, precedes the occurrence of plastic change, while decreased inhibitory signaling often facilitates plasticity. However, an increase of inhibitory interactions was noted in some instances of experience-dependent changes. We previously reported an increase in the number of inhibitory markers in the barrel cortex of mice after fear conditioning engaging vibrissae, observed concurrently with enlargement of the cortical representational area of the row of vibrissae receiving conditioned stimulus (CS). We also observed that an increase of GABA level accompanied the conditioning. Here, to find whether unaltered GABAergic signaling is necessary for learning-dependent rewiring in the murine barrel cortex, we locally decreased GABA production in the barrel cortex or reduced transmission through GABAA receptors (GABAARs) at the time of the conditioning. Injections of 3-mercaptopropionic acid (3-MPA), an inhibitor of glutamic acid decarboxylase (GAD), into the barrel cortex prevented learning-induced enlargement of the conditioned vibrissae representation. A similar effect was observed after injection of gabazine, an antagonist of GABAARs. At the behavioral level, consistent conditioned response (cessation of head movements in response to CS) was impaired. These results show that appropriate functioning of the GABAergic system is required for both manifestation of functional cortical representation plasticity and for the development of a conditioned response.

  6. A Multiple-Plasticity Spiking Neural Network Embedded in a Closed-Loop Control System to Model Cerebellar Pathologies.

    Science.gov (United States)

    Geminiani, Alice; Casellato, Claudia; Antonietti, Alberto; D'Angelo, Egidio; Pedrocchi, Alessandra

    2018-06-01

The cerebellum plays a crucial role in sensorimotor control, and cerebellar disorders compromise adaptation and learning of motor responses. However, the link between alterations at the network level and cerebellar dysfunction is still unclear. In principle, this understanding would benefit from the development of an artificial system embedding the salient neuronal and plastic properties of the cerebellum and operating in closed loop. To this aim, we have exploited a realistic spiking computational model of the cerebellum to analyze the network correlates of cerebellar impairment. The model was modified to reproduce three different damages of the cerebellar cortex: (i) a loss of the main output neurons (Purkinje Cells), (ii) a lesion to the main cerebellar afferents (Mossy Fibers), and (iii) a damage to a major mechanism of synaptic plasticity (Long Term Depression). The modified network models were challenged with an Eye-Blink Classical Conditioning test, a standard learning paradigm used to evaluate cerebellar impairment, in which the outcome was compared to reference results obtained in human or animal experiments. In all cases, the model reproduced the partial and delayed conditioning typical of the pathologies, indicating that an intact cerebellar cortex functionality is required to accelerate learning by transferring acquired information to the cerebellar nuclei. Interestingly, depending on the type of lesion, the redistribution of synaptic plasticity and response timing varied greatly, generating specific adaptation patterns. Thus, not only does the present work extend the generalization capabilities of the cerebellar spiking model to pathological cases, but it also predicts how changes at the neuronal level are distributed across the network, making it usable to infer cerebellar circuit alterations occurring in cerebellar pathologies.

  7. Cognitive and Neural Plasticity in Older Adults’ Prospective Memory Following Training with the Virtual Week Computer Game

    Directory of Open Access Journals (Sweden)

    Nathan S Rose

    2015-10-01

Full Text Available Prospective memory (PM) – the ability to remember and successfully execute our intentions and planned activities – is critical for functional independence and declines with age, yet few studies have attempted to train PM in older adults. We developed a PM training program using the Virtual Week computer game. Trained participants played the game in twelve 1-hour sessions over one month. Measures of neuropsychological functions, lab-based PM, event-related potentials (ERPs) during performance on a lab-based PM task, instrumental activities of daily living, and real-world PM were assessed before and after training. Performance was compared to both no-contact and active (music training) control groups. PM on the Virtual Week game dramatically improved following training relative to controls, suggesting PM plasticity is preserved in older adults. Relative to control participants, training did not produce reliable transfer to laboratory-based tasks, but was associated with a reduction of an ERP component (sustained negativity over occipito-parietal cortex) associated with processing PM cues, indicative of more automatic PM retrieval. Most importantly, training produced far transfer to real-world outcomes including improvements in performance on real-world PM and activities of daily living. Real-world gains were not observed in either control group. Our findings demonstrate that short-term training with the Virtual Week game produces cognitive and neural plasticity that may result in real-world benefits to supporting functional independence in older adulthood.

  8. Training the brain: practical applications of neural plasticity from the intersection of cognitive neuroscience, developmental psychology, and prevention science.

    Science.gov (United States)

    Bryck, Richard L; Fisher, Philip A

    2012-01-01

    Prior researchers have shown that the brain has a remarkable ability for adapting to environmental changes. The positive effects of such neural plasticity include enhanced functioning in specific cognitive domains and shifts in cortical representation following naturally occurring cases of sensory deprivation; however, maladaptive changes in brain function and development owing to early developmental adversity and stress have also been well documented. Researchers examining enriched rearing environments in animals have revealed the potential for inducing positive brain plasticity effects and have helped to popularize methods for training the brain to reverse early brain deficits or to boost normal cognitive functioning. In this article, two classes of empirically based methods of brain training in children are reviewed and critiqued: laboratory-based, mental process training paradigms and ecological interventions based upon neurocognitive conceptual models. Given the susceptibility of executive function disruption, special attention is paid to training programs that emphasize executive function enhancement. In addition, a third approach to brain training, aimed at tapping into compensatory processes, is postulated. Study results showing the effectiveness of this strategy in the field of neurorehabilitation and in terms of naturally occurring compensatory processing in human aging lend credence to the potential of this approach. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  9. Cognitive and neural plasticity in older adults' prospective memory following training with the Virtual Week computer game.

    Science.gov (United States)

    Rose, Nathan S; Rendell, Peter G; Hering, Alexandra; Kliegel, Matthias; Bidelman, Gavin M; Craik, Fergus I M

    2015-01-01

Prospective memory (PM) - the ability to remember and successfully execute our intentions and planned activities - is critical for functional independence and declines with age, yet few studies have attempted to train PM in older adults. We developed a PM training program using the Virtual Week computer game. Trained participants played the game in 12 1-h sessions over 1 month. Measures of neuropsychological functions, lab-based PM, event-related potentials (ERPs) during performance on a lab-based PM task, instrumental activities of daily living, and real-world PM were assessed before and after training. Performance was compared to both no-contact and active (music training) control groups. PM on the Virtual Week game dramatically improved following training relative to controls, suggesting PM plasticity is preserved in older adults. Relative to control participants, training did not produce reliable transfer to laboratory-based tasks, but was associated with a reduction of an ERP component (sustained negativity over occipito-parietal cortex) associated with processing PM cues, indicative of more automatic PM retrieval. Most importantly, training produced far transfer to real-world outcomes including improvements in performance on real-world PM and activities of daily living. Real-world gains were not observed in either control group. Our findings demonstrate that short-term training with the Virtual Week game produces cognitive and neural plasticity that may result in real-world benefits to supporting functional independence in older adulthood.

  10. Learning-dependent plasticity with and without training in the human brain.

    Science.gov (United States)

    Zhang, Jiaxiang; Kourtzi, Zoe

    2010-07-27

Long-term experience through development and evolution and shorter-term training in adulthood have both been suggested to contribute to the optimization of visual functions that mediate our ability to interpret complex scenes. However, the brain plasticity mechanisms that mediate the detection of objects in cluttered scenes remain largely unknown. Here, we combine behavioral and functional MRI (fMRI) measurements to investigate the human-brain mechanisms that mediate our ability to learn statistical regularities and detect targets in clutter. We show two different routes to visual learning in clutter with discrete brain plasticity signatures. Specifically, opportunistic learning of regularities typical in natural contours (i.e., collinearity) can occur simply through frequent exposure, generalize across untrained stimulus features, and shape processing in occipitotemporal regions implicated in the representation of global forms. In contrast, learning to integrate discontinuities (i.e., elements orthogonal to contour paths) requires task-specific training (bootstrap-based learning), is stimulus-dependent, and enhances processing in intraparietal regions implicated in attention-gated learning. We propose that long-term experience with statistical regularities may facilitate opportunistic learning of collinear contours, whereas learning to integrate discontinuities entails bootstrap-based training for the detection of contours in clutter. These findings provide insights into how long-term experience and short-term training interact to shape the optimization of visual recognition processes.

  11. Algebraic and adaptive learning in neural control systems

    Science.gov (United States)

    Ferrari, Silvia

    A systematic approach is developed for designing adaptive and reconfigurable nonlinear control systems that are applicable to plants modeled by ordinary differential equations. The nonlinear controller comprising a network of neural networks is taught using a two-phase learning procedure realized through novel techniques for initialization, on-line training, and adaptive critic design. A critical observation is that the gradients of the functions defined by the neural networks must equal corresponding linear gain matrices at chosen operating points. On-line training is based on a dual heuristic adaptive critic architecture that improves control for large, coupled motions by accounting for actual plant dynamics and nonlinear effects. An action network computes the optimal control law; a critic network predicts the derivative of the cost-to-go with respect to the state. Both networks are algebraically initialized based on prior knowledge of satisfactory pointwise linear controllers and continue to adapt on line during full-scale simulations of the plant. On-line training takes place sequentially over discrete periods of time and involves several numerical procedures. A backpropagating algorithm called Resilient Backpropagation is modified and successfully implemented to meet these objectives, without excessive computational expense. This adaptive controller is as conservative as the linear designs and as effective as a global nonlinear controller. The method is successfully implemented for the full-envelope control of a six-degree-of-freedom aircraft simulation. The results show that the on-line adaptation brings about improved performance with respect to the initialization phase during aircraft maneuvers that involve large-angle and coupled dynamics, and parameter variations.

  12. Short-term plasticity as a neural mechanism supporting memory and attentional functions

    OpenAIRE

    Jääskeläinen, Iiro P.; Ahveninen, Jyrki; Andermann, Mark L.; Belliveau, John W.; Raij, Tommi; Sams, Mikko

    2011-01-01

    Based on behavioral studies, several relatively distinct perceptual and cognitive functions have been defined in cognitive psychology such as sensory memory, short-term memory, and selective attention. Here, we review evidence suggesting that some of these functions may be supported by shared underlying neuronal mechanisms. Specifically, we present, based on an integrative review of the literature, a hypothetical model wherein short-term plasticity, in the form of transient center-excitatory ...

  13. Cerebellar plasticity and motor learning deficits in a copy-number variation mouse model of autism.

    Science.gov (United States)

    Piochon, Claire; Kloth, Alexander D; Grasselli, Giorgio; Titley, Heather K; Nakayama, Hisako; Hashimoto, Kouichi; Wan, Vivian; Simmons, Dana H; Eissa, Tahra; Nakatani, Jin; Cherskov, Adriana; Miyazaki, Taisuke; Watanabe, Masahiko; Takumi, Toru; Kano, Masanobu; Wang, Samuel S-H; Hansel, Christian

    2014-11-24

    A common feature of autism spectrum disorder (ASD) is the impairment of motor control and learning, occurring in a majority of children with autism, consistent with perturbation in cerebellar function. Here we report alterations in motor behaviour and cerebellar synaptic plasticity in a mouse model (patDp/+) for the human 15q11-13 duplication, one of the most frequently observed genetic aberrations in autism. These mice show ASD-resembling social behaviour deficits. We find that in patDp/+ mice delay eyeblink conditioning--a form of cerebellum-dependent motor learning--is impaired, and observe deregulation of a putative cellular mechanism for motor learning, long-term depression (LTD) at parallel fibre-Purkinje cell synapses. Moreover, developmental elimination of surplus climbing fibres--a model for activity-dependent synaptic pruning--is impaired. These findings point to deficits in synaptic plasticity and pruning as potential causes for motor problems and abnormal circuit development in autism.

  14. Neuromodulated Spike-Timing-Dependent Plasticity and Theory of Three-Factor Learning Rules

    Directory of Open Access Journals (Sweden)

Wulfram Gerstner

    2016-01-01

Full Text Available Classical Hebbian learning puts the emphasis on joint pre- and postsynaptic activity, but neglects the potential role of neuromodulators. Since neuromodulators convey information about novelty or reward, the influence of neuromodulators on synaptic plasticity is useful not just for action learning in classical conditioning, but also to decide 'when' to create new memories in response to a flow of sensory stimuli. In this review, we focus on timing requirements for pre- and postsynaptic activity in conjunction with one or several phasic neuromodulatory signals. While the emphasis of the text is on conceptual models and mathematical theories, we also discuss some experimental evidence for neuromodulation of Spike-Timing-Dependent Plasticity. We highlight the importance of synaptic mechanisms in bridging the temporal gap between sensory stimulation and neuromodulatory signals, and develop a framework for a class of neo-Hebbian three-factor learning rules that depend on presynaptic activity, postsynaptic variables as well as the influence of neuromodulators.
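
A minimal numerical sketch of such a three-factor rule (all constants and the simple Euler updates are illustrative assumptions): a pre/post coincidence leaves a decaying eligibility trace, and the weight changes only when a phasic neuromodulatory signal arrives while the trace is still active.

```python
import numpy as np

# Neo-Hebbian three-factor rule sketch: Hebbian coincidence sets an
# eligibility trace e; the neuromodulator m gates the actual weight change.
def simulate(coincidences, modulator, tau_e=200.0, eta=0.05, dt=1.0):
    """coincidences[t] = 1 on joint pre/post activity; modulator[t] = third factor."""
    w, e = 0.0, 0.0
    for hebb, m in zip(coincidences, modulator):
        e += dt * (-e / tau_e) + hebb   # eligibility trace: Hebbian flag that decays
        w += eta * m * e * dt           # neuromodulator gates the weight change
    return w

T = 500
coincidences = np.zeros(T)
coincidences[100] = 1.0                 # Hebbian coincidence at t = 100 ms
reward = np.zeros(T)
reward[300] = 1.0                       # phasic signal arrives 200 ms later
w_rewarded = simulate(coincidences, reward)
w_unrewarded = simulate(coincidences, np.zeros(T))
```

The trace is exactly the synaptic mechanism the review highlights for bridging the temporal gap: without the delayed reward signal, the coincidence alone leaves the weight unchanged.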

  15. Neural Plasticity following Abacus Training in Humans: A Review and Future Directions

    Directory of Open Access Journals (Sweden)

    Yongxin Li

    2016-01-01

    Full Text Available The human brain has an enormous capacity to adapt to a broad variety of environmental demands. Previous studies in the field of abacus training have shown that this training can induce specific changes in the brain. However, the neural mechanism underlying these changes remains elusive. Here, we reviewed the behavioral and imaging findings of comparisons between abacus experts and average control subjects and focused on changes in activation patterns and changes in brain structure. Finally, we noted the limitations and the future directions of this field. We concluded that although current studies have provided us with information about the mechanisms of abacus training, more research on abacus training is needed to understand its neural impact.

  16. Learning to see again: biological constraints on cortical plasticity and the implications for sight restoration technologies

    Science.gov (United States)

    Beyeler, Michael; Rokem, Ariel; Boynton, Geoffrey M.; Fine, Ione

    2017-10-01

    The ‘bionic eye’—so long a dream of the future—is finally becoming a reality with retinal prostheses available to patients in both the US and Europe. However, clinical experience with these implants has made it apparent that the visual information provided by these devices differs substantially from normal sight. Consequently, the ability of patients to learn to make use of this abnormal retinal input plays a critical role in whether or not some functional vision is successfully regained. The goal of the present review is to summarize the vast basic science literature on developmental and adult cortical plasticity with an emphasis on how this literature might relate to the field of prosthetic vision. We begin with describing the distortion and information loss likely to be experienced by visual prosthesis users. We then define cortical plasticity and perceptual learning, and describe what is known, and what is unknown, about visual plasticity across the hierarchy of brain regions involved in visual processing, and across different stages of life. We close by discussing what is known about brain plasticity in sight restoration patients and discuss biological mechanisms that might eventually be harnessed to improve visual learning in these patients.

  17. Spatiotemporal discrimination in neural networks with short-term synaptic plasticity

    Science.gov (United States)

    Shlaer, Benjamin; Miller, Paul

    2015-03-01

Cells in recurrently connected neural networks exhibit bistability, which allows for stimulus information to persist in a circuit even after stimulus offset, i.e. short-term memory. However, such a system does not have enough hysteresis to encode temporal information about the stimuli. The biophysically described phenomenon of synaptic depression decreases synaptic transmission strengths due to increased presynaptic activity. This short-term reduction in synaptic strengths can destabilize attractor states in excitatory recurrent neural networks, causing the network to move along stimulus-dependent dynamical trajectories. Such a network can successfully separate amplitudes and durations of stimuli from the number of successive stimuli ("Stimulus number, duration and intensity encoding in randomly connected attractor networks with synaptic depression," Front. Comput. Neurosci. 7:59), and so provides a strong candidate network for the encoding of spatiotemporal information. Here we explicitly demonstrate the capability of a recurrent neural network with short-term synaptic depression to discriminate between the temporal sequences in which spatial stimuli are presented.

  18. Neural markers of neuropathic pain associated with maladaptive plasticity in spinal cord injury.

    Science.gov (United States)

    Pascoal-Faria, Paula; Yalcin, Nilufer; Fregni, Felipe

    2015-04-01

Given the potential use of neural markers for the development of novel treatments in spinal cord pain, we aimed to characterize the most effective neural markers of neuropathic pain following spinal cord injury (SCI). A systematic PubMed review was conducted, compiling studies that were published prior to April 2014 that examined neural markers associated with neuropathic pain after SCI using electrophysiological and neuroimaging techniques. We identified 6 studies: four using electroencephalogram (EEG); one using magnetic resonance imaging (MRI) and FDG-PET (positron emission tomography); and one using MR spectroscopy. The EEG recordings suggested a reduction in alpha EEG peak frequency activity in the frontal regions of SCI patients with neuropathic pain. The MRI scans showed volume loss, primarily in the gray matter of the left dorsolateral prefrontal cortex, and by FDG-PET, hypometabolism in the medial prefrontal cortex was observed in SCI patients with neuropathic pain compared with healthy subjects. In the MR spectroscopy findings, the presence of pain was associated with changes in the prefrontal cortex and anterior cingulate cortex. When analyzed together, the results of these studies seem to point to a common marker of pain in SCI characterized by decreased cortical activity in frontal areas and possibly increased subcortical activity. These results may contribute to planning further mechanistic studies to better understand the mechanisms by which neuropathic pain is modulated in patients with SCI, as well as clinical studies investigating which patients respond best to treatment. © 2014 World Institute of Pain.

  19. Supramodal processing optimizes visual perceptual learning and plasticity.

    Science.gov (United States)

    Zilber, Nicolas; Ciuciu, Philippe; Gramfort, Alexandre; Azizi, Leila; van Wassenhove, Virginie

    2014-06-01

    Multisensory interactions are ubiquitous in cortex, and it has been suggested that sensory cortices may be supramodal, i.e., capable of functional selectivity irrespective of the sensory modality of their inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of two intermixed populations (red or green) of random-dot kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV), or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics - i.e., coherence - of the visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn, although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First, and common to all three groups, vlPFC showed selectivity to the learned coherence levels, whereas selectivity in visual motion area hMT+ was seen only for the AV group. Second, and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performance; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+, possibly mediated by temporal cortices in the AV and AVn groups. Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004), in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory

  20. Hybrid Spintronic-CMOS Spiking Neural Network with On-Chip Learning: Devices, Circuits, and Systems

    Science.gov (United States)

    Sengupta, Abhronil; Banerjee, Aparajita; Roy, Kaushik

    2016-12-01

    Over the past decade, spiking neural networks (SNNs) have emerged as one of the popular architectures to emulate the brain. In SNNs, information is temporally encoded and communication between neurons is accomplished by means of spikes. In such networks, spike-timing-dependent plasticity mechanisms require the online programming of synapses based on the temporal information of spikes transmitted by spiking neurons. In this work, we propose a spintronic synapse with decoupled spike-transmission and programming-current paths. The spintronic synapse consists of a ferromagnet-heavy-metal heterostructure in which the programming current through the heavy metal generates spin-orbit torque to modulate the device conductance. Low programming energy and fast programming times demonstrate the efficacy of the proposed device as a nanoelectronic synapse. We perform a simulation study based on an experimentally benchmarked device-simulation framework to demonstrate the interfacing of such spintronic synapses with CMOS neurons and learning circuits operating in the transistor subthreshold region to form a network of spiking neurons that can be utilized for pattern-recognition problems.
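    The spike-timing-dependent rule such on-chip learning circuits implement can be written compactly. The kernel below is the generic textbook pair-based form with illustrative constants, not the device-specific rule from this work.

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Weight change for a single pre/post spike pair separated by
    delta_t = t_post - t_pre (seconds): pre-before-post (delta_t > 0)
    potentiates, post-before-pre depresses, both decaying with |delta_t|."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)
```

    In a hardware realization, each evaluation of this kernel becomes a programming-current pulse whose amplitude or duration encodes the computed conductance change.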

  1. Neural mechanisms of reinforcement learning in unmedicated patients with major depressive disorder.

    Science.gov (United States)

    Rothkirch, Marcus; Tonn, Jonas; Köhler, Stephan; Sterzer, Philipp

    2017-04-01

    According to current concepts, major depressive disorder is strongly related to dysfunctional neural processing of motivational information, entailing impairments in reinforcement learning. While computational modelling can reveal the precise nature of neural learning signals, it has not yet been used to study learning-related neural dysfunctions in unmedicated patients with major depressive disorder. We thus aimed to compare the neural coding of reward and punishment prediction errors, as indicators of neural learning-related processes, between unmedicated patients with major depressive disorder and healthy participants. To this end, a group of unmedicated patients with major depressive disorder (n = 28) and a group of age- and sex-matched healthy control participants (n = 30) completed an instrumental learning task involving monetary gains and losses during functional magnetic resonance imaging. The two groups did not differ in their learning performance. Patients and control participants showed the same level of prediction error-related activity in the ventral striatum and the anterior insula. In contrast, neural coding of reward prediction errors in the medial orbitofrontal cortex was reduced in patients. Moreover, neural reward prediction error signals in the medial orbitofrontal cortex and ventral striatum showed negative correlations with anhedonia severity. Using a standard instrumental learning paradigm, we found no evidence for an overall impairment of reinforcement learning in medication-free patients with major depressive disorder. Importantly, however, the attenuated neural coding of reward in the medial orbitofrontal cortex, and the relation between anhedonia and reduced reward prediction error signalling in the medial orbitofrontal cortex and ventral striatum, likely reflect an impairment in experiencing pleasure from rewarding events as a key mechanism of anhedonia in major depressive disorder. © The Author (2017). Published by Oxford
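    The reward prediction error analysed in such model-based fMRI studies is, in its simplest form, a delta-rule quantity. The sketch below is a generic Rescorla-Wagner-style illustration (learning rate and reward schedule are made up), not the authors' actual computational model.

```python
def delta_rule(rewards, alpha=0.1):
    """Track the expected value v of an action across trials; the
    prediction error delta = r - v (the quantity whose BOLD correlates
    such studies examine) drives the update v += alpha * delta."""
    v = 0.0
    deltas = []
    for r in rewards:
        delta = r - v       # prediction error on this trial
        deltas.append(delta)
        v += alpha * delta  # value update
    return v, deltas
```

    Under a constant reward, the prediction error decays geometrically as the value estimate converges, which is why prediction-error-related activity is strongest early in learning.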

  2. Adaptive Learning Rule for Hardware-based Deep Neural Networks Using Electronic Synapse Devices

    OpenAIRE

    Lim, Suhwan; Bae, Jong-Ho; Eum, Jai-Ho; Lee, Sungtae; Kim, Chul-Heung; Kwon, Dongseok; Park, Byung-Gook; Lee, Jong-Ho

    2017-01-01

    In this paper, we propose a learning rule based on the back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network (HW-DNN) using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward propagation, backward propagation, and weight updates in hardware, facilitates the implementation of power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron net...
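    A core constraint in such hardware is that each synapse can take only a finite set of bounded conductance values. One minimal way to model this in simulation (an illustrative assumption, not the paper's exact rule) is to clip the gradient step and quantize the result to the nearest allowed level:

```python
import numpy as np

def quantized_update(w, grad, g_min=0.0, g_max=1.0, n_levels=32, lr=0.1):
    """Gradient step followed by snapping each weight to the nearest of
    n_levels equally spaced conductance states, mimicking a device with
    discrete, bounded conductance."""
    levels = np.linspace(g_min, g_max, n_levels)
    w_new = np.clip(w - lr * grad, g_min, g_max)       # bounded update
    idx = np.abs(w_new[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]                                 # nearest state
```

    With few levels, small gradients round to zero, which is exactly the regime an adaptive learning rule for HW-DNNs has to cope with.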

  3. Plasticity in learning causes immediate and trans-generational changes in allocation of resources.

    Science.gov (United States)

    Snell-Rood, Emilie C; Davidowitz, Goggy; Papaj, Daniel R

    2013-08-01

    Plasticity in the development and expression of behavior may allow organisms to cope with novel and rapidly changing environments. However, plasticity itself may depend on the developmental experiences of an individual. For instance, individuals reared in complex, enriched environments develop enhanced cognitive abilities as a result of increased synaptic connections and neurogenesis. This suggests that costs associated with behavioral plasticity-in particular, increased investment in "self" at the expense of reproduction-may also be flexible. Using butterflies as a system, this work tests whether allocation of resources changes as a result of experiences in "difficult" environments that require more investment in learning. We contrast allocation of resources among butterflies with experience in environments that vary in the need for learning. Butterflies with experience searching for novel (i.e., red) hosts, or searching in complex non-host environments, allocate more resources (protein and carbohydrate reserves) to their own flight muscle. In addition, butterflies with experience in these more difficult environments allocate more resources per individual offspring (i.e., egg size and/or lipid reserves). This results in a mother's experience having significant effects on the growth of her offspring (i.e., dry mass and wing length). A separate study showed this re-allocation of resources comes at the expense of lifetime fecundity. These results suggest that investment in learning, and associated changes in life history, can be adjusted depending on an individual's current need, and their offspring's future needs, for learning.

  4. RM-SORN: a reward-modulated self-organizing recurrent neural network.

    Science.gov (United States)

    Aswolinskiy, Witali; Pipa, Gordon

    2015-01-01

    Neural plasticity plays an important role in learning and memory. Reward modulation of plasticity offers an explanation for the ability of the brain to adapt its neural activity to achieve a rewarded goal. Here, we define a neural network model that learns through the interaction of intrinsic plasticity (IP) and reward-modulated spike-timing-dependent plasticity (STDP). IP enables the network to explore possible output sequences, while STDP, modulated by reward, reinforces the creation of the rewarded output sequences. The model is tested on tasks for prediction, recall, non-linear computation, pattern recognition, and sequence generation. It achieves performance comparable to networks trained with supervised learning, while using simple, biologically motivated plasticity rules and reward strategies. The results confirm the importance of investigating the interaction of several plasticity rules in the context of reward-modulated learning, and of asking whether reward-modulated self-organization can explain the remarkable capabilities of the brain.
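    The interaction the model exploits - spike-timing updates written into a trace that reward later converts into a weight change - can be sketched as a single update step. This is a generic eligibility-trace form with made-up constants, not the RM-SORN equations themselves.

```python
import math

def rm_stdp_step(e, w, delta_t, reward, a_plus=0.01, a_minus=0.012,
                 tau_stdp=0.02, trace_decay=0.9, lr=1.0):
    """One timestep of reward-modulated STDP: a pre/post pairing
    (delta_t = t_post - t_pre, or None for no pairing) updates an
    eligibility trace e rather than the weight itself; the weight
    only changes when a reward signal arrives."""
    e *= trace_decay
    if delta_t is not None:
        if delta_t > 0:
            e += a_plus * math.exp(-delta_t / tau_stdp)   # potentiating tag
        else:
            e += -a_minus * math.exp(delta_t / tau_stdp)  # depressing tag
    w += lr * reward * e                                  # reward gates learning
    return e, w
```

    Separating the tag from the commit is what lets a delayed, scalar reward selectively reinforce the output sequences that produced it.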

  5. Learning free energy landscapes using artificial neural networks.

    Science.gov (United States)

    Sidky, Hythem; Whitmer, Jonathan K

    2018-03-14

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.
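    The core idea - a small network regressing a free energy surface, with a weight penalty standing in for the Bayesian regularization the authors use - can be illustrated on a toy double-well landscape. Everything below (the landscape, network size, constants) is an illustrative assumption, not the ANN-sampling method itself.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 200)[:, None]
F = (x**2 - 1.0)**2                      # toy double-well "free energy"

# One-hidden-layer network F_hat(x) = tanh(x W1 + b1) W2 + b2, trained by
# gradient descent on mean-squared error with an L2 weight penalty
# (a crude stand-in for Bayesian regularization of the weights).
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)
lr, lam = 0.01, 1e-4

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
loss0 = float(np.mean((pred0 - F)**2))   # error of the untrained net

for _ in range(2000):
    h, pred = forward(x)
    err = pred - F
    gW2 = h.T @ err / len(x) + lam * W2
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)     # backprop through tanh
    gW1 = x.T @ dh / len(x) + lam * W1
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
loss = float(np.mean((pred - F)**2))     # error after training
```

    Because the network's basis functions adapt to the data, it can conform to varied landscape shapes without the fixed kernels or basis sets the abstract criticizes.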

  7. Synaptic Plasticity and Learning Behaviors Mimicked in Single Inorganic Synapses of Pt/HfOx/ZnOx/TiN Memristive System

    Science.gov (United States)

    Wang, Lai-Guo; Zhang, Wei; Chen, Yan; Cao, Yan-Qiang; Li, Ai-Dong; Wu, Di

    2017-01-01

    In this work, a new memristor with a simple Pt/HfOx/ZnOx/TiN structure was fabricated entirely by a combination of thermal atomic layer deposition (TALD) and plasma-enhanced ALD (PEALD). The synaptic plasticity and learning behaviors of the Pt/HfOx/ZnOx/TiN memristive system were investigated in depth. Multilevel resistance states are obtained by varying the programming voltage amplitude during pulse cycling. The device conductance can be continuously increased or decreased from cycle to cycle, with good endurance characteristics up to about 3 × 10³ cycles. Several essential synaptic functions are simultaneously achieved in a single HfOx/ZnOx double-layer device, including nonlinear transmission properties such as long-term plasticity (LTP), short-term plasticity (STP), and spike-timing-dependent plasticity. The transformation from STP to LTP induced by repetitive pulse stimulation is confirmed in the Pt/HfOx/ZnOx/TiN memristive device. Above all, the simple structure of Pt/HfOx/ZnOx/TiN grown by ALD makes it a promising memristor device for applications in artificial neural networks.

  8. Plasticity of cortical excitatory-inhibitory balance.

    Science.gov (United States)

    Froemke, Robert C

    2015-07-08

    Synapses are highly plastic and are modified by changes in patterns of neural activity or sensory experience. Plasticity of cortical excitatory synapses is thought to be important for learning and memory, leading to alterations in sensory representations and cognitive maps. However, these changes must be coordinated across other synapses within local circuits to preserve neural coding schemes and the organization of excitatory and inhibitory inputs, i.e., excitatory-inhibitory balance. Recent studies indicate that inhibitory synapses are also plastic and are controlled directly by a large number of neuromodulators, particularly during episodes of learning. Many modulators transiently alter excitatory-inhibitory balance by decreasing inhibition, and thus disinhibition has emerged as a major mechanism by which neuromodulation might enable long-term synaptic modifications naturally. This review examines the relationships between neuromodulation and synaptic plasticity, focusing on the induction of long-term changes that collectively enhance cortical excitatory-inhibitory balance for improving perception and behavior.

  9. Subcortical plasticity following perceptual learning in a pitch discrimination task.

    Science.gov (United States)

    Carcagno, Samuele; Plack, Christopher J

    2011-02-01

    Practice can lead to dramatic improvements in the discrimination of auditory stimuli. In this study, we investigated changes in the frequency-following response (FFR), a subcortical component of the auditory evoked potentials, after a period of pitch discrimination training. Twenty-seven adult listeners were trained for 10 h on a pitch discrimination task using one of three different complex tone stimuli. One had a static pitch contour, one had a rising pitch contour, and one had a falling pitch contour. Behavioral measures of pitch discrimination and FFRs for all the stimuli were measured before and after the training phase for these participants, as well as for an untrained control group (n = 12). Trained participants showed significant improvements in pitch discrimination compared to the control group for all three trained stimuli. These improvements were partly specific to stimuli with the same pitch modulation (dynamic vs. static) and the same pitch trajectory (rising vs. falling) as the trained stimulus. Also, the robustness of FFR neural phase locking to the sound envelope increased significantly more in trained participants compared to the control group for the static and rising contours, but not for the falling contour. Changes in FFR strength were partly specific to stimuli with the same pitch modulation (dynamic vs. static) as the trained stimulus. Changes in FFR strength, however, were not specific to stimuli with the same pitch trajectory (rising vs. falling) as the trained stimulus. These findings indicate that even relatively low-level processes in the mature auditory system are subject to experience-related change.
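    The "robustness of FFR neural phase locking" quantified here is conceptually close to the classic vector-strength measure. The helper below is that generic measure, offered as an illustration rather than the exact metric the authors computed.

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Phase locking of event times to a periodic stimulus of frequency
    freq (Hz), as the length of the mean resultant phase vector:
    1 = perfect locking, near 0 = no locking."""
    phases = 2.0 * np.pi * freq * np.asarray(spike_times)
    return float(np.abs(np.exp(1j * phases).mean()))
```

    Events locked to the stimulus period give a value near 1, while uniformly jittered events average toward 0, so an increase in this quantity after training indexes more reliable subcortical phase locking.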

  10. Neural and Molecular Features on Charcot-Marie-Tooth Disease Plasticity and Therapy

    Directory of Open Access Journals (Sweden)

    Paula Juárez

    2012-01-01

    Full Text Available In peripheral nervous system disorders, plasticity is related to changes in axon and Schwann cell biology and in synaptic formations and connections, which could also be a focus for therapeutic research. Charcot-Marie-Tooth disease (CMT) represents a large group of inherited peripheral neuropathies that involve both motor and sensory nerves and induce muscular atrophy and weakness. Genetic analysis has identified several pathways and molecular mechanisms involving myelin structure and proper nerve myelination, transcriptional regulation, protein turnover, vesicle trafficking, axonal transport, and mitochondrial dynamics. These pathogenic mechanisms affect the continuous signaling and dialogue between the Schwann cell and the axon, with the loss of myelin and nerve maintenance as the final result; however, some late-onset axonal CMT neuropathies are a consequence of Schwann cell-specific changes that do not affect myelin. Comprehension of the molecular pathways involved in Schwann cell-axonal interactions is likely not only to increase the understanding of nerve biology but also to identify molecular targets and cell pathways for the design of novel therapeutic approaches, not only for inherited neuropathies but also for the most common peripheral neuropathies. These approaches should improve the plasticity of the synaptic connections at the neuromuscular junction and restore cell viability by improving myelin-axon interaction.

  11. Can adult neural stem cells create new brains? Plasticity in the adult mammalian neurogenic niches: realities and expectations in the era of regenerative biology.

    Science.gov (United States)

    Kazanis, Ilias

    2012-02-01

    Since the first experimental reports showing the persistence of neurogenic activity in the adult mammalian brain, this field of neurosciences has expanded significantly. It is now widely accepted that neural stem and precursor cells survive during adulthood and are able to respond to various endogenous and exogenous cues by altering their proliferation and differentiation activity. Nevertheless, the pathway to therapeutic applications still seems to be long. This review attempts to summarize and revisit the available data regarding the plasticity potential of adult neural stem cells and of their normal microenvironment, the neurogenic niche. Recent data have demonstrated that adult neural stem cells retain a high level of pluripotency and that adult neurogenic systems can switch the balance between neurogenesis and gliogenesis and can generate a range of cell types with an efficiency that was not initially expected. Moreover, adult neural stem and precursor cells seem to be able to self-regulate their interaction with the microenvironment and even to contribute to its synthesis, altogether revealing a high level of plasticity potential. The next important step will be to elucidate the factors that limit this plasticity in vivo, and such a restrictive role for the microenvironment is discussed in more details.

  12. How the Blind “See” Braille and the Deaf “Hear” Sign: Lessons from fMRI on the Cross-Modal Plasticity, Integration, and Learning

    Directory of Open Access Journals (Sweden)

    Norihiro Sadato

    2011-10-01

    Full Text Available What does the visual cortex of the blind do during Braille reading? This process involves converting simple tactile information into meaningful patterns that have lexical and semantic properties. The perceptual processing of Braille might be mediated by the somatosensory system, whereas visual letter identity is accomplished within the visual system in sighted people. Recent advances in functional neuroimaging techniques have enabled exploration of the neural substrates of Braille reading (Sadato et al. 1996, 1998, 2002; Cohen et al. 1997, 1999). The primary visual cortex of early-onset blind subjects is functionally relevant to Braille reading, suggesting that the brain shows remarkable plasticity that potentially permits the additional processing of tactile information in the visual cortical areas. Similar cross-modal plasticity is observed with auditory deprivation: sign language activates the auditory cortex of deaf subjects (Neville et al. 1999; Nishimura et al. 1999; Sadato et al. 2004). Cross-modal activation can also be seen in sighted and hearing subjects. For example, tactile discrimination of two-dimensional (2D) shapes (Mah-Jong tiles) activated the visual cortex in expert players (Saito et al. 2006), and lip-reading (visual phonetics) (Sadato et al. 2004) or key-touch reading by pianists (Hasegawa et al. 2004) activates the auditory cortex of hearing subjects. Thus cross-modal plasticity induced by sensory deprivation and cross-modal integration through learning may share neural substrates. To clarify the distribution of these neural substrates and their dynamics during cross-modal association learning over several hours, we conducted audio-visual paired-association learning using delayed-matching-to-sample tasks (Tanabe et al. 2005). Each trial consisted of the successive presentation of a pair of stimuli. Subjects had to find pre-defined audio-visual or visuo-visual pairs in a trial-and-error manner with feedback in

  13. CCR5 is a suppressor for cortical plasticity and hippocampal learning and memory.

    Science.gov (United States)

    Zhou, Miou; Greenhill, Stuart; Huang, Shan; Silva, Tawnie K; Sano, Yoshitake; Wu, Shumin; Cai, Ying; Nagaoka, Yoshiko; Sehgal, Megha; Cai, Denise J; Lee, Yong-Seok; Fox, Kevin; Silva, Alcino J

    2016-12-20

    Although the role of CCR5 in immunity and in HIV infection has been studied widely, its role in neuronal plasticity, learning and memory is not understood. Here, we report that decreasing the function of CCR5 increases MAPK/CREB signaling, long-term potentiation (LTP), and hippocampus-dependent memory in mice, while neuronal CCR5 overexpression caused memory deficits. Decreasing CCR5 function in mouse barrel cortex also resulted in enhanced spike timing dependent plasticity and consequently, dramatically accelerated experience-dependent plasticity. These results suggest that CCR5 is a powerful suppressor for plasticity and memory, and CCR5 over-activation by viral proteins may contribute to HIV-associated cognitive deficits. Consistent with this hypothesis, the HIV V3 peptide caused LTP, signaling and memory deficits that were prevented by Ccr5 knockout or knockdown. Overall, our results demonstrate that CCR5 plays an important role in neuroplasticity, learning and memory, and indicate that CCR5 has a role in the cognitive deficits caused by HIV.

  14. In vivo reactive neural plasticity investigation by means of correlative two photon: electron microscopy

    Science.gov (United States)

    Allegra Mascaro, A. L.; Cesare, P.; Sacconi, L.; Grasselli, G.; Mandolesi, G.; Maco, B.; Knott, G.; Huang, L.; De Paola, V.; Strata, P.; Pavone, F. S.

    2013-02-01

    In the adult nervous system, different populations of neurons exhibit different regenerative behaviors. Although previous work showed that olivocerebellar fibers are capable of axonal regeneration in a suitable environment as a response to injury, we hitherto had no details about the real dynamics of fiber regeneration. We set up a model of singly axotomized climbing fibers (CFs) to investigate their reparative properties in the adult central nervous system (CNS) in vivo. Time-lapse two-photon imaging has been combined with laser nanosurgery to define a temporal pattern of the degenerative events and to follow the structural rearrangement after injury. To characterize the damage and to elucidate the possible formation of new synaptic contacts on the sprouted branches of the lesioned CF, we combined two-photon in vivo imaging with block-face scanning electron microscopy (FIB-SEM). Here we describe the approach followed to characterize this reactive plasticity after injury.

  15. Supervised Learning Using Spike-Timing-Dependent Plasticity of Memristive Synapses.

    Science.gov (United States)

    Nishitani, Yu; Kaneko, Yukihiro; Ueda, Michihito

    2015-12-01

    We propose a supervised learning model that enables error backpropagation for spiking neural network hardware. The method is modeled by modifying an existing model to suit the hardware implementation. An example of a network circuit for the model is also presented. In this circuit, a three-terminal ferroelectric memristor (3T-FeMEM), which is a field-effect transistor with a gate insulator composed of ferroelectric materials, is used as an electric synapse device to store the analog synaptic weight. Our model can be implemented by reflecting the network error in the write voltage of the 3T-FeMEMs and introducing a spike-timing-dependent learning function to the device. The XOR problem was successfully demonstrated as a learning benchmark in numerical simulations that used the circuit properties to estimate learning performance. In principle, the learning time per step of this supervised learning model and circuit is independent of the number of neurons in each layer, promising high-speed and low-power computation in large-scale neural networks.

  16. Parameter diagnostics of phases and phase transition learning by neural networks

    Science.gov (United States)

    Suchsland, Philippe; Wessel, Stefan

    2018-05-01

    We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and for identifying any underlying physical quantities, it is feasible to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusing scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully-connected and convolutional neural networks for the two-dimensional Ising model with extended domain wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusing scheme and convolutional neural networks trained on bare spin configurations to the case of preprocessed samples with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.
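    The kind of classification task these shallow networks solve can be caricatured with a single logistic unit reading out the absolute magnetization - the physical quantity such networks are typically found to rediscover in their weights. Everything below (the synthetic "configurations", lattice size, constants) is a toy assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_configs(n, ordered):
    """Synthetic stand-in for 4x4 Ising spin configurations: 'ordered'
    samples are mostly aligned (either sign), 'disordered' are random."""
    if ordered:
        spins = np.where(rng.random((n, 16)) < 0.9, 1.0, -1.0)
        sign = np.where(rng.random((n, 1)) < 0.5, 1.0, -1.0)
        return spins * sign
    return np.where(rng.random((n, 16)) < 0.5, 1.0, -1.0)

X = np.vstack([toy_configs(200, True), toy_configs(200, False)])
y = np.array([1.0] * 200 + [0.0] * 200)   # 1 = ordered phase

# "Shallow network": one logistic unit on the |magnetization| feature.
m = np.abs(X.mean(axis=1))
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * m + b)))  # sigmoid output
    w -= 1.0 * ((p - y) * m).mean()         # cross-entropy gradient step
    b -= 1.0 * (p - y).mean()
acc = float(((w * m + b > 0) == (y == 1)).mean())
```

    Inspecting the learned (w, b) recovers a threshold on magnetization, which is the spirit of the weight-matrix diagnostics the abstract describes for identifying the underlying physical quantity.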

  17. Differential roles of nonsynaptic and synaptic plasticity in operant reward learning-induced compulsive behavior.

    Science.gov (United States)

    Sieling, Fred; Bédécarrats, Alexis; Simmers, John; Prinz, Astrid A; Nargeot, Romuald

    2014-05-05

    Rewarding stimuli in associative learning can transform the irregularly and infrequently generated motor patterns underlying motivated behaviors into output for accelerated and stereotyped repetitive action. This transition to compulsive behavioral expression is associated with modified synaptic and membrane properties of central neurons, but establishing the causal relationships between cellular plasticity and motor adaptation has remained a challenge. We found previously that changes in the intrinsic excitability and electrical synapses of identified neurons in Aplysia's central pattern-generating network for feeding are correlated with a switch to compulsive-like motor output expression induced by in vivo operant conditioning. Here, we used specific computer-simulated ionic currents in vitro to selectively replicate or suppress the membrane and synaptic plasticity resulting from this learning. In naive in vitro preparations, such experimental manipulation of neuronal membrane properties alone increased the frequency but not the regularity of feeding motor output found in preparations from operantly trained animals. On the other hand, changes in synaptic strength alone switched the regularity but not the frequency of feeding output from naive to trained states. However, simultaneously imposed changes in both membrane and synaptic properties reproduced both major aspects of the motor plasticity. Conversely, in preparations from trained animals, experimental suppression of the membrane and synaptic plasticity abolished the increase in frequency and regularity of the learned motor output expression. These data establish direct causality for the contributions of distinct synaptic and nonsynaptic adaptive processes to complementary facets of a compulsive behavior resulting from operant reward learning. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Effects of transcranial direct current stimulation on hemichannel pannexin-1 and neural plasticity in rat model of cerebral infarction.

    Science.gov (United States)

    Jiang, T; Xu, R X; Zhang, A W; Di, W; Xiao, Z J; Miao, J Y; Luo, N; Fang, Y N

    2012-12-13

    The aim of this study was to investigate the effects of transcranial direct current stimulation (TDCS) on hemichannel pannexin-1 (PX1) in cortical neurons and neural plasticity, and explore the optimal time window of TDCS therapy after stroke. Adult male Sprague-Dawley rats (n=90) were randomly assigned to sham operation, middle cerebral artery occlusion (MCAO), and TDCS groups, and underwent sham operation, unilateral middle cerebral artery (MCA) electrocoagulation, and unilateral MCA electrocoagulation plus TDCS (daily anodal and cathodal 10 Hz, 0.1 mA TDCS for 30 min beginning day 1 after stroke), respectively. Motor function was assessed using the beam walking test (BWT), and density of dendritic spines (DS) and PX1 mRNA expression were compared among groups on days 3, 7, and 14 after stroke. Effects of PX1 blockage on DS in hippocampal neurons after hypoxia-ischemia were observed. TDCS significantly improved motor function on days 7 and 14 after stroke as indicated by reduced BWT scores compared with the MCAO group. The density of DS was decreased after stroke; the TDCS group had increased DS density compared with the MCAO group on days 3, 7, and 14 (all P<0.0001). Cerebral infarction induced increased PX1 mRNA expression on days 3, 7, and 14 (P<0.0001), and the peak PX1 mRNA expression was observed on day 7. TDCS did not decrease the up-regulated PX1 mRNA expression after stroke on day 3, but did reduce the increased post-stroke PX1 mRNA expression on days 7 and 14 (P<0.0001). TDCS increased the DS density after stroke, indicating that it may promote neural plasticity after stroke. TDCS intervention from day 7 to day 14 after stroke demonstrated motor function improvement and can down-regulate the elevated PX1 mRNA expression after stroke. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. A realistic neural mass model of the cortex with laminar-specific connections and synaptic plasticity - evaluation with auditory habituation.

    Directory of Open Access Journals (Sweden)

    Peng Wang

    Full Text Available In this work we propose a biologically realistic local cortical circuit model (LCCM), based on neural masses, that incorporates important aspects of the functional organization of the brain that have not been covered by previous models: (1) activity-dependent plasticity of excitatory synaptic couplings via depleting and recycling of neurotransmitters and (2) realistic inter-laminar dynamics via laminar-specific distribution of and connections between neural populations. The potential of the LCCM was demonstrated by accounting for the process of auditory habituation. The model parameters were specified using Bayesian inference. It was found that: (1) besides the major serial excitatory information pathway (layer 4 to layer 2/3 to layer 5/6), there exists a parallel "short-cut" pathway (layer 4 to layer 5/6), (2) the excitatory signal flow from the pyramidal cells to the inhibitory interneurons seems to be more intra-laminar while, in contrast, the inhibitory signal flow from inhibitory interneurons to the pyramidal cells seems to be both intra- and inter-laminar, and (3) the habituation rates of the connections are unsymmetrical: forward connections (from layer 4 to layer 2/3) are more strongly habituated than backward connections (from layer 5/6 to layer 4). Our evaluation demonstrates that the novel features of the LCCM are of crucial importance for mechanistic explanations of brain function. The incorporation of these features into a mass model makes them applicable to modeling based on macroscopic data (like EEG or MEG), which are usually available in human experiments. Our LCCM is therefore a valuable building block for future realistic models of human cognitive function.
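
    The depletion-and-recycling mechanism the LCCM uses for activity-dependent excitatory coupling can be illustrated with a short-term synaptic depression model in the Tsodyks-Markram style. The function below is a minimal sketch with illustrative parameter values, not the paper's actual equations.

```python
def synaptic_depression_step(resources, rate, dt=0.001, use=0.5, tau_rec=0.8):
    """Advance the fraction of available neurotransmitter by one time
    step dt (seconds). Firing at `rate` (Hz) consumes a fraction `use`
    of the available resources; depleted resources are recycled back
    toward 1 with time constant tau_rec. The effective synaptic
    coupling scales with `resources`, so sustained input habituates."""
    release = use * resources * rate * dt          # consumed by presynaptic firing
    recovery = (1.0 - resources) / tau_rec * dt    # recycling of transmitter
    return resources - release + recovery
```

    Driving such a synapse at 50 Hz for a second depletes the resource pool toward a low equilibrium; a quiet period lets it recover. This is the kind of coupling dynamics that produces habituation of the evoked response.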

  20. Big Data and Machine Learning in Plastic Surgery: A New Frontier in Surgical Innovation.

    Science.gov (United States)

    Kanevsky, Jonathan; Corban, Jason; Gaster, Richard; Kanevsky, Ari; Lin, Samuel; Gilardino, Mirko

    2016-05-01

    Medical decision-making is increasingly based on quantifiable data. From the moment patients come into contact with the health care system, their entire medical history is recorded electronically. Whether a patient is in the operating room or on the hospital ward, technological advancement has facilitated the expedient and reliable measurement of clinically relevant health metrics, all in an effort to guide care and ensure the best possible clinical outcomes. However, as the volume and complexity of biomedical data grow, it becomes challenging to effectively process "big data" using conventional techniques. Physicians and scientists must be prepared to look beyond classic methods of data processing to extract clinically relevant information. The purpose of this article is to introduce the modern plastic surgeon to machine learning and computational interpretation of large data sets. What is machine learning? Machine learning, a subfield of artificial intelligence, can address clinically relevant problems in several domains of plastic surgery, including burn surgery; microsurgery; and craniofacial, peripheral nerve, and aesthetic surgery. This article provides a brief introduction to current research and suggests future projects that will allow plastic surgeons to explore this new frontier of surgical science.

  1. Supervised spike-timing-dependent plasticity: a spatiotemporal neuronal learning rule for function approximation and decisions.

    Science.gov (United States)

    Franosch, Jan-Moritz P; Urban, Sebastian; van Hemmen, J Leo

    2013-12-01

    How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as "supervisor." Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.
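
    The rule can be caricatured in a few lines: pair-based STDP in which the "teacher" spike train from the supervising modality stands in for the neuron's own output. The exponential window and the constants below are generic STDP assumptions, not taken from the paper.

```python
import numpy as np

def supervised_stdp_update(w, pre_spike_times, teacher_times,
                           a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP where the supervising modality's spikes act as
    the postsynaptic reference: a presynaptic spike shortly before a
    teacher spike potentiates the synapse, one shortly after depresses
    it. Times are in ms; `pre_spike_times` holds one spike list per
    synapse."""
    dw = np.zeros_like(w)
    for i, spikes in enumerate(pre_spike_times):
        for t_pre in spikes:
            for t_teach in teacher_times:
                dt = t_teach - t_pre
                if dt > 0:                         # pre leads teacher: LTP
                    dw[i] += a_plus * np.exp(-dt / tau)
                else:                              # pre lags teacher: LTD
                    dw[i] -= a_minus * np.exp(dt / tau)
    return np.clip(w + dw, 0.0, 1.0)               # bounded weights aid stability
```

    A synapse whose input fired 5 ms before the teacher spike strengthens, while one that fired 25 ms after it weakens; iterating such updates is the kind of process whose convergence the paper proves.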

  2. Learning second language vocabulary: neural dissociation of situation-based learning and text-based learning.

    Science.gov (United States)

    Jeong, Hyeonjeong; Sugiura, Motoaki; Sassa, Yuko; Wakusawa, Keisuke; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta

    2010-04-01

    Second language (L2) acquisition necessitates learning and retrieving new words in different modes. In this study, we attempted to investigate the cortical representation of an L2 vocabulary acquired in different learning modes and in cross-modal transfer between learning and retrieval. Healthy participants learned new L2 words either by written translations (text-based learning) or in real-life situations (situation-based learning). Brain activity was then measured during subsequent retrieval of these words. The right supramarginal gyrus and left middle frontal gyrus were involved in situation-based learning and text-based learning, respectively, whereas the left inferior frontal gyrus was activated when learners used L2 knowledge in a mode different from the learning mode. Our findings indicate that the brain regions that mediate L2 memory differ according to how L2 words are learned and used. Copyright 2009 Elsevier Inc. All rights reserved.

  3. Improved Discriminability of Spatiotemporal Neural Patterns in Rat Motor Cortical Areas as Directional Choice Learning Progresses

    Directory of Open Access Journals (Sweden)

    Hongwei Mao

    2015-03-01

    Full Text Available Animals learn to choose a proper action among alternatives to improve their odds of success in food foraging and other activities critical for survival. Through trial and error, they learn correct associations between their choices and external stimuli. While a neural network that underlies such a learning process has been identified at a high level, it is still unclear how individual neurons and a neural ensemble adapt as learning progresses. In this study, we monitored the activity of single units in the rat medial and lateral agranular (AGm and AGl, respectively) areas as rats learned to make a left or right side lever press in response to a left or right side light cue. We noticed that rat movement parameters during the performance of the directional choice task quickly became stereotyped during the first 2-3 days or sessions, but learning the directional choice problem took weeks to occur. Accompanying the rats' behavioral performance adaptation, we observed neural modulation by directional choice in recorded single units. Our analysis shows that ensemble mean firing rates in the cue-on period did not change significantly as learning progressed, and the ensemble mean rate difference between left and right side choices did not show a clear trend of change either. However, the spatiotemporal firing patterns of the neural ensemble exhibited improved discriminability between the two directional choices through learning. These results suggest a spatiotemporal neural coding scheme in a motor cortical neural ensemble that may be responsible for, and contribute to, learning the directional choice task.
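
    The kind of analysis described here, asking whether the ensemble's spatiotemporal firing patterns separate left from right choices even when mean rates do not, can be sketched with a leave-one-out nearest-centroid decoder on binned spike counts. The binning and the classifier are generic choices for illustration, not the study's exact method.

```python
import numpy as np

def spatiotemporal_discriminability(trials_left, trials_right):
    """Leave-one-out nearest-centroid decoding accuracy. Each trial is
    an (n_neurons, n_bins) array of spike counts, flattened into one
    spatiotemporal pattern vector; chance level is 0.5."""
    X = np.array([t.ravel() for t in trials_left + trials_right], float)
    y = np.array([0] * len(trials_left) + [1] * len(trials_right))
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i            # hold out trial i
        c0 = X[mask & (y == 0)].mean(axis=0)     # left-choice centroid
        c1 = X[mask & (y == 1)].mean(axis=0)     # right-choice centroid
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(X)
```

    Tracking this accuracy session by session would show discriminability improving with learning even if the per-condition mean rates stay flat.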

  4. Metaplasticity within the spinal cord: Evidence brain-derived neurotrophic factor (BDNF), tumor necrosis factor (TNF), and alterations in GABA function (ionic plasticity) modulate pain and the capacity to learn.

    Science.gov (United States)

    Grau, James W; Huang, Yung-Jen

    2018-04-07

    Evidence is reviewed that behavioral training and neural injury can engage metaplastic processes that regulate adaptive potential. This issue is explored within a model system that examines how training affects the capacity to learn within the lower (lumbosacral) spinal cord. Response-contingent (controllable) stimulation applied caudal to a spinal transection induces a behavioral modification indicative of learning. This behavioral change is not observed in animals that receive stimulation in an uncontrollable manner. Exposure to uncontrollable stimulation also engages a process that disables spinal learning for 24-48 h. Controllable stimulation has the opposite effect; it engages a process that enables learning and prevents/reverses the learning deficit induced by uncontrollable stimulation. These observations suggest that a learning episode can impact the capacity to learn in future situations, providing an example of behavioral metaplasticity. The protective/restorative effect of controllable stimulation has been linked to an up-regulation of brain-derived neurotrophic factor (BDNF). The disruption of learning has been linked to the sensitization of pain (nociceptive) circuits, which is enabled by a reduction in GABA-dependent inhibition. After spinal cord injury (SCI), the co-transporter (KCC2) that regulates the outward flow of Cl- is down-regulated. This causes the intracellular concentration of Cl- to increase, reducing (and potentially reversing) the inward flow of Cl- through the GABA-A receptor. The shift in GABA function (ionic plasticity) increases neural excitability caudal to injury and sets the stage for nociceptive sensitization. The injury-induced shift in KCC2 is related to the loss of descending serotonergic (5HT) fibers that regulate plasticity within the spinal cord dorsal horn through the 5HT-1A receptor. Evidence is presented that these alterations in spinal plasticity impact pain in a brain-dependent task (place conditioning). The

  5. Neural prediction errors reveal a risk-sensitive reinforcement-learning process in the human brain.

    Science.gov (United States)

    Niv, Yael; Edlund, Jeffrey A; Dayan, Peter; O'Doherty, John P

    2012-01-11

    Humans and animals are exquisitely, though idiosyncratically, sensitive to risk or variance in the outcomes of their actions. Economic, psychological, and neural aspects of this are well studied when information about risk is provided explicitly. However, we must normally learn about outcomes from experience, through trial and error. Traditional models of such reinforcement learning focus on learning about the mean reward value of cues and ignore higher order moments such as variance. We used fMRI to test whether the neural correlates of human reinforcement learning are sensitive to experienced risk. Our analysis focused on anatomically delineated regions of a priori interest in the nucleus accumbens, where blood oxygenation level-dependent (BOLD) signals have been suggested as correlating with quantities derived from reinforcement learning. We first provide unbiased evidence that the raw BOLD signal in these regions corresponds closely to a reward prediction error. We then derive from this signal the learned values of cues that predict rewards of equal mean but different variance and show that these values are indeed modulated by experienced risk. Moreover, a close neurometric-psychometric coupling exists between the fluctuations of the experience-based evaluations of risky options that we measured neurally and the fluctuations in behavioral risk aversion. This suggests that risk sensitivity is integral to human learning, illuminating economic models of choice, neuroscientific models of affective learning, and the workings of the underlying neural mechanisms.

  6. Myosin light chain kinase regulates synaptic plasticity and fear learning in the lateral amygdala.

    Science.gov (United States)

    Lamprecht, R; Margulies, D S; Farb, C R; Hou, M; Johnson, L R; LeDoux, J E

    2006-01-01

    Learning and memory depend on signaling molecules that affect synaptic efficacy. The cytoskeleton has been implicated in regulating synaptic transmission, but its role in learning and memory is poorly understood. Fear learning depends on plasticity in the lateral nucleus of the amygdala. We therefore examined whether the cytoskeletal-regulatory protein myosin light chain kinase might contribute to fear learning in the rat lateral amygdala. Microinjection of ML-7, a specific inhibitor of myosin light chain kinase, into the lateral nucleus of the amygdala before fear conditioning, but not immediately afterward, enhanced both short-term memory and long-term memory, suggesting that myosin light chain kinase is involved specifically in memory acquisition rather than in posttraining consolidation of memory. The myosin light chain kinase inhibitor had no effect on memory retrieval. Furthermore, ML-7 had no effect on behavior when the training stimuli were presented in a non-associative manner. Anatomical studies showed that myosin light chain kinase is present in cells throughout the lateral nucleus of the amygdala and is localized to dendritic shafts and spines that are postsynaptic to the projections from the auditory thalamus to the lateral nucleus of the amygdala, a pathway specifically implicated in fear learning. Inhibition of myosin light chain kinase enhanced long-term potentiation, a physiological model of learning, in the auditory thalamic pathway to the lateral nucleus of the amygdala. When ML-7 was applied without associative tetanic stimulation, it had no effect on synaptic responses in the lateral nucleus of the amygdala. Thus, myosin light chain kinase activity in the lateral nucleus of the amygdala appears to normally suppress synaptic plasticity in the circuits underlying fear learning, suggesting that myosin light chain kinase may help prevent the acquisition of irrelevant fears. Impairment of this mechanism could contribute to pathological fear learning.

  7. Learning characteristics of a space-time neural network as a tether skiprope observer

    Science.gov (United States)

    Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles

    1993-01-01

    The Software Technology Laboratory at the Johnson Space Center is testing a Space Time Neural Network (STNN) for observing tether oscillations present during retrieval of a tethered satellite. Proper identification of tether oscillations, known as 'skiprope' motion, is vital to safe retrieval of the tethered satellite. Our studies indicate that STNN has certain learning characteristics that must be understood properly to utilize this type of neural network for the tethered satellite problem. We present our findings on the learning characteristics including a learning rate versus momentum performance table.
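
    The learning rate versus momentum performance table mentioned above concerns two hyperparameters that interact in any momentum-based gradient update. The generic step below (an illustration, not the STNN's actual training rule) shows the role each plays.

```python
def momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One gradient-descent step with momentum: `velocity` accumulates
    a decaying sum of past gradients, so `momentum` controls how much
    history smooths the update while `lr` scales the new gradient."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```

    On a simple quadratic objective, a modest learning rate paired with high momentum converges where a large learning rate alone oscillates, which is exactly the kind of interaction a learning rate versus momentum table makes visible.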

  8. Neural plasticity explored by correlative two-photon and electron/SPIM microscopy

    Science.gov (United States)

    Allegra Mascaro, A. L.; Silvestri, L.; Costantini, I.; Sacconi, L.; Maco, B.; Knott, G. W.; Pavone, F. S.

    2013-06-01

    Plasticity of the central nervous system is a complex process that involves the remodeling of neuronal processes and synaptic contacts. However, a single imaging technique can reveal only a small part of this complex machinery. To obtain a more complete view, complementary approaches should be combined. Two-photon fluorescence microscopy, combined with multi-photon laser nanosurgery, allows following the real-time dynamics of single neuronal processes in the cerebral cortex of living mice. The structural rearrangement elicited by this highly confined paradigm of injury can be imaged in vivo first; the same neuron can then be retrieved ex vivo and the ultrastructural features of the damaged neuronal branch characterized by means of electron microscopy. We then describe a method to integrate data from in vivo two-photon fluorescence imaging and ex vivo light sheet microscopy, based on the use of major blood vessels as a reference chart. We show how the apical dendritic arbor of a single cortical pyramidal neuron imaged in living mice can be found in the large-scale brain reconstruction obtained with light sheet microscopy. Starting from its apical portion, the whole pyramidal neuron can then be segmented and located in the correct cortical layer. With the correlative approach presented here, researchers will be able to place in a three-dimensional anatomic context the neurons whose dynamics have been observed in high detail in vivo.

  9. Neural Plasticity and Memory: Is Memory Encoded in Hydrogen Bonding Patterns?

    Science.gov (United States)

    Amtul, Zareen; Rahman, Atta-Ur

    2016-02-01

    Current models of memory storage recognize posttranslational modification vital for short-term and mRNA translation for long-lasting information storage. However, at the molecular level things are quite vague. A comprehensive review of the molecular basis of short and long-lasting synaptic plasticity literature leads us to propose that the hydrogen bonding pattern at the molecular level may be a permissive, vital step of memory storage. Therefore, we propose that the pattern of hydrogen bonding network of biomolecules (glycoproteins and/or DNA template, for instance) at the synapse is the critical edifying mechanism essential for short- and long-term memories. A novel aspect of this model is that nonrandom impulsive (or unplanned) synaptic activity functions as a synchronized positive-feedback rehearsal mechanism by revising the configurations of the hydrogen bonding network by tweaking the earlier tailored hydrogen bonds. This process may also maintain the elasticity of the related synapses involved in memory storage, a characteristic needed for such networks to alter intricacy and revise endlessly. The primary purpose of this review is to stimulate the efforts to elaborate the mechanism of neuronal connectivity both at molecular and chemical levels. © The Author(s) 2014.

  10. The Role of Stress Regulation on Neural Plasticity in Pain Chronification.

    Science.gov (United States)

    Li, Xiaoyun; Hu, Li

    2016-01-01

    Pain, especially chronic pain, is one of the most common clinical symptoms and is considered a worldwide healthcare problem. The transition from acute to chronic pain is accompanied by a chain of alterations in physiology, pathology, and psychology. A growing number of clinical studies and complementary animal models have elucidated the effects of stress regulation on pain chronification by investigating activation of the hypothalamic-pituitary-adrenal (HPA) axis and changes in several crucial brain regions, including the amygdala, prefrontal cortex, and hippocampus. Although individuals suffering from acute pain benefit from such physiological alterations, chronic pain is commonly associated with maladaptive responses, such as HPA dysfunction and abnormal brain plasticity. However, the causal relationship among pain chronification, stress regulation, and brain alterations is rarely discussed. To call for more attention to this issue, we review recent findings obtained from clinical populations and animal models, propose an integrated stress model of pain chronification based on existing models from the perspectives of environmental influences and genetic predispositions, and discuss the significance of investigating the role of stress regulation in brain alteration during pain chronification for various clinical applications.

  11. Plastic reorganization of neural systems for perception of others in the congenitally blind.

    Science.gov (United States)

    Fairhall, S L; Porter, K B; Bellucci, C; Mazzetti, M; Cipolli, C; Gobbini, M I

    2017-09-01

    Recent evidence suggests that the function of the core system for face perception might extend beyond visual face-perception to a broader role in person perception. To critically test the broader role of core face-system in person perception, we examined the role of the core system during the perception of others in 7 congenitally blind individuals and 15 sighted subjects by measuring their neural responses using fMRI while they listened to voices and performed identity and emotion recognition tasks. We hypothesised that in people who have had no visual experience of faces, core face-system areas may assume a role in the perception of others via voices. Results showed that emotions conveyed by voices can be decoded in homologues of the core face system only in the blind. Moreover, there was a specific enhancement of response to verbal as compared to non-verbal stimuli in bilateral fusiform face areas and the right posterior superior temporal sulcus showing that the core system also assumes some language-related functions in the blind. These results indicate that, in individuals with no history of visual experience, areas of the core system for face perception may assume a role in aspects of voice perception that are relevant to social cognition and perception of others' emotions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  12. Granulocyte-colony stimulating factor controls neural and behavioral plasticity in response to cocaine.

    Science.gov (United States)

    Calipari, Erin S; Godino, Arthur; Peck, Emily G; Salery, Marine; Mervosh, Nicholas L; Landry, Joseph A; Russo, Scott J; Hurd, Yasmin L; Nestler, Eric J; Kiraly, Drew D

    2018-01-16

    Cocaine addiction is characterized by dysfunction in reward-related brain circuits, leading to maladaptive motivation to seek and take the drug. There are currently no clinically available pharmacotherapies to treat cocaine addiction. Through a broad screen of innate immune mediators, we identify granulocyte-colony stimulating factor (G-CSF) as a potent mediator of cocaine-induced adaptations. Here we report that G-CSF potentiates cocaine-induced increases in neural activity in the nucleus accumbens (NAc) and prefrontal cortex. In addition, G-CSF injections potentiate cocaine place preference and enhance motivation to self-administer cocaine, while not affecting responses to natural rewards. Infusion of G-CSF neutralizing antibody into NAc blocks the ability of G-CSF to modulate cocaine's behavioral effects, providing a direct link between central G-CSF action in NAc and cocaine reward. These results demonstrate that manipulating G-CSF is sufficient to alter the motivation for cocaine, but not natural rewards, providing a pharmacotherapeutic avenue to manipulate addictive behaviors without abuse potential.

  13. Neural stem cells show bidirectional experience-dependent plasticity in the perinatal mammalian brain.

    Science.gov (United States)

    Kippin, Tod E; Cain, Sean W; Masum, Zahra; Ralph, Martin R

    2004-03-17

    Many of the effects of prenatal stress on the endocrine function, brain morphology, and behavior in mammals can be reversed by brief sessions of postnatal separation and handling. We have tested the hypothesis that the effects of both the prenatal and postnatal experiences are mediated by negative and positive regulation of neural stem cell (NSC) number during critical stages in neurodevelopment. We used the in vitro clonal neurosphere assay to quantify NSCs in hamsters that had experienced prenatal stress (maternal restraint stress for 2 hr per day for the last 7 d of gestation), postnatal handling (maternal-offspring separation for 15 min per day during postnatal days 1-21), or both. Prenatal stress reduced the number of NSCs derived from the subependyma of the lateral ventricle. The effect was already present at postnatal day 1 and persisted into adulthood (at least 14 months of age). Similarly, prenatal stress reduced in vivo proliferation in the adult subependyma of the lateral ventricle. Conversely, postnatal handling increased NSC number and reversed the effect of prenatal stress. The effects of prenatal stress on NSCs and proliferation and the effect of postnatal handling on NSCs did not differ between males and females. The findings demonstrate that environmental factors can produce changes in NSC number that are present at birth and endure into late adulthood. These changes may underlie some of the behavioral effects produced by prenatal stress and postnatal handling.

  14. Excessive Sensory Stimulation during Development Alters Neural Plasticity and Vulnerability to Cocaine in Mice.

    Science.gov (United States)

    Ravinder, Shilpa; Donckels, Elizabeth A; Ramirez, Julian S B; Christakis, Dimitri A; Ramirez, Jan-Marino; Ferguson, Susan M

    2016-01-01

    Early life experiences affect the formation of neuronal networks, which can have a profound impact on brain function and behavior later in life. Previous work has shown that mice exposed to excessive sensory stimulation during development are hyperactive and novelty seeking, and display impaired cognition compared with controls. In this study, we addressed the issue of whether excessive sensory stimulation during development could alter behaviors related to addiction and underlying circuitry in CD-1 mice. We found that the reinforcing properties of cocaine were significantly enhanced in mice exposed to excessive sensory stimulation. Moreover, although these mice displayed hyperactivity that became more pronounced over time, they showed impaired persistence of cocaine-induced locomotor sensitization. These behavioral effects were associated with alterations in glutamatergic transmission in the nucleus accumbens and amygdala. Together, these findings suggest that excessive sensory stimulation in early life significantly alters drug reward and the neural circuits that regulate addiction and attention deficit hyperactivity. These observations highlight the consequences of early life experiences and may have important implications for children growing up in today's complex technological environment.

  15. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    Science.gov (United States)

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning that relies on environmental conditions, without a separate reward system.
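
    The core of LSA can be caricatured in a rate-based toy model (the paper itself uses spiking networks): external stimulation is applied while the behavior neuron stays below threshold, and Hebbian coactivity during stimulation strengthens exactly the pathway whose activity switches the stimulation off. All names and constants below are illustrative.

```python
def lsa_trial(w, stim=1.0, lr=0.2, theta=1.0):
    """One toy Learning-by-Stimulation-Avoidance trial. While the
    behavior neuron's activity y stays below threshold theta, the
    external stimulation remains on and Hebbian coactivity (x * y)
    strengthens the connection; once y >= theta the stimulation is
    avoided and the weight stops changing."""
    x = stim              # stimulated input activity
    y = w * x             # behavior neuron activity
    if y < theta:         # stimulation still on -> Hebbian strengthening
        w += lr * x * y
    return w

w = 0.5
for _ in range(50):
    w = lsa_trial(w)
# the weight grows until the behavior reliably shuts the stimulation off,
# then stays fixed: the network has "learned" to avoid the stimulation
```

    The same Hebbian rule thus produces goal-directed change with no separate reward signal, which is the point of the principle: the environment's contingency (stimulation ends when the right activity occurs) does the selecting.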

  16. Brain lateralization and neural plasticity for musical and cognitive abilities in an epileptic musician

    Directory of Open Access Journals (Sweden)

    Isabel Trujillo-Pozo

    2013-12-01

    Full Text Available The use of the intracarotid propofol procedure (IPP) when assessing musical lateralization has not been reported in the literature up to now. This procedure (similar to the Wada test) has provided the opportunity to investigate not only the lateralization of language and memory functions in epileptic patients but also offers a functional mapping approach with superior spatial and temporal resolution for analyzing the lateralization of musical abilities. Findings in the literature suggest that musical training modifies functional and structural brain organization. We studied hemispheric lateralization in a professional musician, a 33-year-old woman with refractory left medial temporal lobe epilepsy. A longitudinal neuropsychological study was performed over a period of 21 months. Before epilepsy surgery, musical abilities, language, and memory were tested during IPP by means of a novel and exhaustive neuropsychological battery focusing on the processing of music. We used a selection of stimuli to analyze listening, score reading, and tempo discrimination. Our results suggested that IPP is an excellent method to determine not only language, semantic and episodic memory, but also musical dominance in a professional musician who may be a candidate for epilepsy surgery. Neuropsychological testing revealed that the patient's right hemisphere is involved in semantic and episodic musical memory processes, whereas her score reading and tempo processing require contributions from both hemispheres. At 1-year follow-up, outcome was excellent with respect to seizures and professional skills, and cognitive abilities had improved. These findings indicate that IPP helps to predict who might be at risk for postoperative musical, language, and memory deficits after epilepsy surgery. Our research suggests that musical expertise and epilepsy critically modify long-term memory processes and induce brain structural and functional plasticity.

  17. Neural Plasticity: Single Neuron Models for Discrimination and Generalization and an Experimental Ensemble Approach.

    Science.gov (United States)

    Munro, Paul Wesley

    A special form for modification of neuronal response properties is described in which the change in the synaptic state vector is parallel to the vector of afferent activity. This process is termed "parallel modification" and its theoretical and experimental implications are examined. A theoretical framework has been devised to describe the complementary functions of generalization and discrimination by single neurons. This constitutes a basis for three models each describing processes for the development of maximum selectivity (discrimination) and minimum selectivity (generalization) by neurons. Strengthening and weakening of synapses is expressed as a product of the presynaptic activity and a nonlinear modulatory function of two postsynaptic variables--namely a measure of the spatially integrated activity of the cell and a temporal integration (time-average) of that activity. Some theorems are given for low-dimensional systems and computer simulation results from more complex systems are discussed. Model neurons that achieve high selectivity mimic the development of cat visual cortex neurons in a wide variety of rearing conditions. A role for low-selectivity neurons is proposed in which they provide inhibitory input to neurons of the opposite type, thereby suppressing the common component of a pattern class and enhancing their selective properties. Such contrast-enhancing circuits are analyzed and supported by computer simulation. To enable maximum selectivity, the net inhibition to a cell must become strong enough to offset whatever excitation is produced by the non-preferred patterns. Ramifications of parallel models for certain experimental paradigms are analyzed. A methodology is outlined for testing synaptic modification hypotheses in the laboratory. A plastic projection from one neuronal population to another will attain stable equilibrium under periodic electrical stimulation of constant intensity. The perturbative effect of shifting this intensity level
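
    The "parallel modification" form described above, a weight change parallel to the afferent activity vector and scaled by a modulatory function of the cell's integrated activity and its time average, can be sketched as follows. The particular modulatory function phi is a BCM-style assumption for illustration, not the thesis's exact form.

```python
import numpy as np

def parallel_modification(w, x, theta, lr=0.05, tau=0.9):
    """One step of a parallel-modification rule: the change in the
    synaptic state vector w is parallel to the afferent activity x,
    scaled by phi(c, theta), where c is the spatially integrated
    postsynaptic activity and theta is its running time average."""
    c = float(w @ x)                          # integrated postsynaptic activity
    phi = c * (c - theta)                     # depress below theta, potentiate above
    w_new = w + lr * phi * x                  # change is parallel to x
    theta_new = tau * theta + (1 - tau) * c   # update the temporal average
    return w_new, theta_new
```

    Because the update is a scalar times x, synapses carrying no input are untouched, and whether active synapses strengthen or weaken depends on the cell's activity relative to its own history, which is what lets the same rule produce both selective (discriminating) and unselective (generalizing) cells.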

  18. Maximum entropy methods for extracting the learned features of deep neural networks.

    Science.gov (United States)

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.
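
    The anchored-sampling idea can be sketched with a Metropolis chain over sequences whose energy penalizes deviation of the network's score from the anchor sequence's score; positions that stay conserved across samples are then candidate learned features. The stand-in `score_fn`, the constraint strength `lam`, and the single-mutation proposal scheme are all illustrative assumptions, not the authors' algorithm.

```python
import math
import random

def sample_maxent(anchor, score_fn, n_steps=1000, lam=8.0,
                  alphabet="ACGT", seed=0):
    """Metropolis sampling anchored at `anchor`: propose one-position
    mutations and accept them with probability exp(-lam * increase in
    (score - anchor_score)**2), so the chain drifts freely over
    sequences the (stand-in) network scores like the anchor."""
    rng = random.Random(seed)
    seq, samples = list(anchor), []
    s0 = score_fn(anchor)
    energy = 0.0                                  # squared score deviation
    for _ in range(n_steps):
        i = rng.randrange(len(seq))
        old = seq[i]
        seq[i] = rng.choice(alphabet)
        e_new = (score_fn("".join(seq)) - s0) ** 2
        if e_new > energy and rng.random() >= math.exp(-lam * (e_new - energy)):
            seq[i] = old                          # reject the mutation
        else:
            energy = e_new                        # accept the mutation
        samples.append("".join(seq))
    return samples
```

    With a toy score function that counts a TATA motif, the motif positions stay frozen across samples while the rest of the sequence drifts, so the conserved columns recover the "feature" the score function depends on.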

  19. Network Supervision of Adult Experience and Learning Dependent Sensory Cortical Plasticity.

    Science.gov (United States)

    Blake, David T

    2017-06-18

    The brain is capable of remodeling throughout life. The sensory cortices provide a useful preparation for studying neuroplasticity both during development and thereafter. In adulthood, sensory cortices change in the cortical area activated by behaviorally relevant stimuli, in the strength of response within that activated area, and in the temporal profiles of those responses. Evidence supports forms of unsupervised, reinforcement, and fully supervised network learning rules. Studies on experience-dependent plasticity have mostly not controlled for learning, and they find support for unsupervised learning mechanisms. Changes occur with greatest ease in neurons containing α-CaMKII, which are pyramidal neurons in layers II/III and layers V/VI. These changes use synaptic mechanisms including long-term depression. Synaptic strengthening at NMDA-containing synapses does occur, but its weak association with activity suggests other factors also initiate changes. Studies that control learning find support for reinforcement learning rules and limited evidence of other forms of supervised learning. Behaviorally associating a stimulus with reinforcement leads to a strengthening of cortical response strength and an enlarging of response area with poor selectivity. Associating a stimulus with omission of reinforcement leads to a selective weakening of responses. In some preparations in which these associations are not as clearly made, neurons with the most informative discharges are relatively stronger after training. Studies analyzing the temporal profile of responses associated with omission of reward, or of plasticity in studies with different discriminanda but statistically matched stimuli, support the existence of limited supervised network learning. © 2017 American Physiological Society. Compr Physiol 7:977-1008, 2017. Copyright © 2017 John Wiley & Sons, Inc.

  20. A BCM theory of meta-plasticity for online self-reorganizing fuzzy-associative learning.

    Science.gov (United States)

    Tan, Javan; Quek, Chai

    2010-06-01

    Self-organizing neurofuzzy approaches have matured in their online learning of fuzzy-associative structures under time-invariant conditions. To maximize their operative value for online reasoning, these self-sustaining mechanisms must also be able to reorganize fuzzy-associative knowledge in real-time dynamic environments. Hence, it is critical to recognize that they would require self-reorganizational skills to rebuild fluid associative structures when their existing organizations fail to respond well to changing circumstances. In this light, while Hebbian theory (Hebb, 1949) is the basic computational framework for associative learning, it is less attractive for time-variant online learning because it suffers from stability limitations that impede unlearning. Instead, this paper adopts the Bienenstock-Cooper-Munro (BCM) theory of neurological learning via meta-plasticity principles (Bienenstock et al., 1982), which provides for both online associative and dissociative learning. For almost three decades, BCM theory has been shown to effectively brace physiological evidence of synaptic potentiation (association) and depression (dissociation) into a sound mathematical framework for computational learning. This paper proposes an interpretation of the BCM theory of meta-plasticity for an online self-reorganizing fuzzy-associative learning system to realize online-reasoning capabilities. Experimental findings are twofold: 1) the analysis using the S&P-500 stock index illustrated that the self-reorganizing approach could follow the trajectory shifts in the time-variant S&P-500 index for about 60 years, and 2) the benchmark profiles showed that the fuzzy-associative approach yielded comparable results with other fuzzy-precision models with similar online objectives.
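The BCM rule this record builds on can be sketched in a few lines: the weight change is presynaptic activity times a nonlinear function of the postsynaptic rate y and a sliding modification threshold θ that tracks the time-average of y². The rate-based neuron, input patterns, and constants below are illustrative assumptions, not the paper's neurofuzzy system; with one pattern shown more often than the other, the neuron becomes selective for it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two orthogonal input patterns; pattern A is shown more often than B
patterns = np.array([[1.0, 0.0], [0.0, 1.0]])
probs = np.array([0.8, 0.2])

w = np.full(2, 0.5)      # synaptic weights
theta = 0.1              # sliding modification threshold
eta, tau = 0.02, 20.0

for _ in range(30000):
    x = patterns[rng.choice(2, p=probs)]
    y = max(w @ x, 0.0)                    # rectified postsynaptic rate
    w += eta * x * y * (y - theta)         # BCM: LTP above theta, LTD below
    w = np.clip(w, 0.0, 5.0)
    theta += (y ** 2 - theta) / tau        # theta tracks the average of y^2

# The neuron becomes selective: the frequent pattern dominates
responses = np.maximum(patterns @ w, 0.0)
```

The moving threshold is the "meta-plasticity" ingredient: it raises the bar for potentiation as activity grows, which stabilizes learning and allows dissociation (unlearning) of the rarer pattern.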

  1. Focal adhesion kinase regulates neuronal growth, synaptic plasticity and hippocampus-dependent spatial learning and memory.

    Science.gov (United States)

    Monje, Francisco J; Kim, Eun-Jung; Pollak, Daniela D; Cabatic, Maureen; Li, Lin; Baston, Arthur; Lubec, Gert

    2012-01-01

    The focal adhesion kinase (FAK) is a non-receptor tyrosine kinase abundantly expressed in the mammalian brain and highly enriched in neuronal growth cones. Inhibitory and facilitatory activities of FAK on neuronal growth have been reported and its role in neuritic outgrowth remains controversial. Unlike other tyrosine kinases, such as the neurotrophin receptors regulating neuronal growth and plasticity, the relevance of FAK for learning and memory in vivo has not been clearly defined yet. A comprehensive study aimed at determining the role of FAK in neuronal growth, neurotransmitter release and synaptic plasticity in hippocampal neurons and in hippocampus-dependent learning and memory was therefore undertaken using the mouse model. Gain- and loss-of-function experiments indicated that FAK is a critical regulator of hippocampal cell morphology. FAK mediated neurotrophin-induced neuritic outgrowth and FAK inhibition affected both miniature excitatory postsynaptic potentials and activity-dependent hippocampal long-term potentiation prompting us to explore the possible role of FAK in spatial learning and memory in vivo. Our data indicate that FAK has a growth-promoting effect, is importantly involved in the regulation of the synaptic function and mediates in vivo hippocampus-dependent spatial learning and memory. Copyright © 2011 S. Karger AG, Basel.

  2. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
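A minimal sketch of the two-phase approach described above (assuming Gaussian basis functions and a toy 2-D "activity" surface rather than real QSAR descriptors): simple competitive learning places the RBF centres, then a linear readout is fit by least squares.

```python
import numpy as np

rng = np.random.default_rng(2)

def scl_centers(X, k, epochs=20, lr=0.1):
    """Simple competitive learning: move the winning centre toward each sample."""
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            j = np.argmin(((centers - x) ** 2).sum(axis=1))  # winner takes all
            centers[j] += lr * (x - centers[j])
    return centers

def rbf_design(X, centers, width):
    # Gaussian activations of every sample at every centre
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Toy smooth "activity" surface standing in for a biological endpoint
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

centers = scl_centers(X, k=25)
Phi = rbf_design(X, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear readout weights
rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
```

Placing centres by competitive learning, rather than randomly, concentrates basis functions where the data actually lie, which is the efficiency the abstract reports.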

  3. ELeaRNT: Evolutionary Learning of Rich Neural Network Topologies

    National Research Council Canada - National Science Library

    Matteucci, Matteo

    2006-01-01

    In this paper we present ELeaRNT, an evolutionary strategy which evolves rich neural network topologies in order to find an optimal domain-specific non-linear function approximator with good generalization performance...

  4. Doubly stochastic Poisson processes in artificial neural learning.

    Science.gov (United States)

    Card, H C

    1998-01-01

    This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.
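A doubly stochastic (Cox) Poisson process can be simulated by first drawing a random rate per trial and then drawing Poisson counts at that rate; the resulting counts are over-dispersed relative to an ordinary Poisson process with the same mean. The gamma rate distribution below is an illustrative choice (a gamma-Poisson mixture is a negative binomial), not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

T = 1.0                 # observation window (s)
n_trials = 20000

# Doubly stochastic: the rate itself is random from trial to trial
rates = rng.gamma(shape=4.0, scale=10.0, size=n_trials)   # mean 40 Hz
counts_cox = rng.poisson(rates * T)

# Ordinary Poisson process with the same mean rate, for comparison
counts_poisson = rng.poisson(40.0 * T, size=n_trials)

fano_cox = counts_cox.var() / counts_cox.mean()           # >> 1
fano_poisson = counts_poisson.var() / counts_poisson.mean()  # ~ 1
```

The Fano factor (variance/mean of the counts) is the standard diagnostic: exactly 1 for a homogeneous Poisson process, and 1 + Var[λ]T/E[λ] for the doubly stochastic version.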

  5. Fast-Spiking Interneurons Supply Feedforward Control of Bursting, Calcium, and Plasticity for Efficient Learning.

    Science.gov (United States)

    Owen, Scott F; Berke, Joshua D; Kreitzer, Anatol C

    2018-02-08

    Fast-spiking interneurons (FSIs) are a prominent class of forebrain GABAergic cells implicated in two seemingly independent network functions: gain control and network plasticity. Little is known, however, about how these roles interact. Here, we use a combination of cell-type-specific ablation, optogenetics, electrophysiology, imaging, and behavior to describe a unified mechanism by which striatal FSIs control burst firing, calcium influx, and synaptic plasticity in neighboring medium spiny projection neurons (MSNs). In vivo silencing of FSIs increased bursting, calcium transients, and AMPA/NMDA ratios in MSNs. In a motor sequence task, FSI silencing increased the frequency of calcium transients but reduced the specificity with which transients aligned to individual task events. Consistent with this, ablation of FSIs disrupted the acquisition of striatum-dependent egocentric learning strategies. Together, our data support a model in which feedforward inhibition from FSIs temporally restricts MSN bursting and calcium-dependent synaptic plasticity to facilitate striatum-dependent sequence learning. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Differences between Neural Activity in Prefrontal Cortex and Striatum during Learning of Novel Abstract Categories

    OpenAIRE

    Antzoulatos, Evan G.; Miller, Earl K.

    2011-01-01

    Learning to classify diverse experiences into meaningful groups, like categories, is fundamental to normal cognition. To understand its neural basis, we simultaneously recorded from multiple electrodes in the lateral prefrontal cortex and dorsal striatum, two interconnected brain structures critical for learning. Each day, monkeys learned to associate novel, abstract dot-based categories with a right vs. left saccade. Early on, when they could acquire specific stimulus-response associations, ...

  7. Musical mnemonics aid verbal memory and induce learning-related brain plasticity in multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Michael eThaut

    2014-06-01

    Full Text Available Recent research in music and brain function has suggested that the temporal pattern structure in music and rhythm can enhance cognitive functions. To further elucidate this question specifically for memory we investigated if a musical template can enhance verbal learning in patients with multiple sclerosis and if music assisted learning will also influence short-term, system-level brain plasticity. We measured systems-level brain activity with oscillatory network synchronization during music assisted learning. Specifically, we measured the spectral power of 128-channel electroencephalogram (EEG) in alpha and beta frequency bands in 54 patients with multiple sclerosis (MS). The study sample was randomly divided into 2 groups, either hearing a spoken or musical (sung) presentation of Rey's Auditory Verbal Learning Test (RAVLT). We defined the learning-related synchronization (LRS) as the percent change in EEG spectral power from the first time the word was presented to the average of the subsequent word encoding trials. LRS differed significantly between the music and spoken conditions in low alpha and upper beta bands. Patients in the music condition showed overall better word memory and better word order memory and stronger bilateral frontal alpha LRS than patients in the spoken condition. The evidence suggests that a musical mnemonic recruits stronger oscillatory network synchronization in prefrontal areas in MS patients during word learning. It is suggested that the temporal structure implicit in musical stimuli enhances 'deep encoding' during verbal learning and sharpens the timing of neural dynamics in brain networks degraded by demyelination in MS

  8. Breast Cancer Diagnosis using Artificial Neural Networks with Extreme Learning Techniques

    OpenAIRE

    Chandra Prasetyo Utomo; Aan Kardiana; Rika Yuliwulandari

    2014-01-01

    Breast cancer is the second leading cause of death among women. Early detection followed by appropriate cancer treatment can reduce the deadly risk. Medical professionals can make mistakes while identifying a disease. The help of technology such as data mining and machine learning can substantially improve diagnosis accuracy. Artificial Neural Networks (ANN) have been widely used in intelligent breast cancer diagnosis. However, the standard Gradient-Based Back Propagation Artificial Neural Networks...

  9. NPY gene transfer in the hippocampus attenuates synaptic plasticity and learning

    DEFF Research Database (Denmark)

    Sørensen, Andreas T; Kanter-Schlifke, Irene; Carli, Mirjana

    2008-01-01

    Future clinical progress, however, requires more detailed evaluation of possible side effects of this treatment. Until now it has been unknown whether rAAV vector-based NPY overexpression in the hippocampus alters normal synaptic transmission and plasticity, which could disturb learning and memory processing. Here we show, by electrophysiological recordings in CA1 of the hippocampal formation of rats, that hippocampal NPY gene transfer into the intact brain does not affect basal synaptic transmission, but slightly alters short-term synaptic plasticity, most likely via NPY Y2 receptor-mediated mechanisms. In addition, transgene NPY seems to be released during high frequency neuronal activity, leading to decreased glutamate release in excitatory synapses. Importantly, memory consolidation appears to be affected by the treatment. We found that long-term potentiation (LTP) in the CA1 area...

  10. The artist emerges: visual art learning alters neural structure and function.

    Science.gov (United States)

    Schlegel, Alexander; Alexander, Prescott; Fogelson, Sergey V; Li, Xueting; Lu, Zhengang; Kohler, Peter J; Riley, Enrico; Tse, Peter U; Meng, Ming

    2015-01-15

    How does the brain mediate visual artistic creativity? Here we studied behavioral and neural changes in drawing and painting students compared to students who did not study art. We investigated three aspects of cognition vital to many visual artists: creative cognition, perception, and perception-to-action. We found that the art students became more creative via the reorganization of prefrontal white matter but did not find any significant changes in perceptual ability or related neural activity in the art students relative to the control group. Moreover, the art students improved in their ability to sketch human figures from observation, and multivariate patterns of cortical and cerebellar activity evoked by this drawing task became increasingly separable between art and non-art students. Our findings suggest that the emergence of visual artistic skills is supported by plasticity in neural pathways that enable creative cognition and mediate perceptuomotor integration. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  12. Neural control of locomotion and training-induced plasticity after spinal and cerebral lesions.

    Science.gov (United States)

    Knikou, Maria

    2010-10-01

    Standing and walking require a plethora of sensorimotor interactions that occur throughout the nervous system. Sensory afferent feedback plays a crucial role in the rhythmical muscle activation pattern, as it affects through spinal reflex circuits the spinal neuronal networks responsible for inducing and maintaining rhythmicity, drives short-term and long-term re-organization of the brain and spinal cord circuits, and contributes to recovery of walking after locomotor training. Therefore, spinal circuits integrating sensory signals are adjustable networks with learning capabilities. In this review, I will synthesize the mechanisms underlying phase-dependent modulation of spinal reflexes in healthy humans as well as those with spinal or cerebral lesions along with findings on afferent regulation of spinal reflexes and central pattern generator in reduced animal preparations. Recovery of walking after locomotor training has been documented in numerous studies but the re-organization of spinal interneuronal and cortical circuits need to be further explored at cellular and physiological levels. For maximizing sensorimotor recovery in people with spinal or cerebral lesions, a multidisciplinary approach (rehabilitation, pharmacology, and electrical stimulation) delivered during various sensorimotor constraints is needed. Copyright 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  13. Strategies influence neural activity for feedback learning across child and adolescent development.

    Science.gov (United States)

    Peters, Sabine; Koolschijn, P Cédric M P; Crone, Eveline A; Van Duijvenvoorde, Anna C K; Raijmakers, Maartje E J

    2014-09-01

    Learning from feedback is an important aspect of executive functioning that shows profound improvements during childhood and adolescence. This is accompanied by neural changes in the feedback-learning network, which includes pre-supplementary motor area (pre-SMA)/anterior cingulate cortex (ACC), dorsolateral prefrontal cortex (DLPFC), superior parietal cortex (SPC), and the basal ganglia. However, there can be considerable differences within age ranges in performance that are ascribed to differences in strategy use. This is problematic for traditional approaches of analyzing developmental data, in which age groups are assumed to be homogeneous in strategy use. In this study, we used latent variable models to investigate whether underlying strategy groups could be detected for a feedback-learning task and whether there were differences in neural activation patterns between strategies. In a sample of 268 participants between ages 8 to 25 years, we observed four underlying strategy groups, which cut across age groups and varied in the optimality of executive functioning. These strategy groups also differed in neural activity during learning; especially the most optimally performing group showed more activity in DLPFC, SPC and pre-SMA/ACC compared to the other groups. However, age differences remained an important contributor to neural activation, even when correcting for strategy. These findings contribute to the debate of age versus performance predictors of neural development, and highlight the importance of studying individual differences in strategy use when studying development. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Hypothetical Pattern Recognition Design Using Multi-Layer Perceptron Neural Network For Supervised Learning

    Directory of Open Access Journals (Sweden)

    Md. Abdullah-al-mamun

    2015-08-01

    Full Text Available Humans can identify diverse shapes and patterns in the real world almost effortlessly, because their intelligence has grown since birth through continual learning. In the same way, a machine can be prepared using a human-like brain model, called an artificial neural network, that can recognize different patterns from real-world objects. Although various techniques exist for implementing pattern recognition, artificial neural network approaches have recently received significant attention, because an artificial neural network, like a human brain, learns from observations and makes decisions based on previously learned rules. Over 50 years of research, pattern recognition for machine learning using artificial neural networks has achieved significant results, and many real-world problems can be solved by modeling the pattern recognition process. The objective of this paper is to present the theoretical concept for pattern recognition design using a multi-layer perceptron neural network as an artificial intelligence algorithm, as the best possible way of utilizing available resources to make decisions with human-like performance.
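A minimal multi-layer perceptron with backpropagation, of the kind this record describes, can be sketched as follows; it is trained on the XOR pattern, which no single-layer perceptron can separate. The architecture and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# XOR: the classic pattern a single-layer perceptron cannot separate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.0], [1.0], [1.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)      # hidden layer
    y = sigmoid(h @ W2 + b2)      # output layer
    return h, y

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
lr = 0.5

_, y0 = forward(W1, b1, W2, b2)
loss0 = np.mean((y0 - t) ** 2)          # squared error before training

for _ in range(10000):
    h, y = forward(W1, b1, W2, b2)
    # Backpropagate the squared-error gradient through both layers
    delta2 = (y - t) * y * (1 - y)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ delta2; b2 -= lr * delta2.sum(axis=0)
    W1 -= lr * X.T @ delta1; b1 -= lr * delta1.sum(axis=0)

_, y = forward(W1, b1, W2, b2)
loss = np.mean((y - t) ** 2)            # squared error after training
pred = (y > 0.5).astype(float)
```

The hidden layer is what gives the perceptron its multi-layer power: it re-represents the inputs so that the output unit faces a linearly separable problem.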

  15. Supervised neural network modeling: an empirical investigation into learning from imbalanced data with labeling errors.

    Science.gov (United States)

    Khoshgoftaar, Taghi M; Van Hulse, Jason; Napolitano, Amri

    2010-05-01

    Neural network algorithms such as multilayer perceptrons (MLPs) and radial basis function networks (RBFNets) have been used to construct learners which exhibit strong predictive performance. Two data related issues that can have a detrimental impact on supervised learning initiatives are class imbalance and labeling errors (or class noise). Imbalanced data can make it more difficult for the neural network learning algorithms to distinguish between examples of the various classes, and class noise can lead to the formulation of incorrect hypotheses. Both class imbalance and labeling errors are pervasive problems encountered in a wide variety of application domains. Many studies have been performed to investigate these problems in isolation, but few have focused on their combined effects. This study presents a comprehensive empirical investigation using neural network algorithms to learn from imbalanced data with labeling errors. In particular, the first component of our study investigates the impact of class noise and class imbalance on two common neural network learning algorithms, while the second component considers the ability of data sampling (which is commonly used to address the issue of class imbalance) to improve their performances. Our results, for which over two million models were trained and evaluated, show that conclusions drawn using the more commonly studied C4.5 classifier may not apply when using neural networks.
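The two issues the study combines, class imbalance and labeling errors, and the data-sampling remedy it evaluates, can be sketched on synthetic data. A plain logistic-regression learner trained by gradient descent stands in for the paper's MLPs/RBFNets, and all data parameters are assumptions; the point illustrated is that random oversampling of the minority class raises minority-class recall.

```python
import numpy as np

rng = np.random.default_rng(5)

def make_data(n_maj, n_min, noise=0.05):
    """Imbalanced two-class Gaussians with a fraction of flipped labels."""
    X = np.vstack([rng.normal(0.0, 1.0, (n_maj, 2)),
                   rng.normal(1.5, 1.0, (n_min, 2))])
    y = np.r_[np.zeros(n_maj), np.ones(n_min)]
    flip = rng.random(len(y)) < noise          # labeling errors (class noise)
    y[flip] = 1.0 - y[flip]
    return X, y

def train_logreg(X, y, epochs=3000, lr=0.5):
    Xb = np.c_[X, np.ones(len(X))]             # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

def minority_recall(w, n_test=500):
    Xt = rng.normal(1.5, 1.0, (n_test, 2))     # clean minority test samples
    return np.mean(np.c_[Xt, np.ones(n_test)] @ w > 0)

X, y = make_data(950, 50)
w_plain = train_logreg(X, y)

# Random oversampling: replicate minority examples to balance the classes
minority = np.flatnonzero(y == 1)
extra = rng.choice(minority, size=(y == 0).sum() - len(minority))
w_over = train_logreg(np.vstack([X, X[extra]]), np.r_[y, y[extra]])

recall_plain = minority_recall(w_plain)
recall_over = minority_recall(w_over)
```

With the skewed training set, the decision threshold is dragged toward the majority class; duplicating minority examples rebalances the gradient and recovers minority-class sensitivity, at some cost in false positives.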

  16. Lifelong learning of human actions with deep neural network self-organization.

    Science.gov (United States)

    Parisi, German I; Tani, Jun; Weber, Cornelius; Wermter, Stefan

    2017-12-01

    Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning but rather learn a batch of training data with a predefined number of action classes and samples. Thus, there is the need to develop learning systems with the ability to incrementally process available perceptual cues and to adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and the adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art methods for batch learning also when a significant number of sample labels are missing or corrupted during training sessions. Additional experiments show the ability of our model to adapt to non-stationary input avoiding catastrophic interference. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  17. Dopamine Regulates Aversive Contextual Learning and Associated In Vivo Synaptic Plasticity in the Hippocampus

    Directory of Open Access Journals (Sweden)

    John I. Broussard

    2016-03-01

    Full Text Available Dopamine release during reward-driven behaviors influences synaptic plasticity. However, dopamine innervation and release in the hippocampus and its role during aversive behaviors are controversial. Here, we show that in vivo hippocampal synaptic plasticity in the CA3-CA1 circuit underlies contextual learning during inhibitory avoidance (IA) training. Immunohistochemistry and molecular techniques verified sparse dopaminergic innervation of the hippocampus from the midbrain. The long-term synaptic potentiation (LTP) underlying the learning of IA was assessed with a D1-like dopamine receptor agonist or antagonist in ex vivo hippocampal slices and in vivo in freely moving mice. Inhibition of D1-like dopamine receptors impaired memory of the IA task and prevented the training-induced enhancement of both ex vivo and in vivo LTP induction. The results indicate that dopamine-receptor signaling during an aversive contextual task regulates aversive memory retention and regulates associated synaptic mechanisms in the hippocampus that likely underlie learning.

  18. Minimal-Learning-Parameter Technique Based Adaptive Neural Sliding Mode Control of MEMS Gyroscope

    Directory of Open Access Journals (Sweden)

    Bin Xu

    2017-01-01

    Full Text Available This paper investigates an adaptive neural sliding mode controller for MEMS gyroscopes with a minimal-learning-parameter technique. Considering the system uncertainty in the dynamics, a neural network is employed for approximation. The minimal-learning-parameter technique is constructed to decrease the number of update parameters, and in this way the computation burden is greatly reduced. Sliding mode control is designed to cancel the effect of time-varying disturbance. The closed-loop stability analysis is established via a Lyapunov approach. Simulation results are presented to demonstrate the effectiveness of the method.

  19. Automated sleep stage detection with a classical and a neural learning algorithm--methodological aspects.

    Science.gov (United States)

    Schwaibold, M; Schöchlin, J; Bolz, A

    2002-01-01

    For classification tasks in biosignal processing, several strategies and algorithms can be used. Knowledge-based systems allow prior knowledge about the decision process to be integrated, both by the developer and by self-learning capabilities. For the classification stages in a sleep stage detection framework, three inference strategies were compared regarding their specific strengths: a classical signal processing approach, artificial neural networks and neuro-fuzzy systems. Methodological aspects were assessed to attain optimum performance and maximum transparency for the user. Due to their effective and robust learning behavior, artificial neural networks could be recommended for pattern recognition, while neuro-fuzzy systems performed best for the processing of contextual information.

  20. Comparison of learning anatomy with cadaveric dissection and plastic models by medical students

    International Nuclear Information System (INIS)

    Qamar, K.; Ashar, A.

    2014-01-01

    The purpose of this study at Army Medical College was to assess differences in student learning from cadaveric dissection versus plastic models, and to explore students' perceptions of the efficacy of various instructional tools used during gross anatomy practical time. Study Design: Two-phase mixed-methods sequential study. Place and Duration of Study: This study was conducted at the anatomy department of Army Medical College, Rawalpindi, Pakistan, over a period of three weeks in July 2013, after approval from the ethical review board. Participants and Methods: Quantitative phase 1 involved 50 second-year MBBS students, selected through non-probability convenience sampling. They were divided into two groups of 25 students. Group A covered the head and neck gross anatomy dissection course through cadaveric dissection and group B used plastic models. At the end of the course an MCQ-based assessment was conducted and statistically analyzed for both groups. In qualitative phase 2, two focus group discussions (FGD) with 10 second-year MBBS students were conducted to explore students' perspectives on, and preferences among, the various instructional tools used during gross anatomy practical time. The FGDs were audio-taped, transcribed, and analyzed through thematic analysis. Results: The post-test score of group A was 24.1 ± .26 and of group B 30.96 ± 6.23 (p = 0.024). Focus group discussions generated three themes (learning techniques used by students during gross anatomy practical time; preferred learning techniques; and non-preferred learning techniques). Students preferred the small-group learning method over completely self-directed study, as the study materials were carefully chosen and objectives were clearly demonstrated with directions. Cadaveric dissection and didactic teaching were not preferred. (author)

  1. Motor sequence learning-induced neural efficiency in functional brain connectivity.

    Science.gov (United States)

    Karim, Helmet T; Huppert, Theodore J; Erickson, Kirk I; Wollam, Mariegold E; Sparto, Patrick J; Sejdić, Ervin; VanSwearingen, Jessie M

    2017-02-15

    Previous studies have shown the functional neural circuitry differences before and after an explicitly learned motor sequence task, but have not assessed these changes during the process of motor skill learning. Functional magnetic resonance imaging activity was measured while participants (n=13) were asked to tap their fingers to visually presented sequences in blocks that were either the same sequence repeated (learning block) or random sequences (control block). Motor learning was associated with a decrease in brain activity during learning compared to control. Lower brain activation was noted in the posterior parietal association area and bilateral thalamus during the later periods of learning (not during the control). Compared to the control condition, we found the task-related motor learning was associated with decreased connectivity between the putamen and left inferior frontal gyrus and left middle cingulate brain regions. Motor learning was associated with changes in network activity, spatial extent, and connectivity. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Neural Correlates of Olfactory Learning: Critical Role of Centrifugal Neuromodulation

    Science.gov (United States)

    Fletcher, Max L.; Chen, Wei R.

    2010-01-01

    The mammalian olfactory system is well established for its remarkable capability of undergoing experience-dependent plasticity. Although this process involves changes at multiple stages throughout the central olfactory pathway, even the early stages of processing, such as the olfactory bulb and piriform cortex, can display a high degree of…

  3. NMDA Receptor Subunits Change after Synaptic Plasticity Induction and Learning and Memory Acquisition

    Directory of Open Access Journals (Sweden)

    María Verónica Baez

    2018-01-01

    Full Text Available NMDA ionotropic glutamate receptors (NMDARs) are crucial in activity-dependent synaptic changes and in learning and memory. NMDARs are composed of two essential GluN1 subunits and two regulatory subunits which define their pharmacological and physiological profile. In CNS structures involved in cognitive functions, such as the hippocampus and prefrontal cortex, GluN2A and GluN2B are the major regulatory subunits; their expression is dynamic and tightly regulated, but little is known about specific changes after plasticity induction or memory acquisition. Data strongly suggest that following appropriate stimulation, there is a rapid increase in surface GluN2A-NMDAR at the postsynapses, attributed to lateral receptor mobilization from adjacent locations. Whenever synaptic plasticity is induced or memory is consolidated, more GluN2A-NMDARs are assembled, likely using GluN2A from local translation and GluN1 from the local ER. Later on, NMDARs are mobilized from other pools, and there is de novo synthesis at the neuron soma. Changes in GluN1 or NMDAR levels induced by synaptic plasticity and by spatial memory formation seem to occur in different waves of NMDAR transport/expression/degradation, with a net increase at the postsynaptic side and a rise in expression at both the spine and neuronal soma. This review aims to put together that information and the proposed hypotheses.

  4. Brain Plasticity and Motor Practice in Cognitive Aging

    Directory of Open Access Journals (Sweden)

    Liuyang Cai

    2014-03-01

    Full Text Available For more than two decades, there have been extensive studies of experience-based neural plasticity exploring effective applications of brain plasticity for cognitive and motor development. Research suggests that human brains continuously undergo structural reorganization and functional changes in response to stimulations or training. From a developmental point of view, the assumption of lifespan brain plasticity has been extended to older adults in terms of the benefits of cognitive training and physical therapy. To summarize recent developments, first, we introduce the concept of neural plasticity from a developmental perspective. Secondly, we note that motor learning often refers to deliberate practice and the resulting performance enhancement and adaptability. We discuss the close interplay between neural plasticity, motor learning and cognitive aging. Thirdly, we review research on motor skill acquisition in older adults with, and without, impairments relative to aging-related cognitive decline. Finally, to enhance future research and application, we highlight the implications of neural plasticity in skills learning and cognitive rehabilitation for the aging population.

  5. Application of artificial neural network with extreme learning machine for economic growth estimation

    Science.gov (United States)

    Milačić, Ljubiša; Jović, Srđan; Vujović, Tanja; Miljković, Jovica

    2017-01-01

    The purpose of this research is to develop and apply an artificial neural network (ANN) with extreme learning machine (ELM) to forecast the gross domestic product (GDP) growth rate. The economic growth forecasting was analyzed based on agriculture, manufacturing, industry and services value added in GDP. The results were compared with an ANN trained with the back-propagation (BP) learning approach, since BP can be considered a conventional learning methodology. The reliability of the computational models was assessed based on simulation results and several statistical indicators. Based on the results, it was shown that an ANN with the ELM learning methodology can be applied effectively in GDP forecasting applications.
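The closed-form character of ELM training can be illustrated in a few lines: the input-to-hidden weights are drawn at random and frozen, and only the output weights are fitted, here by solving ridge-regularized normal equations with Gaussian elimination. This is a minimal pure-Python sketch of the idea on a toy regression problem, not the authors' GDP model; the network size, ridge term, and data are illustrative assumptions.

```python
import math
import random

def elm_train(X, y, n_hidden=12, seed=0):
    """Extreme learning machine sketch: random fixed hidden layer,
    output weights solved in closed form (no iterative training)."""
    rng = random.Random(seed)
    d = len(X[0])
    # Random, fixed input-to-hidden weights and biases.
    W = [[rng.uniform(-2, 2) for _ in range(d)] for _ in range(n_hidden)]
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]

    def hidden(x):
        return [1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + bj)))
                for w, bj in zip(W, b)]

    H = [hidden(x) for x in X]
    # Solve (H^T H + ridge * I) beta = H^T y; a small ridge term keeps
    # the normal equations well conditioned.
    ridge = 1e-6
    A = [[sum(H[k][i] * H[k][j] for k in range(len(H))) + (ridge if i == j else 0.0)
          for j in range(n_hidden)] for i in range(n_hidden)]
    c = [sum(H[k][i] * y[k] for k in range(len(H))) for i in range(n_hidden)]
    for i in range(n_hidden):                       # forward elimination with pivoting
        p = max(range(i, n_hidden), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for j in range(i + 1, n_hidden):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            c[j] -= f * c[i]
    beta = [0.0] * n_hidden
    for i in reversed(range(n_hidden)):             # back substitution
        beta[i] = (c[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n_hidden))) / A[i][i]

    return lambda x: sum(bi * hi for bi, hi in zip(beta, hidden(x)))

# Fit a toy 1-D function; only the output weights are learned.
X = [[x / 10.0] for x in range(-10, 11)]
y = [xi[0] ** 2 for xi in X]
f = elm_train(X, y)
err = max(abs(f(x) - t) for x, t in zip(X, y))
```

The speed advantage over BP comes from replacing iterative gradient descent with this single linear solve.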

  6. Precise-spike-driven synaptic plasticity: learning hetero-association of spatiotemporal spike patterns.

    Directory of Open Access Journals (Sweden)

    Qiang Yu

    Full Text Available A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.

  7. Motor learning induces plastic changes in Purkinje cell dendritic spines in the rat cerebellum.

    Science.gov (United States)

    González-Tapia, D; González-Ramírez, M M; Vázquez-Hernández, N; González-Burgos, I

    2017-12-14

    The paramedian lobule of the cerebellum is involved in learning to correctly perform motor skills through practice. Dendritic spines are dynamic structures that regulate excitatory synaptic stimulation. We studied plastic changes occurring in the dendritic spines of Purkinje cells from the paramedian lobule of rats during motor learning. Adult male rats were trained over a 6-day period using an acrobatic motor learning paradigm; the density and type of dendritic spines were determined every day during the study period using a modified version of the Golgi method. The learning curve reflected a considerable decrease in the number of errors made by rats as the training period progressed. We observed more dendritic spines on days 2 and 6, particularly more thin spines on days 1, 3, and 6, fewer mushroom spines on day 3, fewer stubby spines on day 1, and more thick spines on days 4 and 6. The initial stage of motor learning may be associated with fast processing of the underlying synaptic information combined with an apparent "silencing" of memory consolidation processes, based on the regulation of the neuronal excitability. Copyright © 2017 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  8. Precise-spike-driven synaptic plasticity: learning hetero-association of spatiotemporal spike patterns.

    Science.gov (United States)

    Yu, Qiang; Tang, Huajin; Tan, Kay Chen; Li, Haizhou

    2013-01-01

    A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.
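The update described above, the error between desired and actual output spikes gating an eligibility trace of afferent spikes, can be sketched in discrete time. This is a simplified illustration assuming a basic current-based leaky integrate-and-fire neuron and binary spike bins; the decay constants, learning rate, and spike pattern are arbitrary choices for the sketch, not values from the paper.

```python
import random

def run(weights, spikes, theta=1.0, mem_decay=0.8):
    """Simulate a simple current-based LIF neuron; returns its output spike train."""
    u, out = 0.0, []
    for t in range(len(spikes[0])):
        u = u * mem_decay + sum(w * s[t] for w, s in zip(weights, spikes))
        if u >= theta:
            out.append(1)
            u = 0.0                       # reset after an output spike
        else:
            out.append(0)
    return out

def psd_epoch(weights, spikes, desired, eta=0.05, trace_decay=0.9,
              theta=1.0, mem_decay=0.8):
    """One PSD-style pass: the spike error gates an eligibility trace,
    with positive errors causing LTP and negative errors causing LTD."""
    u = 0.0
    trace = [0.0] * len(weights)
    for t in range(len(desired)):
        for i, s in enumerate(spikes):    # exponentially decaying afferent trace
            trace[i] = trace[i] * trace_decay + s[t]
        u = u * mem_decay + sum(w * s[t] for w, s in zip(weights, spikes))
        o = 1 if u >= theta else 0
        if o:
            u = 0.0
        err = desired[t] - o              # +1 -> potentiation, -1 -> depression
        if err:
            for i in range(len(weights)):
                weights[i] += eta * err * trace[i]
    return weights

rng = random.Random(1)
T, n = 50, 10
spikes = [[1 if rng.random() < 0.2 else 0 for _ in range(T)] for _ in range(n)]
desired = [1 if t in (20, 40) else 0 for t in range(T)]
w = [0.1] * n
before = sum(a != b for a, b in zip(run(w, spikes), desired))
for _ in range(200):
    psd_epoch(w, spikes, desired)
after = sum(a != b for a, b in zip(run(w, spikes), desired))
```

After repeated epochs the number of bins in which the actual output disagrees with the desired train should not grow, reflecting the error-driven convergence described in the abstract.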

  9. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    Science.gov (United States)

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles of quantum theory. By combining the quantum mechanism with the feed-forward neural network, we defined quantum hidden neurons and connected quantum weights, and used them as the fundamental information processing units in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover search algorithm iteratively singles out the optimal parameter setting, making very efficient neural network learning possible. The quantum neurons and weights, together with Grover-search-based learning, result in a novel neural network characterized by a reduced network size, highly efficient training, and promising future applications. Simulations were carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
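The amplitude-amplification step underlying a Grover-based search can be simulated classically: an oracle flips the sign of the marked amplitude, and a diffusion operator inverts all amplitudes about their mean, so the marked item dominates after roughly (π/4)·√N iterations. A minimal sketch of that search primitive, assuming a single marked item; this is not the paper's network training loop.

```python
import math

def grover_search(n_items, marked, n_iter):
    """Classical statevector simulation of Grover's amplitude amplification."""
    amp = [1.0 / math.sqrt(n_items)] * n_items    # uniform superposition
    for _ in range(n_iter):
        amp[marked] = -amp[marked]                # oracle: flip the marked amplitude
        mean = sum(amp) / n_items                 # diffusion: inversion about the mean
        amp = [2.0 * mean - a for a in amp]
    return [a * a for a in amp]                   # measurement probabilities

N, target = 16, 3
k = int(math.pi / 4 * math.sqrt(N))               # near-optimal count: 3 for N = 16
probs = grover_search(N, target, k)
```

In the paper's setting the "items" would be candidate parameter settings and the oracle would mark low-error configurations; the quadratic speedup of the search is what makes the training efficient.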

  10. Forecasting crude oil price with an EMD-based neural network ensemble learning paradigm

    International Nuclear Information System (INIS)

    Yu, Lean; Wang, Shouyang; Lai, Kin Keung

    2008-01-01

    In this study, an empirical mode decomposition (EMD) based neural network ensemble learning paradigm is proposed for world crude oil spot price forecasting. For this purpose, the original crude oil spot price series were first decomposed into a finite, and often small, number of intrinsic mode functions (IMFs). Then a three-layer feed-forward neural network (FNN) model was used to model each of the extracted IMFs, so that the tendencies of these IMFs could be accurately predicted. Finally, the prediction results of all IMFs were combined with an adaptive linear neural network (ALNN) to formulate an ensemble output for the original crude oil price series. For verification and testing, two main crude oil price series, West Texas Intermediate (WTI) crude oil spot price and Brent crude oil spot price, were used to test the effectiveness of the proposed EMD-based neural network ensemble learning methodology. The empirical results obtained demonstrate the attractiveness of the proposed EMD-based neural network ensemble learning paradigm. (author)
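The decompose-model-recombine skeleton of this paradigm can be sketched compactly. Note the deliberate stand-ins: a moving average replaces EMD sifting, a least-squares AR(1) fit replaces the per-IMF feed-forward networks, the recombination is a plain sum rather than an ALNN, and the price series is synthetic; only the three-stage structure follows the paper.

```python
import math

def moving_average(series, window):
    """Crude trend extraction, standing in here for EMD sifting."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        out.append(sum(series[lo:i + 1]) / (i + 1 - lo))
    return out

def fit_ar1(series):
    """Least-squares AR(1) fit, standing in for the per-component FNN models."""
    x, y = series[:-1], series[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x) or 1.0
    phi = cov / var
    c = my - phi * mx
    return lambda last: c + phi * last             # one-step-ahead predictor

# Stage 1: decompose a synthetic "price" series into trend + residual.
price = [50 + 0.1 * t + 3 * math.sin(t / 3) for t in range(120)]
trend = moving_average(price, 12)
residual = [p - tr for p, tr in zip(price, trend)]
# Stage 2: model each component separately; Stage 3: recombine the forecasts.
forecast = fit_ar1(trend)(trend[-1]) + fit_ar1(residual)(residual[-1])
```

The point of the decomposition is that each component is smoother and easier to model than the raw series; the component forecasts are then aggregated into the final prediction.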

  11. Unsupervised learning by spike timing dependent plasticity in phase change memory (PCM) synapses

    Directory of Open Access Journals (Sweden)

    Stefano Ambrogio

    2016-03-01

    Full Text Available We present a novel one-transistor/one-resistor (1T1R) synapse for neuromorphic networks, based on phase change memory (PCM) technology. The synapse is capable of spike-timing dependent plasticity (STDP), where gradual potentiation relies on the set transition, namely crystallization, in the PCM, while depression is achieved via reset, or amorphization, of a chalcogenide active volume. STDP characteristics are demonstrated by experiments under variable initial conditions and numbers of pulses. Finally, we support the applicability of the 1T1R synapse for learning and recognition of visual patterns by simulations of fully connected neuromorphic networks with 2 or 3 layers with high recognition efficiency. The proposed scheme provides a feasible low-power solution for on-line unsupervised machine learning in smart reconfigurable sensors.
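The STDP behavior the device implements can be sketched with a standard pair-based timing window: pre-before-post spike pairs potentiate, post-before-pre pairs depress, and the weight is clipped to [0, 1] as a rough stand-in for the bounded conductance of a PCM cell. The window amplitudes and time constant below are illustrative assumptions, not measured device characteristics.

```python
import math

def stdp_dw(dt, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based STDP window: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, both decaying exponentially."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

def apply_stdp(weight, pre_spikes, post_spikes):
    """Accumulate all-pair updates; the weight is clipped to [0, 1],
    loosely mimicking the bounded conductance of a PCM synapse."""
    for t_post in post_spikes:
        for t_pre in pre_spikes:
            weight += stdp_dw(t_post - t_pre)
    return min(1.0, max(0.0, weight))

# Causal pairing (pre leads post by 5 ms) strengthens the synapse...
w_pot = apply_stdp(0.5, pre_spikes=[10, 30, 50], post_spikes=[15, 35, 55])
# ...while anti-causal pairing weakens it.
w_dep = apply_stdp(0.5, pre_spikes=[15, 35, 55], post_spikes=[10, 30, 50])
```

In the hardware version, the potentiation branch maps to partial crystallization (set) pulses and the depression branch to amorphization (reset) pulses.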

  12. A neural learning classifier system with self-adaptive constructivism for mobile robot control.

    Science.gov (United States)

    Hurst, Jacob; Bull, Larry

    2006-01-01

    For artificial entities to achieve true autonomy and display complex lifelike behavior, they will need to exploit appropriate adaptable learning algorithms. In this context adaptability implies flexibility guided by the environment at any given time and an open-ended ability to learn appropriate behaviors. This article examines the use of constructivism-inspired mechanisms within a neural learning classifier system architecture that exploits parameter self-adaptation as an approach to realize such behavior. The system uses a rule structure in which each rule is represented by an artificial neural network. It is shown that appropriate internal rule complexity emerges during learning at a rate controlled by the learner and that the structure indicates underlying features of the task. Results are presented in simulated mazes before moving to a mobile robot platform.

  13. Opponent appetitive-aversive neural processes underlie predictive learning of pain relief.

    Science.gov (United States)

    Seymour, Ben; O'Doherty, John P; Koltzenburg, Martin; Wiech, Katja; Frackowiak, Richard; Friston, Karl; Dolan, Raymond

    2005-09-01

    Termination of a painful or unpleasant event can be rewarding. However, whether the brain treats relief in a similar way as it treats natural reward is unclear, and the neural processes that underlie its representation as a motivational goal remain poorly understood. We used fMRI (functional magnetic resonance imaging) to investigate how humans learn to generate expectations of pain relief. Using a pavlovian conditioning procedure, we show that subjects experiencing prolonged experimentally induced pain can be conditioned to predict pain relief. This proceeds in a manner consistent with contemporary reward-learning theory (average reward/loss reinforcement learning), reflected by neural activity in the amygdala and midbrain. Furthermore, these reward-like learning signals are mirrored by opposite aversion-like signals in lateral orbitofrontal cortex and anterior cingulate cortex. This dual coding has parallels to 'opponent process' theories in psychology and promotes a formal account of prediction and expectation during pain.

  14. Continual and One-Shot Learning Through Neural Networks with Dynamic External Memory

    DEFF Research Database (Denmark)

    Lüders, Benno; Schläger, Mikkel; Korach, Aleksandra

    2017-01-01

    ... a new task is learned. This paper takes a step in overcoming this limitation by building on the recently proposed Evolving Neural Turing Machine (ENTM) approach. In the ENTM, neural networks are augmented with an external memory component that they can write to and read from, which allows them to store associations quickly and over long periods of time. The results in this paper demonstrate that the ENTM is able to perform one-shot learning in reinforcement learning tasks without catastrophic forgetting of previously stored associations. Additionally, we introduce a new ENTM default jump mechanism that makes it easier to find unused memory locations and therefore facilitates the evolution of continual learning networks. Our results suggest that augmenting evolving networks with an external memory component is not only a viable mechanism for adaptive behaviors in neuroevolution but also allows these networks ...

  15. Chaos Synchronization Using Adaptive Dynamic Neural Network Controller with Variable Learning Rates

    Directory of Open Access Journals (Sweden)

    Chih-Hong Kao

    2011-01-01

    Full Text Available This paper addresses the synchronization of chaotic gyros with unknown parameters and external disturbance via an adaptive dynamic neural network control (ADNNC) system. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic RBF (DRBF) network to online approximate an ideal controller. The DRBF network can create new hidden neurons online if the input data falls outside the coverage of the existing hidden layer, and can prune insignificant hidden neurons online. The smooth compensator is designed to compensate for the approximation error between the neural controller and the ideal controller. Moreover, the variable learning rates of the parameter adaptation laws are derived based on a discrete-type Lyapunov function to speed up the convergence rate of the tracking error. Finally, the simulation results verify that two identical nonlinear chaotic gyros can be synchronized using the proposed ADNNC scheme.
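The online create-and-prune behavior of a dynamic RBF network can be sketched for a scalar input: a hidden neuron is allocated whenever an input falls far from every existing center, output weights follow a gradient step, and insignificant neurons can be pruned. This is a toy supervised-learning illustration of the growing mechanism only, not the paper's control law; the width, creation distance, and learning rate are arbitrary.

```python
import math

class GrowingRBF:
    """Sketch of a dynamic RBF network that creates hidden neurons online
    when an input falls outside the coverage of the existing centers."""
    def __init__(self, width=0.5, create_dist=1.0, eta=0.2):
        self.centers, self.weights = [], []
        self.width, self.create_dist, self.eta = width, create_dist, eta

    def _phi(self, x):
        return [math.exp(-((x - c) ** 2) / (2 * self.width ** 2))
                for c in self.centers]

    def predict(self, x):
        return sum(w * p for w, p in zip(self.weights, self._phi(x)))

    def observe(self, x, target):
        # Create a new hidden neuron if x is far from every existing center.
        if not self.centers or min(abs(x - c) for c in self.centers) > self.create_dist:
            self.centers.append(x)
            self.weights.append(0.0)
        err = target - self.predict(x)
        for i, p in enumerate(self._phi(x)):   # gradient step on the output weights
            self.weights[i] += self.eta * err * p

    def prune(self, tol=1e-3):
        """Drop hidden neurons whose output weight has become insignificant."""
        keep = [i for i, w in enumerate(self.weights) if abs(w) > tol]
        self.centers = [self.centers[i] for i in keep]
        self.weights = [self.weights[i] for i in keep]

net = GrowingRBF()
for _ in range(200):
    for x in [0.0, 1.5, 3.0]:
        net.observe(x, math.sin(x))
```

Three distinct inputs, each farther than `create_dist` from the others, yield three hidden neurons, and repeated observations drive the prediction toward the targets.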

  16. Stimulating neural plasticity with real-time fMRI neurofeedback in Huntington's disease: A proof of concept study.

    Science.gov (United States)

    Papoutsi, Marina; Weiskopf, Nikolaus; Langbehn, Douglas; Reilmann, Ralf; Rees, Geraint; Tabrizi, Sarah J

    2018-03-01

    Novel methods that stimulate neuroplasticity are increasingly being studied to treat neurological and psychiatric conditions. We sought to determine whether real-time fMRI neurofeedback training is feasible in Huntington's disease (HD), and assess any factors that contribute to its effectiveness. In this proof-of-concept study, we used this technique to train 10 patients with HD to volitionally regulate the activity of their supplementary motor area (SMA). We collected detailed behavioral and neuroimaging data before and after training to examine changes of brain function and structure, and cognitive and motor performance. We found that patients overall learned to increase activity of the target region during training with variable effects on cognitive and motor behavior. Improved cognitive and motor performance after training predicted increases in pre-SMA grey matter volume, fMRI activity in the left putamen, and increased SMA-left putamen functional connectivity. Although we did not directly target the putamen and corticostriatal connectivity during neurofeedback training, our results suggest that training the SMA can lead to regulation of associated networks with beneficial effects in behavior. We conclude that neurofeedback training can induce plasticity in patients with Huntington's disease despite the presence of neurodegeneration, and the effects of training a single region may engage other regions and circuits implicated in disease pathology. © 2017 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.

  17. Relationship Between Non-invasive Brain Stimulation-induced Plasticity and Capacity for Motor Learning.

    Science.gov (United States)

    López-Alonso, Virginia; Cheeran, Binith; Fernández-del-Olmo, Miguel

    2015-01-01

    Cortical plasticity plays a key role in motor learning (ML). Non-invasive brain stimulation (NIBS) paradigms have been used to modulate plasticity in the human motor cortex in order to facilitate ML. However, little is known about the relationship between NIBS-induced plasticity over M1 and ML capacity. We tested whether NIBS-induced MEP changes are related to ML capacity. 56 subjects participated in three NIBS sessions (paired associative stimulation, anodal transcranial direct current stimulation and intermittent theta-burst stimulation) and in three lab-based ML task sessions (serial reaction time, visuomotor adaptation and sequential visual isometric pinch task). After clustering the patterns of response to the different NIBS protocols, we compared the ML variables between the different patterns found. We used regression analysis to explore further the relationship between ML capacity and summary measures of the MEP change. We ran correlations with the "responders" group only. We found no differences in ML variables between clusters. Greater response to NIBS protocols may be predictive of poor performance within certain blocks of the VAT. "Responders" to AtDCS and to iTBS showed significantly faster reaction times than "non-responders." However, the physiological significance of these results is uncertain. MEP changes induced in M1 by PAS, AtDCS and iTBS appear to have little, if any, association with the ML capacity tested with the SRTT, the VAT and the SVIPT. However, cortical excitability changes induced in M1 by AtDCS and iTBS may be related to reaction time and retention of newly acquired skills in certain motor learning tasks. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Cerebellar Plasticity and Motor Learning Deficits in a Copy Number Variation Mouse Model of Autism

    Science.gov (United States)

    Piochon, Claire; Kloth, Alexander D; Grasselli, Giorgio; Titley, Heather K; Nakayama, Hisako; Hashimoto, Kouichi; Wan, Vivian; Simmons, Dana H; Eissa, Tahra; Nakatani, Jin; Cherskov, Adriana; Miyazaki, Taisuke; Watanabe, Masahiko; Takumi, Toru; Kano, Masanobu; Wang, Samuel S-H; Hansel, Christian

    2014-01-01

    A common feature of autism spectrum disorder (ASD) is the impairment of motor control and learning, occurring in a majority of children with autism, consistent with perturbation in cerebellar function. Here we report alterations in motor behavior and cerebellar synaptic plasticity in a mouse model (patDp/+) for the human 15q11-13 duplication, one of the most frequently observed genetic aberrations in autism. These mice show ASD-resembling social behavior deficits. We find that in patDp/+ mice delay eyeblink conditioning—a form of cerebellum-dependent motor learning—is impaired, and observe deregulation of a putative cellular mechanism for motor learning, long-term depression (LTD) at parallel fiber-Purkinje cell synapses. Moreover, developmental elimination of surplus climbing fibers—a model for activity-dependent synaptic pruning—is impaired. These findings point to deficits in synaptic plasticity and pruning as potential causes for motor problems and abnormal circuit development in autism. PMID:25418414

  19. Neural Pattern Similarity in the Left IFG and Fusiform Is Associated with Novel Word Learning

    Science.gov (United States)

    Qu, Jing; Qian, Liu; Chen, Chuansheng; Xue, Gui; Li, Huiling; Xie, Peng; Mei, Leilei

    2017-01-01

    Previous studies have revealed that greater neural pattern similarity across repetitions is associated with better subsequent memory. In this study, we used an artificial language training paradigm and representational similarity analysis to examine whether neural pattern similarity across repetitions before training was associated with post-training behavioral performance. Twenty-four native Chinese speakers were trained to learn a logographic artificial language for 12 days and behavioral performance was recorded using the word naming and picture naming tasks. Participants were scanned while performing a passive viewing task before training, after 4-day training and after 12-day training. Results showed that pattern similarity in the left pars opercularis (PO) and fusiform gyrus (FG) before training was negatively associated with reaction time (RT) in both word naming and picture naming tasks after training. These results suggest that neural pattern similarity is an effective neurofunctional predictor of novel word learning in addition to word memory. PMID:28878640

  20. Neural Pattern Similarity in the Left IFG and Fusiform Is Associated with Novel Word Learning

    Directory of Open Access Journals (Sweden)

    Jing Qu

    2017-08-01

    Full Text Available Previous studies have revealed that greater neural pattern similarity across repetitions is associated with better subsequent memory. In this study, we used an artificial language training paradigm and representational similarity analysis to examine whether neural pattern similarity across repetitions before training was associated with post-training behavioral performance. Twenty-four native Chinese speakers were trained to learn a logographic artificial language for 12 days and behavioral performance was recorded using the word naming and picture naming tasks. Participants were scanned while performing a passive viewing task before training, after 4-day training and after 12-day training. Results showed that pattern similarity in the left pars opercularis (PO) and fusiform gyrus (FG) before training was negatively associated with reaction time (RT) in both word naming and picture naming tasks after training. These results suggest that neural pattern similarity is an effective neurofunctional predictor of novel word learning in addition to word memory.

  1. LEARNING ALGORITHM EFFECT ON MULTILAYER FEED FORWARD ARTIFICIAL NEURAL NETWORK PERFORMANCE IN IMAGE CODING

    Directory of Open Access Journals (Sweden)

    OMER MAHMOUD

    2007-08-01

    Full Text Available One of the essential factors that affect the performance of artificial neural networks is the learning algorithm. This paper examines the performance of the multilayer feed-forward artificial neural network in image compression using different learning algorithms. Based on gradient descent, conjugate gradient and quasi-Newton techniques, three different error back-propagation algorithms were developed and used to train two types of neural networks: a single-hidden-layer network and a three-hidden-layer network. The essence of this study is to investigate the most efficient and effective training methods for use in image compression and its subsequent applications. The obtained results show that the quasi-Newton-based algorithm performs better than the other two algorithms.
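The difference between the first two families of training techniques can be seen on a toy quadratic loss, where linear conjugate gradient reaches the minimizer in n steps while plain gradient descent crawls along the ill-conditioned direction. This is a minimal sketch standing in for the full back-propagation setting; the quadratic and the step size are illustrative choices.

```python
def grad(A, b, w):
    """Gradient of the quadratic loss f(w) = 0.5 * w^T A w - b^T w."""
    return [sum(A[i][j] * w[j] for j in range(len(w))) - b[i] for i in range(len(w))]

def gradient_descent(A, b, steps, lr=0.1):
    """Plain fixed-step gradient descent."""
    w = [0.0] * len(b)
    for _ in range(steps):
        g = grad(A, b, w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

def conjugate_gradient(A, b, steps):
    """Linear CG: each search direction is A-conjugate to the previous
    ones, so on a quadratic it converges in at most n steps."""
    w = [0.0] * len(b)
    r = [-gi for gi in grad(A, b, w)]            # residual = negative gradient
    d = r[:]
    for _ in range(steps):
        Ad = [sum(A[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
        rr = sum(ri * ri for ri in r)
        alpha = rr / sum(di * adi for di, adi in zip(d, Ad))
        w = [wi + alpha * di for wi, di in zip(w, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        beta = sum(ri * ri for ri in r) / rr     # Fletcher-Reeves coefficient
        d = [ri + beta * di for ri, di in zip(r, d)]
    return w

# Ill-conditioned 2-D quadratic with minimiser w* = (1, 1):
A = [[10.0, 0.0], [0.0, 1.0]]
b = [10.0, 1.0]
w_cg = conjugate_gradient(A, b, steps=2)
w_gd = gradient_descent(A, b, steps=2, lr=0.1)
```

Quasi-Newton methods go further still by building a running approximation of the inverse Hessian, which is why they tend to win in studies like the one above.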

  2. The role of plastic changes in the motor cortex and spinal cord for motor learning

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo; Lundbye-Jensen, Jesper

    2010-01-01

    Adaptive changes in the efficacy of neural circuitries at different sites of the central nervous system are the basis of the acquisition of new motor skills. Non-invasive human imaging and electrophysiological experiments have demonstrated that the primary motor cortex and spinal cord circuitries are key players in the early stages of skill acquisition and consolidation of motor learning. Expansion of the cortical representation of the trained muscles, changes in corticomuscular coupling and changes in stretch reflex activity are thus all markers of neuroplastic changes accompanying early skill acquisition. We have shown in recent experiments that sensory feedback from the active muscles plays a surprisingly specific role at this stage of learning. Following motor skill training, repeated activation of sensory afferents from the muscle that had been involved in a previous training session interfered ...

  3. Ventral Tegmental Area and Substantia Nigra Neural Correlates of Spatial Learning

    Science.gov (United States)

    Martig, Adria K.; Mizumori, Sheri J. Y.

    2011-01-01

    The ventral tegmental area (VTA) and substantia nigra pars compacta (SNc) may provide modulatory signals that, respectively, influence hippocampal (HPC)- and striatal-dependent memory. Electrophysiological studies investigating neural correlates of learning and memory of dopamine (DA) neurons during classical conditioning tasks have found DA…

  4. A Closer Look at Deep Learning Neural Networks with Low-level Spectral Periodicity Features

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Kereliuk, Corey; Pikrakis, Aggelos

    2014-01-01

    Systems built using deep learning neural networks trained on low-level spectral periodicity features (DeSPerF) reproduced the most “ground truth” of the systems submitted to the MIREX 2013 task, “Audio Latin Genre Classification.” To answer why this was the case, we take a closer look...

  5. Identifying beneficial task relations for multi-task learning in deep neural networks

    DEFF Research Database (Denmark)

    Bingel, Joachim; Søgaard, Anders

    2017-01-01

    Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP...

  6. Hyperresponsiveness of the Neural Fear Network During Fear Conditioning and Extinction Learning in Male Cocaine Users

    NARCIS (Netherlands)

    Kaag, A.M.; Levar, N.; Woutersen, K.; Homberg, J.R.; Brink, W. van den; Reneman, L.; Wingen, G. van

    2016-01-01

    OBJECTIVE: The authors investigated whether cocaine use disorder is associated with abnormalities in the neural underpinnings of aversive conditioning and extinction learning, as these processes may play an important role in the development and persistence of drug abuse. METHOD: Forty male regular

  7. Learning behavior and temporary minima of two-layer neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the occurrence of temporary minima during training of a single-output, two-layer neural network, with learning according to the back-propagation algorithm. A new vector decomposition method is introduced, which simplifies the mathematical analysis of

  8. The interchangeability of learning rate and gain in backpropagation neural networks

    NARCIS (Netherlands)

    Thimm, G.; Moerland, P.; Fiesler, E.

    1996-01-01

    The backpropagation algorithm is widely used for training multilayer neural networks. In this publication the gain of its activation function(s) is investigated. In specific, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights.
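The equivalence can be checked numerically for the forward pass: a two-layer sigmoid network with activation gain c produces exactly the same outputs as a unit-gain network whose weights are scaled by c (the cited result further gives the corresponding learning-rate rescaling under backpropagation, which this sketch does not demonstrate). The weights and inputs below are arbitrary.

```python
import math

def forward(x, w1, w2, gain):
    """Two-layer network with sigmoid activations of adjustable gain."""
    act = lambda a: 1.0 / (1.0 + math.exp(-gain * a))
    h = [act(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return act(sum(wi * hi for wi, hi in zip(w2, h)))

x = [0.3, -0.7]
w1 = [[0.5, -0.2], [0.1, 0.8]]
w2 = [1.2, -0.4]
c = 2.5

out_gained = forward(x, w1, w2, gain=c)
# Same function with unit gain after scaling every weight layer by c:
# sigma(c * w . x) == sigma(1 * (c*w) . x), layer by layer.
w1s = [[c * w for w in row] for row in w1]
w2s = [c * w for w in w2]
out_scaled = forward(x, w1s, w2s, gain=1.0)
```

Because the two parameterizations define the same function, the gain is a redundant hyperparameter once weights and learning rate are free to rescale.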

  9. The neural coding of feedback learning across child and adolescent development

    NARCIS (Netherlands)

    Peters, S.; Braams, B.R.; Raijmakers, M.E.J.; Koolschijn, P.C.M.P.; Crone, E.A.

    2014-01-01

    The ability to learn from environmental cues is an important contributor to successful performance in a variety of settings, including school. Despite the progress in unraveling the neural correlates of cognitive control in childhood and adolescence, relatively little is known about how these brain

  10. Learning Errors by Radial Basis Function Neural Networks and Regularization Networks

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman; Vidnerová, Petra

    2009-01-01

    Vol. 1, No. 2 (2009), p. 49-57 ISSN 2005-4262 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords: neural network * RBF networks * regularization * learning Subject RIV: IN - Informatics, Computer Science http://www.sersc.org/journals/IJGDC/vol2_no1/5.pdf

  11. Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them

    Science.gov (United States)

    Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.

    2011-01-01

    Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…

  12. Consensus-based distributed cooperative learning from closed-loop neural control systems.

    Science.gov (United States)

    Chen, Weisheng; Hua, Shaoyong; Zhang, Huaguang

    2015-02-01

    In this paper, the neural tracking problem is addressed for a group of uncertain nonlinear systems where the system structures are identical but the reference signals are different. This paper focuses on studying the learning capability of neural networks (NNs) during the control process. First, we propose a novel control scheme called the distributed cooperative learning (DCL) control scheme, establishing a communication topology among the adaptive laws of NN weights so that they share their learned knowledge online. It is further proved that if the communication topology is undirected and connected, all estimated weights of the NNs converge to small neighborhoods around their optimal values over a domain consisting of the union of all state orbits. Second, as a corollary, it is shown that the conclusion on deterministic learning still holds in the decentralized adaptive neural control scheme, where, however, the estimated weights of the NNs converge to small neighborhoods of the optimal values only along their own state orbits. Thus, the learned controllers obtained by the DCL scheme have better generalization capability than those obtained by the decentralized learning method. A simulation example is provided to verify the effectiveness and advantages of the proposed control schemes.

  13. Developmental song learning as a model to understand neural mechanisms that limit and promote the ability to learn.

    Science.gov (United States)

    London, Sarah E

    2017-11-20

    Songbirds famously learn their vocalizations. Some species can learn continuously, others seasonally, and still others just once. The zebra finch (Taeniopygia guttata) learns to sing during a single developmental "Critical Period," a restricted phase during which a specific experience has profound and permanent effects on brain function and behavioral patterns. The zebra finch can therefore provide fundamental insight into features that promote and limit the ability to acquire complex learned behaviors. For example, what properties permit the brain to come "on-line" for learning? How does experience become encoded to prevent future learning? What features define the brain in receptive compared to closed learning states? This piece will focus on epigenomic, genomic, and molecular levels of analysis that operate on the timescales of development and complex behavioral learning. Existing data will be discussed as they relate to Critical Period learning, and strategies for future studies to more directly address these questions will be considered. Birdsong learning is a powerful model for advancing knowledge of the biological intersections of maturation and experience. Lessons from its study not only have implications for understanding developmental song learning, but also broader questions of learning potential and the enduring effects of early life experience on neural systems and behavior. Copyright © 2017. Published by Elsevier B.V.

  14. Recruitment and Consolidation of Cell Assemblies for Words by Way of Hebbian Learning and Competition in a Multi-Layer Neural Network.

    Science.gov (United States)

    Garagnani, Max; Wennekers, Thomas; Pulvermüller, Friedemann

    2009-06-01

    Current cognitive theories postulate either localist representations of knowledge or fully overlapping, distributed ones. We use a connectionist model that closely replicates known anatomical properties of the cerebral cortex and neurophysiological principles to show that Hebbian learning in a multi-layer neural network leads to memory traces (cell assemblies) that are both distributed and anatomically distinct. Taking the example of word learning based on action-perception correlation, we document mechanisms underlying the emergence of these assemblies, especially (i) the recruitment of neurons and consolidation of connections defining the kernel of the assembly, along with (ii) the pruning of the cell assembly's halo (consisting of very weakly connected cells). We found that, whereas a covariance-based learning rule led to significant overlap and merging of assemblies, a neurobiologically grounded synaptic plasticity rule with fixed LTP/LTD thresholds produced minimal overlap and prevented merging, exhibiting competitive learning behaviour. Our results are discussed in light of current theories of language and memory. As the simulations with neurobiologically realistic neural networks reported here demonstrate the spontaneous emergence of lexical representations that are both cortically dispersed and anatomically distinct, both localist and distributed cognitive accounts receive partial support.
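
    The contrast between a covariance rule and a fixed-threshold LTP/LTD rule can be conveyed with a toy update function. The sketch below is a hypothetical simplification (the thresholds, rates, and soft bound are illustrative choices, not the model's actual equations): synapses from active inputs are potentiated only when postsynaptic activity is strong and depressed in an intermediate range, which is the mechanism that drives competition between assemblies.

    ```python
    import numpy as np

    def hebb_fixed_thresholds(w, pre, post, theta_ltp=0.5, theta_ltd=0.1, lr=0.1):
        """Potentiate synapses from active inputs when postsynaptic activity
        exceeds theta_ltp; depress them between theta_ltd and theta_ltp;
        do nothing below theta_ltd. Weights are soft-bounded in [0, 1]."""
        dw = np.zeros_like(w)
        active = pre > 0
        if post >= theta_ltp:
            dw[active] = lr * (1.0 - w[active])      # LTP toward the upper bound
        elif post >= theta_ltd:
            dw[active] = -lr * w[active]             # LTD toward zero
        return w + dw

    w = np.full(4, 0.5)
    pre = np.array([1.0, 1.0, 0.0, 0.0])
    w_strong = hebb_fixed_thresholds(w, pre, post=0.9)  # strong drive -> LTP
    w_weak = hebb_fixed_thresholds(w, pre, post=0.3)    # weak drive -> LTD

    assert np.all(w_strong[:2] > 0.5) and np.all(w_weak[:2] < 0.5)
    assert np.all(w_strong[2:] == 0.5)                  # inactive inputs unchanged
    ```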

  15. Learning, memory and synaptic plasticity in hippocampus in rats exposed to sevoflurane.

    Science.gov (United States)

    Xiao, Hongyan; Liu, Bing; Chen, Yali; Zhang, Jun

    2016-02-01

    Developmental exposure to volatile anesthetics has been associated with cognitive deficits at adulthood. Rodent studies have revealed impairments in performance in learning tasks involving the hippocampus. However, how the duration of anesthesia exposure impacts hippocampal synaptic plasticity, learning, and memory is not yet fully elucidated. On postnatal day 7 (P7), rat pups were divided into 3 groups: a control group (n=30), 3% sevoflurane treatment for 1h (Sev 1h group, n=30), and 3% sevoflurane treatment for 6h (Sev 6h group, n=28). Following anesthesia, synaptic vesicle-associated proteins, dendrite spine density, and synapse ultrastructure were measured using western blotting, Golgi staining, and transmission electron microscopy (TEM) on P21. In addition, the effects of sevoflurane treatment on long-term potentiation (LTP) and long-term depression (LTD), two physiological correlates of memory, were studied in CA1 subfields of the hippocampus, using electrophysiological recordings of field potentials in hippocampal slices on P35-42. The rats' neurocognitive performance was assessed at 2 months of age, using the Morris water maze and novel-object recognition tasks. Our results showed that neonatal exposure to 3% sevoflurane for 6h results in reduced spine density of apical dendrites along with elevated expression of synaptic vesicle-associated proteins (SNAP-25 and syntaxin), and synaptic ultrastructure damage in the hippocampus. The electrophysiological evidence indicated that hippocampal LTP, but not LTD, was inhibited and that learning and memory performance were impaired in two behavioral tasks in the Sev 6h group. In contrast, lesser structural and functional damage in the hippocampus was observed in the Sev 1h group. Our data showed that 6-h exposure of the developing brain to 3% sevoflurane could result in synaptic plasticity impairment in the hippocampus and spatial and nonspatial hippocampal-dependent learning and memory deficits. In contrast, shorter…

  16. Vicarious Neural Processing of Outcomes during Observational Learning

    NARCIS (Netherlands)

    Monfardini, Elisabetta; Gazzola, Valeria; Boussaoud, Driss; Brovelli, Andrea; Keysers, Christian; Wicker, Bruno

    2013-01-01

    Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on

  17. Learning Orthographic Structure with Sequential Generative Neural Networks

    Science.gov (United States)

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-01-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in…

  18. A Newton-type neural network learning algorithm

    International Nuclear Information System (INIS)

    Ivanov, V.V.; Puzynin, I.V.; Purehvdorzh, B.

    1993-01-01

    First- and second-order learning methods for feed-forward multilayer networks are considered. A Newton-type algorithm is proposed and compared with the common back-propagation algorithm. It is shown that the proposed algorithm provides better learning quality. Some recommendations for their usage are given. 11 refs.; 1 fig.; 1 tab
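
    The flavor of the comparison can be conveyed on a one-parameter toy problem. The sketch below is illustrative only (it uses a Gauss-Newton step on a single tanh unit, not the authors' algorithm): curvature-aware updates reach the optimum in far fewer iterations than plain gradient descent on the same squared-error loss.

    ```python
    import math

    # Fit w so that tanh(w * x) matches a target produced by w_true = 2.0.
    x, y_star = 1.0, math.tanh(2.0)

    def residual(w):   # model output minus target
        return math.tanh(w * x) - y_star

    def dfdw(w):       # derivative of tanh(w*x) with respect to w
        return (1.0 - math.tanh(w * x) ** 2) * x

    w_gd = w_newton = 1.0
    for _ in range(20):
        # Plain gradient descent on 0.5 * residual^2, step size 1.0
        w_gd -= 1.0 * residual(w_gd) * dfdw(w_gd)
        # Gauss-Newton step: scale the gradient by the inverse squared Jacobian
        r, j = residual(w_newton), dfdw(w_newton)
        w_newton -= r * j / (j * j)

    assert abs(residual(w_newton)) < abs(residual(w_gd))
    ```

    On this problem the Gauss-Newton iterate converges to w = 2 to machine precision within a handful of steps, while gradient descent crawls once the local gradient flattens.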

  19. Time to rethink the neural mechanisms of learning and memory.

    Science.gov (United States)

    Gallistel, Charles R; Balsam, Peter D

    2014-02-01

    Most studies in the neurobiology of learning assume that the underlying learning process is a pairing - dependent change in synaptic strength that requires repeated experience of events presented in close temporal contiguity. However, much learning is rapid and does not depend on temporal contiguity, which has never been precisely defined. These points are well illustrated by studies showing that the temporal relations between events are rapidly learned- even over long delays- and that this knowledge governs the form and timing of behavior. The speed with which anticipatory responses emerge in conditioning paradigms is determined by the information that cues provide about the timing of rewards. The challenge for understanding the neurobiology of learning is to understand the mechanisms in the nervous system that encode information from even a single experience, the nature of the memory mechanisms that can encode quantities such as time, and how the brain can flexibly perform computations based on this information. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Parallelization of learning problems by artificial neural networks. Application in external radiotherapy

    International Nuclear Information System (INIS)

    Sauget, M.

    2007-12-01

    This research concerns the application of neural networks in the external radiotherapy domain. The goal is to elaborate a new system for evaluating radiation dose distributions in heterogeneous environments, with the final objective of building a complete tool kit for evaluating optimal treatment plans. My first research point is the design of an incremental learning algorithm. The interest of my work is to combine different optimizations specialized in function interpolation and to propose a new algorithm that allows the neural network architecture to change during the learning phase. This algorithm minimises the final size of the neural network while keeping good accuracy. The second part of my research is to parallelize the previous incremental learning algorithm, in order to increase the speed of the learning step and the size of the learned dataset needed in a clinical case. For that, our incremental learning algorithm presents an original data decomposition with overlapping, together with a fault-tolerance mechanism. My last research point is a fast and accurate algorithm for computing the radiation dose deposited in any heterogeneous environment. At present, the existing solutions are not optimal: the fast solutions are not accurate and do not give an optimal treatment plan, while the accurate solutions are far too slow to be used in a clinical context. Our algorithm addresses this problem by bringing both rapidity and accuracy. The concept is to use an adequately trained neural network together with a mechanism that takes environment changes into account. The advantage of this algorithm is that it avoids the use of a complex physical code while keeping good accuracy and reasonable computation times. (author)

  1. Sharpened cortical tuning and enhanced cortico-cortical communication contribute to the long-term neural mechanisms of visual motion perceptual learning.

    Science.gov (United States)

    Chen, Nihong; Bi, Taiyong; Zhou, Tiangang; Li, Sheng; Liu, Zili; Fang, Fang

    2015-07-15

    Much has been debated about whether the neural plasticity mediating perceptual learning takes place at the sensory or decision-making stage in the brain. To investigate this, we trained human subjects in a visual motion direction discrimination task. Behavioral performance and BOLD signals were measured before, immediately after, and two weeks after training. Parallel to subjects' long-lasting behavioral improvement, the neural selectivity in V3A and the effective connectivity from V3A to IPS (intraparietal sulcus, a motion decision-making area) exhibited a persistent increase for the trained direction. Moreover, the improvement was well explained by a linear combination of the selectivity and connectivity increases. These findings suggest that the long-term neural mechanisms of motion perceptual learning are implemented by sharpening cortical tuning to trained stimuli at the sensory processing stage, as well as by optimizing the connections between sensory and decision-making areas in the brain. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Olfactory learning without the mushroom bodies: Spiking neural network models of the honeybee lateral antennal lobe tract reveal its capacities in odour memory tasks of varied complexities.

    Science.gov (United States)

    MaBouDi, HaDi; Shimazaki, Hideaki; Giurfa, Martin; Chittka, Lars

    2017-06-01

    The honeybee olfactory system is a well-established model for understanding functional mechanisms of learning and memory. Olfactory stimuli are first processed in the antennal lobe, and then transferred to the mushroom body and lateral horn through dual pathways termed medial and lateral antennal lobe tracts (m-ALT and l-ALT). Recent studies reported that honeybees can perform elemental learning by associating an odour with a reward signal even after lesions in m-ALT or blocking the mushroom bodies. To test the hypothesis that the lateral pathway (l-ALT) is sufficient for elemental learning, we modelled local computation within glomeruli in antennal lobes with axons of projection neurons connecting to a decision neuron (LHN) in the lateral horn. We show that inhibitory spike-timing dependent plasticity (modelling non-associative plasticity by exposure to different stimuli) in the synapses from local neurons to projection neurons decorrelates the projection neurons' outputs. The strength of the decorrelations is regulated by global inhibitory feedback within antennal lobes to the projection neurons. By additionally modelling octopaminergic modification of synaptic plasticity among local neurons in the antennal lobes and projection neurons to LHN connections, the model can discriminate and generalize olfactory stimuli. Although positive patterning can be accounted for by the l-ALT model, negative patterning requires further processing and mushroom body circuits. Thus, our model explains several, but not all, types of associative olfactory learning and generalization by a few neural layers of odour processing in the l-ALT. As an outcome of the combination of non-associative and associative learning, the modelling approach allows us to link changes in the structural organization of honeybees' antennal lobes with their behavioural performances over the course of their lives.

  3. Dynamic Learning from Adaptive Neural Control of Uncertain Robots with Guaranteed Full-State Tracking Precision

    Directory of Open Access Journals (Sweden)

    Min Wang

    2017-01-01

    Full Text Available A dynamic learning method is developed for an uncertain n-link robot with unknown system dynamics, achieving predefined performance attributes on the link angular position and velocity tracking errors. For a known nonsingular initial robotic condition, performance functions and unconstrained transformation errors are employed to prevent the violation of the full-state tracking error constraints. By combining two independent Lyapunov functions and a radial basis function (RBF) neural network (NN) approximator, a novel and simple adaptive neural control scheme is proposed for the dynamics of the unconstrained transformation errors, which guarantees uniform ultimate boundedness of all the signals in the closed-loop system. In the steady-state control process, the RBF NNs are verified to satisfy the partial persistent excitation (PE) condition. Subsequently, an appropriate state transformation is adopted to achieve the accurate convergence of neural weight estimates. The corresponding experienced knowledge on unknown robotic dynamics is stored in NNs with constant neural weight values. Using the stored knowledge, a static neural learning controller is developed to improve the full-state tracking performance. A comparative simulation study on a 2-link robot illustrates the effectiveness of the proposed scheme.

  4. Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System

    Science.gov (United States)

    Williams-Hayes, Peggy S.

    2004-01-01

    The NASA F-15 Intelligent Flight Control System project team developed a series of flight control concepts designed to demonstrate neural network-based adaptive controller benefits, with the objective to develop and flight-test control systems using neural network technology to optimize aircraft performance under nominal conditions and stabilize the aircraft under failure conditions. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to baseline aerodynamic derivatives in flight. This open-loop flight test set was performed in preparation for a future phase in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. Flight data examination shows that addition of flight-identified aerodynamic derivative increments into the simulation improved aircraft pitch handling qualities.

  5. Picasso: A Modular Framework for Visualizing the Learning Process of Neural Network Image Classifiers

    Directory of Open Access Journals (Sweden)

    Ryan Henderson

    2017-09-01

    Full Text Available Picasso is a free open-source (Eclipse Public License web application written in Python for rendering standard visualizations useful for analyzing convolutional neural networks. Picasso ships with occlusion maps and saliency maps, two visualizations which help reveal issues that evaluation metrics like loss and accuracy might hide: for example, learning a proxy classification task. Picasso works with the Tensorflow deep learning framework, and Keras (when the model can be loaded into the Tensorflow backend. Picasso can be used with minimal configuration by deep learning researchers and engineers alike across various neural network architectures. Adding new visualizations is simple: the user can specify their visualization code and HTML template separately from the application code.

  6. Probabilistic neural network playing and learning Tic-Tac-Toe

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Somol, Petr; Pudil, Pavel

    2005-01-01

    Roč. 26, č. 12 (2005), s. 1866-1873 ISSN 0167-8655 R&D Projects: GA ČR GA402/02/1271; GA ČR GA402/03/1310; GA MŠk 1M0572 Grant - others: Commission EU(XE) FP6-507772 Institutional research plan: CEZ:AV0Z10750506 Keywords: neural networks * distribution mixtures * playing games Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.138, year: 2005

  7. Biologically-inspired On-chip Learning in Pulsed Neural Networks

    DEFF Research Database (Denmark)

    Lehmann, Torsten; Woodburn, Robin

    1999-01-01

    Self-learning chips to implement many popular ANN (artificial neural network) algorithms are very difficult to design. We explain why this is so and say what lessons previous work teaches us in the design of self-learning systems. We offer a contribution to the "biologically-inspired" approach, explaining what we mean by this term and providing an example of a robust, self-learning design that can solve simple classical-conditioning tasks. We give details of the design of individual circuits to perform component functions, which can then be combined into a network to solve the task. We argue…

  8. Three-dimensional neural net for learning visuomotor coordination of a robot arm.

    Science.gov (United States)

    Martinetz, T M; Ritter, H J; Schulten, K J

    1990-01-01

    An extension of T. Kohonen's (1982) self-organizing mapping algorithm together with an error-correction scheme based on the Widrow-Hoff learning rule is applied to develop a learning algorithm for the visuomotor coordination of a simulated robot arm. Learning occurs by a sequence of trial movements without the need for an external teacher. Using input signals from a pair of cameras, the closed robot arm system is able to reduce its positioning error to about 0.3% of the linear dimensions of its work space. This is achieved by choosing the connectivity of a three-dimensional lattice consisting of the units of the neural net.
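
    A drastically reduced version of this scheme can be sketched in a few lines. The code below is a hypothetical 1-D analogue (one input dimension, with a linear target function standing in for the arm kinematics; all rates and sizes are illustrative): a Kohonen-style winner-take-all map tiles the input space while a Widrow-Hoff delta rule corrects the motor output stored at each node toward the observed outcome.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    target = lambda x: 2.0 * x            # stand-in for the visuomotor mapping

    n_nodes = 20
    w_in = np.linspace(0, 1, n_nodes)     # input weights (receptive fields)
    w_out = np.zeros(n_nodes)             # motor output stored at each node

    for t in range(4000):
        x = rng.uniform(0, 1)
        win = int(np.argmin(np.abs(w_in - x)))        # winner-take-all
        # Kohonen neighborhood update of the receptive fields
        for j in range(n_nodes):
            h = np.exp(-((j - win) ** 2) / 2.0)
            w_in[j] += 0.05 * h * (x - w_in[j])
        # Widrow-Hoff (delta rule) correction of the winner's motor output
        w_out[win] += 0.2 * (target(x) - w_out[win])

    x_test = np.linspace(0.05, 0.95, 10)
    pred = w_out[[int(np.argmin(np.abs(w_in - xi))) for xi in x_test]]
    err = float(np.mean(np.abs(pred - target(x_test))))
    assert err < 0.25   # residual error set by the map's quantization
    ```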

  9. The neural basis of reversal learning: An updated perspective

    Science.gov (United States)

    Izquierdo, Alicia; Brigman, Jonathan L.; Radke, Anna K.; Rudebeck, Peter H.; Holmes, Andrew

    2016-01-01

    Reversal learning paradigms are among the most widely used tests of cognitive flexibility and have been used as assays, across species, for altered cognitive processes in a host of neuropsychiatric conditions. Based on recent studies in humans, non-human primates, and rodents, the notion that reversal learning tasks primarily measure response inhibition has been revised. In this review, we describe how cognitive flexibility is measured by reversal learning and discuss new definitions of the construct validity of the task that are serving as a heuristic to guide future research in this field. We also provide an update on the available evidence implicating certain cortical and subcortical brain regions in the mediation of reversal learning, and an overview of the principal neurotransmitter systems involved. PMID:26979052

  10. Network Enabled - Unresolved Residual Analysis and Learning (NEURAL)

    Science.gov (United States)

    Temple, D.; Poole, M.; Camp, M.

    Since the advent of modern computational capacity, machine learning algorithms and techniques have served as a method through which to solve numerous challenging problems. However, for machine learning methods to be effective and robust, sufficient data sets must be available; specifically, in the space domain, these are generally difficult to acquire. Rapidly evolving commercial space-situational awareness companies boast the capability to collect hundreds of thousands of nightly observations of resident space objects (RSOs) using a ground-based optical sensor network. This provides the ability to maintain custody of and characterize thousands of objects persistently. With this information available, novel deep learning techniques can be implemented. The technique discussed in this paper utilizes deep learning to distinguish between nightly data collects with and without maneuvers. Implementation of these techniques will allow the data collected from optical ground-based networks to enable well-informed and timely space-domain decision making.

  11. "FORCE" learning in recurrent neural networks as data assimilation

    Science.gov (United States)

    Duane, Gregory S.

    2017-12-01

    It is shown that the "FORCE" algorithm for learning in arbitrarily connected networks of simple neuronal units can be cast as a Kalman Filter, with a particular state-dependent form for the background error covariances. The resulting interpretation has implications for initialization of the learning algorithm, leads to an extension to include interactions between the weight updates for different neurons, and can represent relationships within groups of multiple target output signals.
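
    The core of FORCE is a recursive least-squares (RLS) update of the readout weights, which is exactly the Kalman-filter structure the paper points to. The sketch below is a simplified illustration (teacher forcing replaces the usual output feedback, and the network sizes, scalings, and target are arbitrary choices): the running inverse-correlation matrix P plays the role of the background error covariance, and P @ x yields the Kalman-like gain.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, T = 100, 500
    J = rng.normal(scale=1.5 / np.sqrt(N), size=(N, N))  # fixed recurrent weights
    u = rng.normal(size=N)                               # feedback/input weights
    w = np.zeros(N)                                      # readout, trained online
    P = np.eye(N)                                        # inverse-correlation estimate
    x = rng.normal(scale=0.1, size=N)                    # reservoir state
    f_prev, errs = 0.0, []

    for t in range(T):
        f = np.sin(2 * np.pi * t / 50.0)                 # target signal
        x = np.tanh(J @ x + u * f_prev)                  # teacher-forced dynamics
        e = w @ x - f                                    # error before the update
        Px = P @ x
        k = Px / (1.0 + x @ Px)                          # Kalman-like gain
        P -= np.outer(k, Px)                             # covariance update
        w -= e * k                                       # RLS readout update
        errs.append(abs(e))
        f_prev = f

    # Online error shrinks as the readout converges
    assert np.mean(errs[-100:]) < np.mean(errs[:100])
    ```

    In full FORCE the network's own output, not the teacher signal, is fed back during training; the RLS algebra is unchanged.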

  12. Learning Based on CC1 and CC4 Neural Networks

    OpenAIRE

    Kak, Subhash

    2017-01-01

    We propose that a general learning system should have three kinds of agents corresponding to sensory, short-term, and long-term memory that implicitly will facilitate context-free and context-sensitive aspects of learning. These three agents perform mutually complementary functions that capture aspects of the human cognition system. We investigate the use of CC1 and CC4 networks for use as models of short-term and sensory memory.

  13. Explaining neural signals in human visual cortex with an associative learning model.

    Science.gov (United States)

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.
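
    The summation test used in such studies amounts to a linear regression of measured responses on hypothesized expectation and surprise regressors. The sketch below is synthetic (the regressor definitions, weights, and noise level are invented for illustration, not taken from the study): responses are generated as a weighted sum of the two signals plus noise, and least squares recovers the weights.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Trials vary in face expectation; the stimulus is a face or not.
    n = 400
    p_face = rng.choice([0.25, 0.75], size=n)      # face expectation per trial
    is_face = rng.random(n) < p_face               # actual stimulus identity

    # Hypothetical regressors in the spirit of the predictive-coding account
    expectation = p_face
    surprise = is_face * (1.0 - p_face)            # face prediction error

    beta = np.array([1.0, 2.0])                    # ground-truth weights
    y = beta[0] * expectation + beta[1] * surprise + 0.1 * rng.normal(size=n)

    X = np.column_stack([expectation, surprise])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    assert np.allclose(beta_hat, beta, atol=0.15)  # weights recovered
    ```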

  14. Learning Perfectly Secure Cryptography to Protect Communications with Adversarial Neural Cryptography

    Directory of Open Access Journals (Sweden)

    Murilo Coutinho

    2018-04-01

    Full Text Available Research in Artificial Intelligence (AI) has achieved many important breakthroughs, especially in recent years. In some cases, AI learns alone from scratch and performs human tasks faster and better than humans. With the recent advances in AI, it is natural to wonder whether Artificial Neural Networks will be used to successfully create or break cryptographic algorithms. A bibliographic review shows that the main approaches to this problem have relied on complex Neural Networks, but without understanding or proving the security of the generated model. This paper presents an analysis of the security of cryptographic algorithms generated by a new technique called Adversarial Neural Cryptography (ANC). Using the proposed network, we show limitations of and directions to improve the current approach of ANC. Training the proposed Artificial Neural Network with the improved model of ANC, we show that artificially intelligent agents can learn the unbreakable One-Time Pad (OTP) algorithm, without human knowledge, to communicate securely through an insecure communication channel. This paper shows in which conditions an AI agent can learn a secure encryption scheme. However, it also shows that, without a stronger adversary, it is more likely to obtain an insecure one.

  15. Learning Perfectly Secure Cryptography to Protect Communications with Adversarial Neural Cryptography.

    Science.gov (United States)

    Coutinho, Murilo; de Oliveira Albuquerque, Robson; Borges, Fábio; García Villalba, Luis Javier; Kim, Tai-Hoon

    2018-04-24

    Research in Artificial Intelligence (AI) has achieved many important breakthroughs, especially in recent years. In some cases, AI learns alone from scratch and performs human tasks faster and better than humans. With the recent advances in AI, it is natural to wonder whether Artificial Neural Networks will be used to successfully create or break cryptographic algorithms. A bibliographic review shows that the main approaches to this problem have relied on complex Neural Networks, but without understanding or proving the security of the generated model. This paper presents an analysis of the security of cryptographic algorithms generated by a new technique called Adversarial Neural Cryptography (ANC). Using the proposed network, we show limitations of and directions to improve the current approach of ANC. Training the proposed Artificial Neural Network with the improved model of ANC, we show that artificially intelligent agents can learn the unbreakable One-Time Pad (OTP) algorithm, without human knowledge, to communicate securely through an insecure communication channel. This paper shows in which conditions an AI agent can learn a secure encryption scheme. However, it also shows that, without a stronger adversary, it is more likely to obtain an insecure one.
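
    For reference, the One-Time Pad itself is a few lines of code: XOR the message with a uniformly random key of equal length that is used exactly once. Under those conditions the scheme is information-theoretically (perfectly) secure, which is what makes it a natural target for the learning experiment. A minimal sketch:

    ```python
    import secrets

    def otp_xor(data: bytes, key: bytes) -> bytes:
        """XOR the data with a key of equal length; XOR is its own inverse,
        so the same function both encrypts and decrypts."""
        assert len(key) == len(data), "OTP requires a key as long as the message"
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"attack at dawn"
    key = secrets.token_bytes(len(message))   # truly random, used once, never reused

    ciphertext = otp_xor(message, key)
    recovered = otp_xor(ciphertext, key)
    assert recovered == message
    ```

    Perfect secrecy collapses as soon as a key byte is reused, which is why key distribution, not the cipher, is the hard part of the OTP.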

  16. Learning Spatiotemporally Encoded Pattern Transformations in Structured Spiking Neural Networks.

    Science.gov (United States)

    Gardner, Brian; Sporea, Ioana; Grüning, André

    2015-12-01

    Information encoding in the nervous system is supported through the precise spike timings of neurons; however, an understanding of the underlying processes by which such representations are formed in the first place remains an open question. Here we examine how multilayered networks of spiking neurons can learn to encode input patterns using a fully temporal coding scheme. To this end, we introduce a new supervised learning rule, MultilayerSpiker, that can train spiking networks containing hidden layer neurons to perform transformations between spatiotemporal input and output spike patterns. The performance of the proposed learning rule is demonstrated in terms of the number of pattern mappings it can learn, the complexity of network structures it can be used on, and its classification accuracy when using multispike-based encodings. In particular, the learning rule displays robustness against input noise and can generalize well on an example data set. Our approach contributes to both a systematic understanding of how computations might take place in the nervous system and a learning rule that displays strong technical capability.

  17. Learning quadratic receptive fields from neural responses to natural stimuli.

    Science.gov (United States)

    Rajan, Kanaka; Marre, Olivier; Tkačik, Gašper

    2013-07-01

    Models of neural responses to stimuli with complex spatiotemporal correlation structure often assume that neurons are selective for only a small number of linear projections of a potentially high-dimensional input. In this review, we explore recent modeling approaches where the neural response depends on the quadratic form of the input rather than on its linear projection, that is, the neuron is sensitive to the local covariance structure of the signal preceding the spike. To infer this quadratic dependence in the presence of an arbitrary (e.g., naturalistic) stimulus distribution, we review several inference methods, focusing in particular on two information theory-based approaches (maximization of stimulus energy and of noise entropy) and two likelihood-based approaches (Bayesian spike-triggered covariance and extensions of generalized linear models). We analyze the formal relationship between the likelihood-based and information-based approaches to demonstrate how they lead to consistent inference. We demonstrate the practical feasibility of these procedures by using model neurons responding to a flickering variance stimulus.
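
    Spike-triggered covariance, one of the approaches reviewed above, can be illustrated on a toy model neuron whose firing depends quadratically on a hidden filter. The sketch below uses hypothetical parameters and white-noise rather than naturalistic stimuli, and recovers the filter from the eigenstructure of the covariance difference:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 20000, 10                      # time bins, stimulus dimensionality
stim = rng.standard_normal((T, D))    # white-noise stimulus snippets

# Hypothetical model neuron: firing rate driven by the squared projection
# onto a hidden filter (a purely quadratic dependence on the input).
w_true = rng.standard_normal(D)
w_true /= np.linalg.norm(w_true)
rate = (stim @ w_true) ** 2
spikes = rng.random(T) < rate / rate.max()

# Spike-triggered covariance: compare the covariance of the
# spike-triggered stimulus ensemble against the prior covariance.
sta_ens = stim[spikes]
dC = np.cov(sta_ens.T) - np.cov(stim.T)

# The leading eigenvector of the covariance difference recovers
# the hidden quadratic filter (up to sign).
eigvals, eigvecs = np.linalg.eigh(dC)
w_est = eigvecs[:, np.argmax(np.abs(eigvals))]
overlap = abs(w_est @ w_true)
```

    For Gaussian stimuli and this quadratic model, the spike-triggered covariance exceeds the prior covariance by roughly 2*w*w^T, so the leading eigenvector aligns with the hidden filter; correlated (naturalistic) stimuli are exactly where the corrected inference methods the record reviews become necessary.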

  18. A Self-Organizing Incremental Neural Network based on local distribution learning.

    Science.gov (United States)

    Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi

    2016-12-01

    In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically and the number of nodes will not grow without limit. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Dissociable neural representations of reinforcement and belief prediction errors underlie strategic learning.

    Science.gov (United States)

    Zhu, Lusha; Mathewson, Kyle E; Hsu, Ming

    2012-01-31

    Decision-making in the presence of other competitive intelligent agents is fundamental for social and economic behavior. Such decisions require agents to behave strategically, where in addition to learning about the rewards and punishments available in the environment, they also need to anticipate and respond to actions of others competing for the same rewards. However, whereas we know much about strategic learning at both theoretical and behavioral levels, we know relatively little about the underlying neural mechanisms. Here, we show using a multi-strategy competitive learning paradigm that strategic choices can be characterized by extending the reinforcement learning (RL) framework to incorporate agents' beliefs about the actions of their opponents. Furthermore, using this characterization to generate putative internal values, we used model-based functional magnetic resonance imaging to investigate neural computations underlying strategic learning. We found that the distinct notions of prediction errors derived from our computational model are processed in a partially overlapping but distinct set of brain regions. Specifically, we found that the RL prediction error was correlated with activity in the ventral striatum. In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning. These results suggest a model of strategic behavior where learning arises from interaction of dissociable reinforcement and belief-based inputs.

  20. Depression-biased reverse plasticity rule is required for stable learning at top-down connections.

    Directory of Open Access Journals (Sweden)

    Kendra S Burbank

    Full Text Available Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body.

  1. Depression-Biased Reverse Plasticity Rule Is Required for Stable Learning at Top-Down Connections

    Science.gov (United States)

    Burbank, Kendra S.; Kreiman, Gabriel

    2012-01-01

    Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body. PMID:22396630
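
    A minimal sketch of the pairwise weight-update window this record describes (illustrative amplitudes and time constant, not fitted values): potentiation when the post-synaptic spike leads, depression when it lags, with the depression side given the larger amplitude to make the rule depression-biased.

```python
import math

def rstdp(dt, a_plus=0.005, a_minus=0.006, tau=20.0):
    """Temporally reversed STDP window (sketch).

    dt = t_pre - t_post in ms, so dt > 0 means the post-synaptic
    spike came first. a_minus > a_plus makes the rule depression-biased.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # post before pre: LTP
    else:
        return -a_minus * math.exp(dt / tau)   # pre before post: LTD
```

    This is the mirror image of the classical window, in which pre-before-post pairings potentiate; the depression bias is what the paper identifies as necessary for stable, diverse top-down weights.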

  2. Neural classifiers for learning higher-order correlations

    International Nuclear Information System (INIS)

    Gueler, M.

    1999-01-01

    Studies by various authors suggest that higher-order networks can be more powerful and biologically more plausible with respect to the more traditional multilayer networks. These architectures make explicit use of nonlinear interactions between input variables in the form of higher-order units or product units. If it is known a priori that the problem to be implemented possesses a given set of invariances like in the translation, rotation, and scale invariant recognition problems, those invariances can be encoded, thus eliminating all higher-order terms which are incompatible with the invariances. In general, however, it is a serious set-back that the complexity of learning increases exponentially with the size of inputs. This paper reviews higher-order networks and introduces an implicit representation in which learning complexity is mainly decided by the number of higher-order terms to be learned and increases only linearly with the input size.

  3. Neural Classifiers for Learning Higher-Order Correlations

    Science.gov (United States)

    Güler, Marifi

    1999-01-01

    Studies by various authors suggest that higher-order networks can be more powerful and are biologically more plausible with respect to the more traditional multilayer networks. These architectures make explicit use of nonlinear interactions between input variables in the form of higher-order units or product units. If it is known a priori that the problem to be implemented possesses a given set of invariances like in the translation, rotation, and scale invariant pattern recognition problems, those invariances can be encoded, thus eliminating all higher-order terms which are incompatible with the invariances. In general, however, it is a serious set-back that the complexity of learning increases exponentially with the size of inputs. This paper reviews higher-order networks and introduces an implicit representation in which learning complexity is mainly decided by the number of higher-order terms to be learned and increases only linearly with the input size.
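
    A second-order unit of the kind both records describe augments the usual weighted sum with explicit product terms x_i * x_j; the classic payoff is that a single such unit computes XOR, which no first-order threshold unit can. A small sketch with hypothetical, hand-chosen weights:

```python
def second_order_unit(x, w_linear, w_pairs, bias=0.0):
    # Linear (first-order) contribution.
    s = bias + sum(w * xi for w, xi in zip(w_linear, x))
    # Explicit second-order product terms x_i * x_j.
    for (i, j), w in w_pairs.items():
        s += w * x[i] * x[j]
    return 1.0 if s > 0 else 0.0

# XOR from one second-order unit, using the product term x0*x1:
w_lin = [1.0, 1.0]
w_pairs = {(0, 1): -2.0}
outputs = [second_order_unit([a, b], w_lin, w_pairs, bias=-0.5)
           for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs == [0.0, 1.0, 1.0, 0.0]
```

    The cost is visible even here: the number of product terms grows rapidly with input size (exponentially once all interaction orders are included), which is the learning-complexity problem the paper's implicit representation is designed to avoid.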

  4. Bridging Cognitive And Neural Aspects Of Classroom Learning

    Science.gov (United States)

    Posner, Michael I.

    2009-11-01

    A major achievement of the first twenty years of neuroimaging is to reveal the brain networks that underlie fundamental aspects of attention, memory and expertise. We examine some principles underlying the activation of these networks. These networks represent key constraints for the design of teaching. Individual differences in these networks reflect a combination of genes and experiences. While acquiring expertise is easier for some than others the importance of effort in its acquisition is a basic principle. Networks are strengthened through exercise, but maintaining interest that produces sustained attention is key to making exercises successful. The state of the brain prior to learning may also represent an important constraint on successful learning and some interventions designed to investigate the role of attention state in learning are discussed. Teaching remains a creative act between instructor and student, but an understanding of brain mechanisms might improve opportunity for success for both participants.

  5. Neural stem cells and neuro/gliogenesis in the central nervous system: understanding the structural and functional plasticity of the developing, mature, and diseased brain.

    Science.gov (United States)

    Yamaguchi, Masahiro; Seki, Tatsunori; Imayoshi, Itaru; Tamamaki, Nobuaki; Hayashi, Yoshitaka; Tatebayashi, Yoshitaka; Hitoshi, Seiji

    2016-05-01

    Neurons and glia in the central nervous system (CNS) originate from neural stem cells (NSCs). Knowledge of the mechanisms of neuro/gliogenesis from NSCs is fundamental to our understanding of how complex brain architecture and function develop. NSCs are present not only in the developing brain but also in the mature brain in adults. Adult neurogenesis likely provides remarkable plasticity to the mature brain. In addition, recent progress in basic research in mental disorders suggests an etiological link with impaired neuro/gliogenesis in particular brain regions. Here, we review the recent progress and discuss future directions in stem cell and neuro/gliogenesis biology by introducing several topics presented at a joint meeting of the Japanese Association of Anatomists and the Physiological Society of Japan in 2015. Collectively, these topics indicated that neuro/gliogenesis from NSCs is a common event occurring in many brain regions at various ages in animals. Given that significant structural and functional changes in cells and neural networks are accompanied by neuro/gliogenesis from NSCs and the integration of newly generated cells into the network, stem cell and neuro/gliogenesis biology provides a good platform from which to develop an integrated understanding of the structural and functional plasticity that underlies the development of the CNS, its remodeling in adulthood, and the recovery from diseases that affect it.

  6. Learning new sequential stepping patterns requires striatal plasticity during the earliest phase of acquisition.

    Science.gov (United States)

    Nakamura, Toru; Nagata, Masatoshi; Yagi, Takeshi; Graybiel, Ann M; Yamamori, Tetsuo; Kitsukawa, Takashi

    2017-04-01

    Animals including humans execute motor behavior to reach their goals. For this purpose, they must choose correct strategies according to environmental conditions and shape many parameters of their movements, including their serial order and timing. To investigate the neurobiology underlying such skills, we used a multi-sensor equipped, motor-driven running wheel with adjustable sequences of foothold pegs on which mice ran to obtain water reward. When the peg patterns changed from a familiar pattern to a new pattern, the mice had to learn and implement new locomotor strategies in order to receive reward. We found that the accuracy of stepping and the achievement of water reward improved with the new learning after changes in the peg-pattern, and c-Fos expression levels assayed after the first post-switch session were high in both dorsolateral striatum and motor cortex, relative to post-switch plateau levels. Combined in situ hybridization and immunohistochemistry of striatal sections demonstrated that both enkephalin-positive (indirect pathway) neurons and substance P-positive (direct pathway) neurons were recruited specifically after the pattern switches, as were interneurons expressing neuronal nitric oxide synthase. When we blocked N-methyl-D-aspartate (NMDA) receptors in the dorsolateral striatum by injecting the NMDA receptor antagonist, D-2-amino-5-phosphonopentanoic acid (AP5), we found delays in early post-switch improvement in performance. These findings suggest that the dorsolateral striatum is activated on detecting shifts in environment to adapt motor behavior to the new context via NMDA-dependent plasticity, and that this plasticity may underlie the forming and breaking of skills and habits, as well as contribute to behavioral difficulties in clinical disorders. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  7. Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model

    Science.gov (United States)

    Panda, Priyadarshini; Srinivasa, Narayan

    2018-01-01

    A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir-based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature action/movements enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models. PMID:29551962

  8. Have we met before? Neural correlates of emotional learning in women with social phobia.

    Science.gov (United States)

    Laeger, Inga; Keuper, Kati; Heitmann, Carina; Kugel, Harald; Dobel, Christian; Eden, Annuschka; Arolt, Volker; Zwitserlood, Pienie; Dannlowski, Udo; Zwanzger, Peter

    2014-05-01

    Altered memory processes are thought to be a key mechanism in the etiology of anxiety disorders, but little is known about the neural correlates of fear learning and memory biases in patients with social phobia. The present study therefore examined whether patients with social phobia exhibit different patterns of neural activation when confronted with recently acquired emotional stimuli. Patients with social phobia and a group of healthy controls learned to associate pseudonames with pictures of persons displaying either a fearful or a neutral expression. The next day, participants read the pseudonames in the magnetic resonance imaging scanner. Afterwards, 2 memory tests were carried out. We enrolled 21 patients and 21 controls in our study. There were no group differences for learning performance, and results of the memory tests were mixed. On a neural level, patients showed weaker amygdala activation than controls for the contrast of names previously associated with fearful versus neutral faces. Social phobia severity was negatively related to amygdala activation. Moreover, a detailed psychophysiological interaction analysis revealed an inverse correlation between disorder severity and frontolimbic connectivity for the emotional > neutral pseudonames contrast. Our sample included only women. Our results support the theory of a disturbed cortico-limbic interplay, even for recently learned emotional stimuli. We discuss the findings with regard to the vigilance-avoidance theory and contrast them to results indicating an oversensitive limbic system in patients with social phobia.

  9. Learning representations for the early detection of sepsis with deep neural networks.

    Science.gov (United States)

    Kam, Hye Jin; Kim, Ha Young

    2017-10-01

    Sepsis is one of the leading causes of death in intensive care unit patients. Early detection of sepsis is vital because mortality increases as the sepsis stage worsens. This study aimed to develop detection models for the early stage of sepsis using deep learning methodologies, and to compare the feasibility and performance of the new deep learning methodology with those of the regression method with conventional temporal feature extraction. Study group selection adhered to the InSight model. The results of the deep learning-based models and the InSight model were compared. With deep feedforward networks, the area under the ROC curve (AUC) of the models were 0.887 and 0.915 for the InSight and the new feature sets, respectively. For the model with the combined feature set, the AUC was the same as that of the basic feature set (0.915). For the long short-term memory model, only the basic feature set was applied and the AUC improved to 0.929 compared with the existing 0.887 of the InSight model. The contributions of this paper can be summarized in three ways: (i) improved performance without feature extraction using domain knowledge, (ii) verification of feature extraction capability of deep neural networks through comparison with reference features, and (iii) improved performance over feedforward neural networks by using long short-term memory, a neural network architecture that can learn sequential patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. A central role for the small GTPase Rac1 in hippocampal plasticity and spatial learning and memory

    DEFF Research Database (Denmark)

    Haditsch, Ursula; Leone, Dino P; Farinelli, Mélissa

    2009-01-01

    in excitatory neurons in the forebrain in vivo not only affects spine structure, but also impairs synaptic plasticity in the hippocampus with consequent defects in hippocampus-dependent spatial learning. Furthermore, Rac1 mutants display deficits in working/episodic-like memory in the delayed matching...

  11. Learning Discloses Abnormal Structural and Functional Plasticity at Hippocampal Synapses in the APP23 Mouse Model of Alzheimer's Disease

    Science.gov (United States)

    Middei, Silvia; Roberto, Anna; Berretta, Nicola; Panico, Maria Beatrice; Lista, Simone; Bernardi, Giorgio; Mercuri, Nicola B.; Ammassari-Teule, Martine; Nistico, Robert

    2010-01-01

    B6-Tg/Thy1APP23Sdz (APP23) mutant mice exhibit neurohistological hallmarks of Alzheimer's disease but show intact basal hippocampal neurotransmission and synaptic plasticity. Here, we examine whether spatial learning differently modifies the structural and electrophysiological properties of hippocampal synapses in APP23 and wild-type mice. While…

  12. Plasticity in the Drosophila larval visual system

    Directory of Open Access Journals (Sweden)

    Abud J Farca-Luna

    2013-07-01

    Full Text Available The remarkable ability of the nervous system to modify its structure and function is mostly modulated by experience and activity. The molecular basis of neuronal plasticity has been studied in higher behavioral processes, such as learning and memory formation. However, neuronal plasticity is not restricted to higher brain functions, but may provide a basic feature of adaptation of all neural circuits. The fruit fly Drosophila melanogaster provides a powerful genetic model to gain insight into the molecular basis of nervous system development and function. The nervous system of the larva is again an order of magnitude simpler than its adult counterpart, allowing the genetic assessment of a number of individual genetically identifiable neurons. We review here recent progress on the genetic basis of neuronal plasticity in developing and functioning neural circuits focusing on the simple visual system of the Drosophila larva.

  13. From phonemes to images : levels of representation in a recurrent neural model of visually-grounded language learning

    NARCIS (Netherlands)

    Gelderloos, L.J.; Chrupala, Grzegorz

    2016-01-01

    We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover

  14. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection.

    Science.gov (United States)

    Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho

    2017-03-01

    Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a large dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
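
    The closed-form step at the heart of an ELM is short enough to sketch. In this minimal single-hidden-layer version (toy data and dimensions; the paper's stacked, CNN-embedded variant is more elaborate), the input-to-hidden weights stay random and only the output weights are solved for with a pseudo-inverse:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 5))                          # toy inputs
targets = (X.sum(axis=1, keepdims=True) > 0).astype(float) # toy labels

n_hidden = 64
W = rng.standard_normal((5, n_hidden))   # random, FIXED input weights
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                   # hidden-layer activations

# The entire "training" step: output weights in closed form via the
# Moore-Penrose pseudo-inverse, no iterative backpropagation.
beta = np.linalg.pinv(H) @ targets
pred = (H @ beta > 0.5).astype(float)
accuracy = (pred == targets).mean()
```

    Because `beta` is obtained in one linear-algebra step, training time is dominated by a single pseudo-inverse; this is the speed advantage the record describes relative to conventional CNN training.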

  15. Deep neural networks for direct, featureless learning through observation: The case of two-dimensional spin models

    Science.gov (United States)

    Mills, Kyle; Tamblyn, Isaac

    2018-03-01

    We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4 ×4 Ising model. Using its success at this task, we motivate the study of the larger 8 ×8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after only seeing a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with equivalent accuracy to the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent; it is able to make predictions with a high degree of accuracy, and do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by looking at the weights learned in a simplified demonstration.
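
    The quantity the network is trained to predict is elementary to compute directly, which is what makes the task a clean benchmark. A sketch of the nearest-neighbor Ising energy with periodic boundaries (sign convention E = -sum of neighboring spin products, coupling J = 1 assumed):

```python
import numpy as np

def ising_energy(spins):
    # Count each nearest-neighbor bond exactly once via the right and
    # down neighbors; np.roll gives periodic boundary conditions.
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -int(np.sum(spins * right + spins * down))

# All-up 4x4 configuration: each of the 2*16 = 32 bonds contributes -1.
config = np.ones((4, 4), dtype=int)
```

    The network's value lies not in replacing this short calculation but in generalizing it to the long-range Hamiltonians described above, where the paper reports predictions 1600 times faster than the CUDA-optimized exact method.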

  16. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    Science.gov (United States)

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.
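
    The dynamic-weight replacement this record describes can be sketched in a few lines (a hypothetical concrete form; the actual dART learning laws are more detailed): the transmitted signal is the rectified difference between a coding node's activation and its adaptive threshold, and thresholds only ever increase during learning.

```python
def dynamic_weight(y, tau):
    # Transmitted signal: rectified difference between the coding
    # node's activation y and its adaptive threshold tau.
    return max(y - tau, 0.0)

def raise_threshold(y, tau, rate=0.1):
    # Thresholds increase monotonically during learning; here they
    # move a fraction of the way toward the current activation
    # (an illustrative stand-in for "atrophy due to disuse").
    return tau + rate * max(y - tau, 0.0)
```

    Although the threshold change is monotonic at the synaptic level, the transmitted signal can rise or fall depending on the test input's activation relative to the threshold, which is the LTP/LTD-like bidirectional behavior the record highlights.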

  17. Differential neural substrates of working memory and cognitive skill learning in healthy young volunteers

    International Nuclear Information System (INIS)

    Cho, Sang Soo; Lee, Eun Ju; Yoon, Eun Jin; Kim, Yu Kyeong; Lee, Won Woo; Kim, Sang Eun

    2005-01-01

    It is known that different neural circuits are involved in working memory and cognitive skill learning that represent explicit and implicit memory functions, respectively. In the present study, we investigated the metabolic correlates of working memory and cognitive skill learning with correlation analysis of FDG PET images. Fourteen right-handed healthy subjects (age, 24 ± 2 yr; 5 males and 9 females) underwent brain FDG PET and neuropsychological testing. Two-back task and weather prediction task were used for the evaluation of working memory and cognitive skill learning, respectively. Correlation between regional glucose metabolism and cognitive task performance was examined using SPM99. A significant positive correlation between 2-back task performance and regional glucose metabolism was found in the prefrontal regions and superior temporal gyri bilaterally. In the first term of the weather prediction task, the task performance correlated positively with glucose metabolism in the bilateral prefrontal areas, left middle temporal and posterior cingulate gyri, and left thalamus. In the second and third terms of the task, the correlation was found in the prefrontal areas, superior temporal and anterior cingulate gyri bilaterally, right insula, left parahippocampal gyrus, and right caudate nucleus. We identified the neural substrates that are related with performance of working memory and cognitive skill learning. These results indicate that brain regions associated with the explicit memory system are recruited in early periods of cognitive skill learning, but additional brain regions including the caudate nucleus are involved in late periods of cognitive skill learning.

  18. Differential neural substrates of working memory and cognitive skill learning in healthy young volunteers

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sang Soo; Lee, Eun Ju; Yoon, Eun Jin; Kim, Yu Kyeong; Lee, Won Woo; Kim, Sang Eun [Seoul National Univ. College of Medicine, Seoul (Korea, Republic of)

    2005-07-01

    It is known that different neural circuits are involved in working memory and cognitive skill learning that represent explicit and implicit memory functions, respectively. In the present study, we investigated the metabolic correlates of working memory and cognitive skill learning with correlation analysis of FDG PET images. Fourteen right-handed healthy subjects (age, 24 {+-} 2 yr; 5 males and 9 females) underwent brain FDG PET and neuropsychological testing. Two-back task and weather prediction task were used for the evaluation of working memory and cognitive skill learning, respectively. Correlation between regional glucose metabolism and cognitive task performance was examined using SPM99. A significant positive correlation between 2-back task performance and regional glucose metabolism was found in the prefrontal regions and superior temporal gyri bilaterally. In the first term of the weather prediction task, the task performance correlated positively with glucose metabolism in the bilateral prefrontal areas, left middle temporal and posterior cingulate gyri, and left thalamus. In the second and third terms of the task, the correlation was found in the prefrontal areas, superior temporal and anterior cingulate gyri bilaterally, right insula, left parahippocampal gyrus, and right caudate nucleus. We identified the neural substrates that are related with performance of working memory and cognitive skill learning. These results indicate that brain regions associated with the explicit memory system are recruited in early periods of cognitive skill learning, but additional brain regions including the caudate nucleus are involved in late periods of cognitive skill learning.

  19. P2X7 Receptors Drive Spine Synapse Plasticity in the Learned Helplessness Model of Depression.

    Science.gov (United States)

    Otrokocsi, Lilla; Kittel, Ágnes; Sperlágh, Beáta

    2017-10-01

Major depressive disorder is characterized by structural and functional abnormalities of cortical and limbic brain areas, including a decrease in spine synapse number in the dentate gyrus of the hippocampus. Recent studies highlighted that both genetic and pharmacological invalidation of the purinergic P2X7 receptor (P2rx7) leads to an antidepressant-like phenotype in animal experiments; however, the impact of P2rx7 on depression-related structural changes in the hippocampus has not yet been clarified. Effects of genetic deletion of P2rx7s on depressive-like behavior and spine synapse density in the dentate gyrus were investigated using the learned helplessness mouse model of depression. We demonstrate that in wild-type animals, inescapable footshocks lead to learned helplessness behavior, reflected in increased latency and number of escape failures to subsequent escapable footshocks. This behavior is accompanied by downregulation of mRNA encoding P2rx7 and a decrease of spine synapse density in the dentate gyrus as determined by electron microscopic stereology. In addition, a decrease in synaptopodin, but not in PSD95 and NR2B/GluN2B protein levels, was also observed under these conditions. Whereas the absence of P2rx7 was characterized by an escape deficit, no learned helplessness behavior was observed in these animals. Likewise, no decrease in spine synapse number or synaptopodin protein levels was detected in response to inescapable footshocks in P2rx7-deficient animals. Our findings suggest endogenous activation of P2rx7s in the learned helplessness model of depression; the decreased plasticity of spine synapses in P2rx7-deficient mice might explain the resistance of these animals to repeated stressful stimuli. © The Author 2017. Published by Oxford University Press on behalf of CINP.

  20. Learning to read words in a new language shapes the neural organization of the prior languages.

    Science.gov (United States)

    Mei, Leilei; Xue, Gui; Lu, Zhong-Lin; Chen, Chuansheng; Zhang, Mingxia; He, Qinghua; Wei, Miao; Dong, Qi

    2014-12-01

Learning a new language entails interactions with one's prior language(s). Much research has shown how the native language affects the cognitive and neural mechanisms of a new language, but little is known about whether and how learning a new language shapes the neural mechanisms of prior language(s). In two experiments in the current study, we used an artificial language training paradigm in combination with fMRI to examine (1) the effects of different linguistic components (phonology and semantics) of a new language on the neural processing of prior languages (i.e., native and second languages), and (2) whether such effects were modulated by the proficiency level in the new language. Results of Experiment 1 showed that when the training in a new language involved semantics (as opposed to only visual forms and phonology), neural activity during word reading in the native language (Chinese) was reduced in several reading-related regions, including the left pars opercularis, pars triangularis, bilateral inferior temporal gyrus, fusiform gyrus, and inferior occipital gyrus. Results of Experiment 2 replicated the results of Experiment 1 and further found that semantic training also affected neural activity during word reading in the subjects' second language (English). Furthermore, we found that the effects of the new language were modulated by the subjects' proficiency level in the new language. These results provide critical imaging evidence for the influence of learning to read words in a new language on word reading in native and second languages. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Adaptive neural network/expert system that learns fault diagnosis for different structures

    Science.gov (United States)

    Simon, Solomon H.

    1992-08-01

Corporations need better real-time monitoring and control systems to improve productivity by watching quality and increasing production flexibility. The innovative technology to achieve this goal is evolving in the form of artificial intelligence and neural networks applied to sensor processing, fusion, and interpretation. By using these advanced AI techniques, we can leverage existing systems and add value to conventional techniques. Neural networks and knowledge-based expert systems can be combined into intelligent sensor systems which provide real-time monitoring, control, evaluation, and fault diagnosis for production systems. Neural network-based intelligent sensor systems are more reliable because they can provide continuous, non-destructive monitoring and inspection. Use of neural networks can result in sensor fusion and the ability to model highly non-linear systems. Improved models can provide a foundation for more accurate performance parameters and predictions. We discuss a research software/hardware prototype which integrates neural networks, expert systems, and sensor technologies and which can adapt across a variety of structures to perform fault diagnosis. The flexibility and adaptability of the prototype in learning two structures is presented. Potential applications are discussed.

  2. Age-related difference in the effective neural connectivity associated with probabilistic category learning

    International Nuclear Information System (INIS)

    Yoon, Eun Jin; Cho, Sang Soo; Kim, Hee Jung; Bang, Seong Ae; Park, Hyun Soo; Kim, Yu Kyeong; Kim, Sang Eun

    2007-01-01

Although it is well known that explicit memory is affected by deleterious age-related changes in the brain, the effect of aging on implicit memory, such as probabilistic category learning (PCL), is not clear. To identify the effect of aging on the neural interactions underlying successful PCL, we investigated the neural substrates of PCL and the age-related changes in the neural network between these brain regions. 23 young (age, 25±2 y; 11 males) and 14 elderly (67±3 y; 7 males) healthy subjects underwent FDG PET during a resting state and a 150-trial weather prediction (WP) task. Correlations between the WP hit rates and regional glucose metabolism were assessed using SPM2. The connectivity model differed significantly between groups (χ²diff(37) = 142.47, P < 0.005); systematic comparisons of each path revealed that the frontal crosscallosal and the frontal-to-parahippocampal connections were most responsible for the model differences (P < 0.05). For successful PCL, the elderly recruit the basal ganglia implicit memory system, but MTL recruitment differs from the young. The inadequate MTL correlation pattern in the elderly may be caused by changes in the neural pathways related to explicit memory. These neural changes can explain the decreased PCL performance in elderly subjects

  3. Neural coding of basic reward terms of animal learning theory, game theory, microeconomics and behavioural ecology.

    Science.gov (United States)

    Schultz, Wolfram

    2004-04-01

    Neurons in a small number of brain structures detect rewards and reward-predicting stimuli and are active during the expectation of predictable food and liquid rewards. These neurons code the reward information according to basic terms of various behavioural theories that seek to explain reward-directed learning, approach behaviour and decision-making. The involved brain structures include groups of dopamine neurons, the striatum including the nucleus accumbens, the orbitofrontal cortex and the amygdala. The reward information is fed to brain structures involved in decision-making and organisation of behaviour, such as the dorsolateral prefrontal cortex and possibly the parietal cortex. The neural coding of basic reward terms derived from formal theories puts the neurophysiological investigation of reward mechanisms on firm conceptual grounds and provides neural correlates for the function of rewards in learning, approach behaviour and decision-making.

  4. Growing adaptive machines combining development and learning in artificial neural networks

    CERN Document Server

    Bredeche, Nicolas; Doursat, René

    2014-01-01

    The pursuit of artificial intelligence has been a highly active domain of research for decades, yielding exciting scientific insights and productive new technologies. In terms of generating intelligence, however, this pursuit has yielded only limited success. This book explores the hypothesis that adaptive growth is a means of moving forward. By emulating the biological process of development, we can incorporate desirable characteristics of natural neural systems into engineered designs, and thus move closer towards the creation of brain-like systems. The particular focus is on how to design artificial neural networks for engineering tasks. The book consists of contributions from 18 researchers, ranging from detailed reviews of recent domains by senior scientists, to exciting new contributions representing the state of the art in machine learning research. The book begins with broad overviews of artificial neurogenesis and bio-inspired machine learning, suitable both as an introduction to the domains and as a...

  5. Optimal Search Strategy of Robotic Assembly Based on Neural Vibration Learning

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2011-01-01

Full Text Available This paper presents the implementation of an optimal search strategy (OSS) in verification of an assembly process based on neural vibration learning. The application problem is the complex robotic assembly of miniature parts, exemplified by mating the gears of a multistage planetary speed reducer. Assembly of the tube over the planetary gears was identified as the most difficult step of the overall assembly. The favourable influence of vibration and rotational movement on tolerance compensation was also observed. With the proposed neural-network-based learning algorithm, it is possible to find an extended scope of the vibration state parameters. Using an optimal search strategy based on the minimal-distance path between vibration parameter stage sets (amplitude and frequencies of the robot's gripper vibration) and a recovery parameter algorithm, we can improve the robot assembly behaviour, that is, allow the fastest possible way of mating. We have verified using simulation programs that the search strategy is suitable for situations of unexpected events due to uncertainties.

  6. Neural signatures of second language learning and control.

    Science.gov (United States)

    Bartolotti, James; Bradley, Kailyn; Hernandez, Arturo E; Marian, Viorica

    2017-04-01

    Experience with multiple languages has unique effects on cortical structure and information processing. Differences in gray matter density and patterns of cortical activation are observed in lifelong bilinguals compared to monolinguals as a result of their experience managing interference across languages. Monolinguals who acquire a second language later in life begin to encounter the same type of linguistic interference as bilinguals, but with a different pre-existing language architecture. The current study used functional magnetic resonance imaging to explore the beginning stages of second language acquisition and cross-linguistic interference in monolingual adults. We found that after English monolinguals learned novel Spanish vocabulary, English and Spanish auditory words led to distinct patterns of cortical activation, with greater recruitment of posterior parietal regions in response to English words and of left hippocampus in response to Spanish words. In addition, cross-linguistic interference from English influenced processing of newly-learned Spanish words, decreasing hippocampus activity. Results suggest that monolinguals may rely on different memory systems to process a newly-learned second language, and that the second language system is sensitive to native language interference. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Supervised Learning Based on Temporal Coding in Spiking Neural Networks.

    Science.gov (United States)

    Mostafa, Hesham

    2017-08-01

    Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.
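The differentiable input-output relation this abstract describes can be made concrete. The sketch below assumes the paper's setting of non-leaky integrate-and-fire neurons with exponentially decaying synaptic kernels, where after the change of variables z = exp(t) the first output spike time is a piecewise-linear function of the input z values. The function name and the brute-force causal-set search are my own illustrative choices, not code from the paper:

```python
import numpy as np

def first_spike_time(in_times, weights):
    """First spike time of one output neuron, computed in the z = exp(t) domain.

    For a causal set C of input spikes, the spike-time relation is
        z_out = sum_{i in C} w_i * z_i / (sum_{i in C} w_i - 1),
    valid when z_out falls between the last spike in C and the next input spike.
    """
    z = np.exp(np.asarray(in_times, dtype=float))
    order = np.argsort(z)
    z_sorted = z[order]
    w_sorted = np.asarray(weights, dtype=float)[order]
    num, den = 0.0, -1.0          # den accumulates sum(w) - 1
    for i in range(len(z_sorted)):
        num += w_sorted[i] * z_sorted[i]
        den += w_sorted[i]
        if den > 0.0:
            z_out = num / den
            nxt = z_sorted[i + 1] if i + 1 < len(z_sorted) else np.inf
            if z_sorted[i] <= z_out < nxt:
                return float(np.log(z_out))   # back to the time domain
    return np.inf                  # neuron never reaches threshold
```

Within each causal set, z_out is linear in the input z values, so gradients pass through exactly as in a piecewise-linear network, which is what makes standard backpropagation applicable to such spiking networks.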

  8. Distributed Cerebellar Motor Learning; a Spike-Timing-Dependent Plasticity Model

    Directory of Open Access Journals (Sweden)

    Niceto Rafael Luque

    2016-03-01

Full Text Available Deep cerebellar nuclei neurons receive both inhibitory (GABAergic) synaptic currents from Purkinje cells (within the cerebellar cortex) and excitatory (glutamatergic) synaptic currents from mossy fibres. These two deep cerebellar nucleus inputs are thought to be also adaptive, embedding interesting properties in the framework of accurate movements. We show that distributed spike-timing-dependent plasticity (STDP) mechanisms located at different cerebellar sites (parallel fibres to Purkinje cells, mossy fibres to deep cerebellar nucleus cells, and Purkinje cells to deep cerebellar nucleus cells) in closed-loop simulations provide an explanation for the complex learning properties of the cerebellum in motor learning. Concretely, we propose a new mechanistic cerebellar spiking model. In this new model, deep cerebellar nuclei embed a dual functionality: deep cerebellar nuclei acting as a gain adaptation mechanism and as a facilitator for the slow memory consolidation at mossy fibre to deep cerebellar nucleus synapses. Equipping the cerebellum with excitatory (e-STDP) and inhibitory (i-STDP) mechanisms at deep cerebellar nuclei afferents allows the accommodation of synaptic memories that were formed at parallel fibre to Purkinje cell synapses and then transferred to mossy fibre to deep cerebellar nucleus synapses. These adaptive mechanisms also contribute to modulating the deep-cerebellar-nucleus output firing rate (output gain modulation) towards optimising its working range.

  9. Research progress on the roles of microRNAs in governing synaptic plasticity, learning and memory.

    Science.gov (United States)

    Wei, Chang-Wei; Luo, Ting; Zou, Shan-Shan; Wu, An-Shi

    2017-11-01

The importance of non-coding RNAs in biological processes has become apparent in recent years, and their mechanisms of transcriptional regulation have also been identified. MicroRNAs (miRNAs) represent a class of small regulatory non-coding RNAs, approximately 22 nucleotides in length, that mediate gene silencing by recognizing specific sequences in the target messenger RNAs (mRNAs). Many miRNAs are highly expressed in the central nervous system in a spatially and temporally controlled manner in normal physiology, as well as in certain pathological conditions. There is growing evidence that a considerable number of specific miRNAs play important roles in synaptic plasticity, learning and memory function. In addition, the dysfunction of these molecules may also contribute to the etiology of several neurodegenerative diseases. Here we provide an overview of the current literature, which supports the view that non-coding RNA-mediated gene regulation represents an important but underappreciated layer of epigenetic control that facilitates learning and memory functions. Copyright © 2017. Published by Elsevier Inc.

  10. Dopaminergic mesocortical projections to M1: role in motor learning and motor cortex plasticity

    Directory of Open Access Journals (Sweden)

    Jonas Aurel Hosp

    2013-10-01

Full Text Available Although the architecture of a dopaminergic (DA) system within the primary motor cortex (M1) was well characterized anatomically, its functional significance remained obscure for a long time. Recent studies in rats revealed that the integrity of dopaminergic fibers in M1 is a prerequisite for successful acquisition of motor skills. This essential contribution of DA to motor learning is plausible as it modulates M1 circuitry at multiple levels, thereby promoting the plastic changes that are required for information storage: at the network level, DA increases cortical excitability and enhances the stability of motor maps. At the cellular level, DA induces the expression of learning-related genes via the transcription factor c-fos. At the level of synapses, DA is required for the formation of long-term potentiation (LTP), a mechanism that likely is a fingerprint of a motor memory trace within M1. Dopaminergic fibers innervating M1 originate within the midbrain, precisely the ventral tegmental area (VTA) and the medial portion of the substantia nigra (SN). Thus, they could be part of the meso-cortico-limbic pathway, a network that provides information about the saliency and motivational value of an external stimulus and is commonly referred to as

  11. Machine learning of radial basis function neural network based on Kalman filter: Introduction

    Directory of Open Access Journals (Sweden)

    Vuković Najdan L.

    2014-01-01

    Full Text Available This paper analyzes machine learning of radial basis function neural network based on Kalman filtering. Three algorithms are derived: linearized Kalman filter, linearized information filter and unscented Kalman filter. We emphasize basic properties of these estimation algorithms, demonstrate how their advantages can be used for optimization of network parameters, derive mathematical models and show how they can be applied to model problems in engineering practice.
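As a rough illustration of the idea in this record: with fixed RBF centers and widths, the network output is linear in its output weights, so the ordinary (non-linearized) Kalman filter estimates them exactly. This is my own minimal sketch, not the paper's code; the noise parameters and function names are illustrative assumptions:

```python
import numpy as np

def rbf_features(x, centers, width):
    # Gaussian basis-function activations for a scalar input x.
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def kalman_train_rbf(xs, ys, centers, width, q=1e-6, r=1e-2):
    # Sequential estimation of the RBF output weights by a Kalman filter:
    # the state is the weight vector, and the feature vector of the current
    # sample acts as the measurement matrix.
    n = len(centers)
    w = np.zeros(n)              # weight estimate (state mean)
    P = 10.0 * np.eye(n)         # state covariance
    for x, y in zip(xs, ys):
        h = rbf_features(x, centers, width)
        P = P + q * np.eye(n)               # process-noise inflation
        s = h @ P @ h + r                   # innovation variance
        k = P @ h / s                       # Kalman gain
        w = w + k * (y - h @ w)             # measurement update
        P = P - np.outer(k, h @ P)          # covariance update
    return w
```

When centers and widths are adapted as well, the measurement model becomes nonlinear in the state, which is where the linearized and unscented variants discussed in the paper come in.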

  12. DeepNet: An Ultrafast Neural Learning Code for Seismic Imaging

    International Nuclear Information System (INIS)

    Barhen, J.; Protopopescu, V.; Reister, D.

    1999-01-01

    A feed-forward multilayer neural net is trained to learn the correspondence between seismic data and well logs. The introduction of a virtual input layer, connected to the nominal input layer through a special nonlinear transfer function, enables ultrafast (single iteration), near-optimal training of the net using numerical algebraic techniques. A unique computer code, named DeepNet, has been developed, that has achieved, in actual field demonstrations, results unattainable to date with industry standard tools

  13. A Comparative Classification of Wheat Grains for Artificial Neural Network and Extreme Learning Machine

    OpenAIRE

    ASLAN, Muhammet Fatih; SABANCI, Kadir; YİĞİT, Enes; KAYABAŞI, Ahmet; TOKTAŞ, Abdurrahim; DUYSAK, Hüseyin

    2018-01-01

In this study, classification of two types of wheat grains into bread and durum was carried out. The species of wheat grains in this dataset are bread and durum, and these species have equal samples in the dataset, 100 instances each. Seven features, including width, height, area, perimeter, roundness, width and perimeter/area, were extracted from each wheat grain. Classification was separately conducted by Artificial Neural Network (ANN) and Extreme Learning Machine (ELM) artificial intelligence techniques...
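The ELM technique named in this record trains only the output layer: hidden-layer weights are drawn at random and frozen, and the output weights are obtained in closed form by a least-squares fit. The sketch below is my own illustration of that scheme, using a synthetic two-class toy set in place of the wheat-grain features:

```python
import numpy as np

def elm_train(X, y, n_hidden=20, seed=0):
    # Extreme Learning Machine: random, untrained hidden layer; the linear
    # output layer is fit in closed form via the Moore-Penrose pseudoinverse.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden activations
    beta = np.linalg.pinv(H) @ y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only a pseudoinverse is computed, training is a single pass with no iterative optimization, which is the speed advantage ELM typically shows over backpropagation-trained ANNs in comparisons like this one.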

  14. Identification of chaotic systems by neural network with hybrid learning algorithm

    International Nuclear Information System (INIS)

    Pan, S.-T.; Lai, C.-C.

    2008-01-01

    Based on the genetic algorithm (GA) and steepest descent method (SDM), this paper proposes a hybrid algorithm for the learning of neural networks to identify chaotic systems. The systems in question are the logistic map and the Duffing equation. Different identification schemes are used to identify both the logistic map and the Duffing equation, respectively. Simulation results show that our hybrid algorithm is more efficient than that of other methods
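The hybrid scheme described above combines a global genetic search with local steepest-descent refinement. Below is a hedged sketch of that idea, not the authors' algorithm: a simplified GA (elitism plus mutation, no crossover), numerical gradients for the descent phase, and a small tanh network identifying the logistic map; all names and hyperparameters are illustrative:

```python
import numpy as np

def logistic_series(n, x0=0.3, r=4.0):
    # Chaotic logistic-map time series used as the identification target.
    xs = np.empty(n)
    xs[0] = x0
    for t in range(n - 1):
        xs[t + 1] = r * xs[t] * (1.0 - xs[t])
    return xs

H = 5  # hidden units; the parameter vector packs [w1, b1, w2, b2]

def net(params, x):
    w1, b1 = params[:H], params[H:2 * H]
    w2, b2 = params[2 * H:3 * H], params[3 * H]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def mse(params, x, y):
    return float(np.mean((net(params, x) - y) ** 2))

def ga_sdm_fit(x, y, pop=40, gens=60, steps=200, lr=0.02, seed=0):
    rng = np.random.default_rng(seed)
    dim = 3 * H + 1
    population = rng.normal(0.0, 1.0, (pop, dim))
    for _ in range(gens):                       # GA phase: elitism + mutation
        fit = np.array([mse(p, x, y) for p in population])
        elite = population[np.argsort(fit)[:pop // 4]]
        kids = elite[rng.integers(0, len(elite), pop - len(elite))]
        population = np.vstack([elite, kids + rng.normal(0.0, 0.2, kids.shape)])
    ga_best = min(population, key=lambda p: mse(p, x, y))
    best = ga_best.copy()
    for _ in range(steps):                      # SDM phase: numerical gradient descent
        grad = np.zeros(dim)
        for i in range(dim):
            e = np.zeros(dim)
            e[i] = 1e-5
            grad[i] = (mse(best + e, x, y) - mse(best - e, x, y)) / 2e-5
        best -= lr * grad
    # keep whichever phase ended better
    return best if mse(best, x, y) < mse(ga_best, x, y) else ga_best
```

The division of labor mirrors the paper's motivation: the GA explores the weight space globally to escape poor basins, and steepest descent then exploits the local smoothness of the error surface.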

  15. Neural mechanisms of human perceptual learning: electrophysiological evidence for a two-stage process.

    Science.gov (United States)

    Hamamé, Carlos M; Cosmelli, Diego; Henriquez, Rodrigo; Aboitiz, Francisco

    2011-04-26

    Humans and other animals change the way they perceive the world due to experience. This process has been labeled as perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed. We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity and the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance as reflected by an increase in sensitivity (d') and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment revealing amplitude increments, specific and unspecific to the trained stimulus, in event-related potential (ERP) components N2pc and P3 respectively. P3 unspecific modification can be related to context or task-based learning, while N2pc may be reflecting a more specific attentional-related boosting of target detection. Moreover, bell and U-shaped profiles of oscillatory brain activity in gamma (30-60 Hz) and alpha (8-14 Hz) frequency bands may suggest the existence of two phases for learning acquisition, which can be understood as distinctive optimization mechanisms in stimulus processing. We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.

  16. Neutralization of Nogo-A Enhances Synaptic Plasticity in the Rodent Motor Cortex and Improves Motor Learning in Vivo

    Science.gov (United States)

    Weinmann, Oliver; Kellner, Yves; Yu, Xinzhu; Vicente, Raul; Gullo, Miriam; Kasper, Hansjörg; Lussi, Karin; Ristic, Zorica; Luft, Andreas R.; Rioult-Pedotti, Mengia; Zuo, Yi; Zagrebelsky, Marta; Schwab, Martin E.

    2014-01-01

    The membrane protein Nogo-A is known as an inhibitor of axonal outgrowth and regeneration in the CNS. However, its physiological functions in the normal adult CNS remain incompletely understood. Here, we investigated the role of Nogo-A in cortical synaptic plasticity and motor learning in the uninjured adult rodent motor cortex. Nogo-A and its receptor NgR1 are present at cortical synapses. Acute treatment of slices with function-blocking antibodies (Abs) against Nogo-A or against NgR1 increased long-term potentiation (LTP) induced by stimulation of layer 2/3 horizontal fibers. Furthermore, anti-Nogo-A Ab treatment increased LTP saturation levels, whereas long-term depression remained unchanged, thus leading to an enlarged synaptic modification range. In vivo, intrathecal application of Nogo-A-blocking Abs resulted in a higher dendritic spine density at cortical pyramidal neurons due to an increase in spine formation as revealed by in vivo two-photon microscopy. To investigate whether these changes in synaptic plasticity correlate with motor learning, we trained rats to learn a skilled forelimb-reaching task while receiving anti-Nogo-A Abs. Learning of this cortically controlled precision movement was improved upon anti-Nogo-A Ab treatment. Our results identify Nogo-A as an influential molecular modulator of synaptic plasticity and as a regulator for learning of skilled movements in the motor cortex. PMID:24966370

  17. Statistical Discriminability Estimation for Pattern Classification Based on Neural Incremental Attribute Learning

    DEFF Research Database (Denmark)

    Wang, Ting; Guan, Sheng-Uei; Puthusserypady, Sadasivan

    2014-01-01

    Feature ordering is a significant data preprocessing method in Incremental Attribute Learning (IAL), a novel machine learning approach which gradually trains features according to a given order. Previous research has shown that, similar to feature selection, feature ordering is also important based...... estimation. Moreover, a criterion that summarizes all the produced values of AD is employed with a GA (Genetic Algorithm)-based approach to obtain the optimum feature ordering for classification problems based on neural networks by means of IAL. Compared with the feature ordering obtained by other approaches...

  18. HIERtalker: A default hierarchy of high order neural networks that learns to read English aloud

    Energy Technology Data Exchange (ETDEWEB)

    An, Z.G.; Mniszewski, S.M.; Lee, Y.C.; Papcun, G.; Doolen, G.D.

    1988-01-01

A new learning algorithm based on a default hierarchy of high order neural networks has been developed that is able to generalize as well as handle exceptions. It learns the "building blocks" or clusters of symbols in a stream that appear repeatedly and convey certain messages. The default hierarchy prevents a combinatoric explosion of rules. A simulator of such a hierarchy, HIERtalker, has been applied to the conversion of English words to phonemes. Achieved accuracy is 99% for trained words and ranges from 76% to 96% for sets of new words. 8 refs., 4 figs., 1 tab.

  19. A Model to Explain the Emergence of Reward Expectancy neurons using Reinforcement Learning and Neural Network

    OpenAIRE

    Shinya, Ishii; Munetaka, Shidara; Katsunari, Shibata

    2006-01-01

In an experiment with a multi-trial task to obtain a reward, reward expectancy neurons, which responded only in the non-reward trials that are necessary to advance toward the reward, have been observed in the anterior cingulate cortex of monkeys. In this paper, to explain the emergence of the reward expectancy neuron in terms of reinforcement learning theory, a model that consists of a recurrent neural network trained based on reinforcement learning is proposed. The analysis of the hi...

  20. Research of Dynamic Competitive Learning in Neural Networks

    Institute of Scientific and Technical Information of China (English)

    PAN Hao; CEN Li; ZHONG Luo

    2005-01-01

We introduce a method for generating new units within a cluster and an algorithm for generating new clusters. The model automatically builds up its dynamically growing internal representation structure during the learning process. Compared with other typical classification algorithms such as Kohonen's self-organizing map, the model realizes a multilevel classification of the input pattern with optional accuracy and gives strong support for a parallel computational main processor. The idea is suitable for the high-level storage of complex data structures for object recognition.

  1. Identification of neural connectivity signatures of autism using machine learning

    Directory of Open Access Journals (Sweden)

    Gopikrishna eDeshpande

    2013-10-01

Full Text Available Alterations in neural connectivity have been suggested as a signature of the pathobiology of autism. Although disrupted correlation between cortical regions observed from functional MRI is considered to be an explanatory model for autism, the directional causal influence between brain regions is a vital link missing in these studies. The current study focuses on addressing this in an fMRI study of Theory-of-Mind in 15 high-functioning adolescents and adults with autism (ASD) and 15 typically developing (TD) controls. Participants viewed a series of comic strip vignettes in the MRI scanner and were asked to choose the most logical end to the story from three alternatives, separately for trials involving physical and intentional causality. Causal brain connectivity obtained from a multivariate autoregressive model, along with assessment scores, functional connectivity values, and fractional anisotropy obtained from DTI data for each participant, were submitted to a recursive cluster elimination based support vector machine classifier to determine the accuracy with which the classifier can predict a novel participant's group membership (ASD or TD). We found a maximum classification accuracy of 95.9% with 19 features which had the highest discriminative ability between the groups. All of the 19 features were effective connectivity paths, indicating that causal information may be critical in discriminating between ASD and TD groups. These effective connectivity paths were also found to be significantly greater in controls as compared to ASD participants and consisted predominantly of outputs from the fusiform face area and middle temporal gyrus, indicating impaired connectivity in ASD participants, particularly in the social brain areas. These findings collectively point towards the fact that alterations in causal brain connectivity in individuals with ASD could serve as a potential non-invasive neuroimaging signature for autism.

  2. Modeling the behavioral substrates of associate learning and memory - Adaptive neural models

    Science.gov (United States)

    Lee, Chuen-Chien

    1991-01-01

    Three adaptive single-neuron models based on neural analogies of behavior modification episodes are proposed, which attempt to bridge the gap between psychology and neurophysiology. The proposed models capture the predictive nature of Pavlovian conditioning, which is essential to the theory of adaptive/learning systems. The models learn to anticipate the occurrence of a conditioned response before the presence of a reinforcing stimulus when training is complete. Furthermore, each model can find the most nonredundant and earliest predictor of reinforcement. The behavior of the models accounts for several aspects of basic animal learning phenomena in Pavlovian conditioning beyond previous related models. Computer simulations show how well the models fit empirical data from various animal learning paradigms.

  3. PKM-ζ is not required for hippocampal synaptic plasticity, learning and memory.

    Science.gov (United States)

    Volk, Lenora J; Bachman, Julia L; Johnson, Richard; Yu, Yilin; Huganir, Richard L

    2013-01-17

    Long-term potentiation (LTP), a well-characterized form of synaptic plasticity, has long been postulated as a cellular correlate of learning and memory. Although LTP can persist for long periods of time, the mechanisms underlying LTP maintenance, in the midst of ongoing protein turnover and synaptic activity, remain elusive. Sustained activation of the brain-specific protein kinase C (PKC) isoform protein kinase M-ζ (PKM-ζ) has been reported to be necessary for both LTP maintenance and long-term memory. Inhibiting PKM-ζ activity using a synthetic zeta inhibitory peptide (ZIP) based on the PKC-ζ pseudosubstrate sequence reverses established LTP in vitro and in vivo. More notably, infusion of ZIP eliminates memories for a growing list of experience-dependent behaviours, including active place avoidance, conditioned taste aversion, fear conditioning and spatial learning. However, most of the evidence supporting a role for PKM-ζ in LTP and memory relies heavily on pharmacological inhibition of PKM-ζ by ZIP. To further investigate the involvement of PKM-ζ in the maintenance of LTP and memory, we generated transgenic mice lacking PKC-ζ and PKM-ζ. We find that both conventional and conditional PKC-ζ/PKM-ζ knockout mice show normal synaptic transmission and LTP at Schaffer collateral-CA1 synapses, and have no deficits in several hippocampal-dependent learning and memory tasks. Notably, ZIP still reverses LTP in PKC-ζ/PKM-ζ knockout mice, indicating that the effects of ZIP are independent of PKM-ζ.

  4. Plasticity in the adult language system: a longitudinal electrophysiological study on second language learning.

    Science.gov (United States)

    Stein, M; Dierks, T; Brandeis, D; Wirth, M; Strik, W; Koenig, T

    2006-11-01

    Event-related potentials (ERPs) were used to trace changes in brain activity related to progress in second language learning. Twelve English-speaking exchange students learning German in Switzerland were recruited. ERPs to visually presented single words from the subjects' native language (English), second language (German) and an unknown language (Romansh) were measured before (day 1) and after (day 2) 5 months of intense German language learning. When comparing ERPs to German words from day 1 and day 2, we found topographic differences between 396 and 540 ms. These differences could be interpreted as a latency shift indicating faster processing of German words on day 2. Source analysis indicated that the topographic differences were accounted for by shorter activation of left inferior frontal gyrus (IFG) on day 2. In ERPs to English words, we found Global Field Power differences between 472 and 644 ms. This may be due to memory traces related to English words being less easily activated on day 2. Alternatively, it might reflect the fact that, with German words becoming familiar on day 2, English words lose their oddball character and thus produce a weaker P300-like effect on day 2. In ERPs to Romansh words, no differences were observed. Our results reflect plasticity in the neuronal networks underlying second language acquisition. They indicate that with a higher level of second language proficiency, second language word processing is faster and requires shorter frontal activation. Thus, our results suggest that the reduced IFG activation found in previous fMRI studies might not reflect a generally lower activation but rather a shorter duration of activity.

  5. Loss of FMRP Impaired Hippocampal Long-Term Plasticity and Spatial Learning in Rats

    Directory of Open Access Journals (Sweden)

    Yonglu Tian

    2017-08-01

    Full Text Available Fragile X syndrome (FXS) is a neurodevelopmental disorder caused by mutations in the FMR1 gene that inactivate expression of the gene product, the fragile X mental retardation 1 protein (FMRP). In this study, we used clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9) technology to generate Fmr1 knockout (KO) rats by disruption of the fourth exon of the Fmr1 gene. Western blotting analysis confirmed that FMRP was absent from the brains of the Fmr1 KO rats (Fmr1exon4-KO). Electrophysiological analysis revealed that theta-burst stimulation (TBS)-induced long-term potentiation (LTP) and low-frequency stimulus (LFS)-induced long-term depression (LTD) were decreased in the hippocampal Schaffer collateral pathway of the Fmr1exon4-KO rats. Short-term plasticity, measured as the paired-pulse ratio, remained normal in the KO rats. The synaptic strength mediated by the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) was also impaired. Consistent with previous reports, the Fmr1exon4-KO rats demonstrated an enhanced 3,5-dihydroxyphenylglycine (DHPG)-induced LTD in the present study, and this enhancement is insensitive to inhibition of protein translation. In addition, the Fmr1exon4-KO rats showed deficits in the probe trial in the Morris water maze test. These results demonstrate that deletion of the Fmr1 gene in rats specifically impairs long-term synaptic plasticity and hippocampus-dependent learning in a manner resembling the key symptoms of FXS. Furthermore, the Fmr1exon4-KO rats displayed impaired social interaction and macroorchidism, findings consistent with those observed in patients with FXS. Thus, Fmr1exon4-KO rats constitute a novel rat model of FXS that complements existing mouse models.

  7. Enhancement of extinction learning attenuates ethanol-seeking behavior and alters plasticity in the prefrontal cortex.

    Science.gov (United States)

    Gass, Justin T; Trantham-Davidson, Heather; Kassab, Amanda S; Glen, William B; Olive, M Foster; Chandler, L Judson

    2014-05-28

    Addiction is a chronic relapsing disorder in which relapse is often initiated by exposure to drug-related cues. The present study examined the effects of mGluR5 activation on extinction of ethanol-cue-maintained responding, relapse-like behavior, and neuronal plasticity. Rats were trained to self-administer ethanol and then exposed to extinction training during which they were administered either vehicle or the mGluR5 positive allosteric modulator 3-cyano-N-(1,3-diphenyl-1H-pyrazol-5-yl) or CDPPB. CDPPB treatment reduced active lever responding during extinction, decreased the total number of extinction sessions required to meet criteria, and attenuated cue-induced reinstatement of ethanol seeking. CDPPB facilitation of extinction was blocked by the local infusion of the mGluR5 antagonist 3-((2-methyl-4-thiazolyl)ethynyl) pyridine into the infralimbic (IfL) cortex, but had no effect when infused into the prelimbic (PrL) cortex. Analysis of dendritic spines revealed alterations in structural plasticity, whereas electrophysiological recordings demonstrated differential alterations in glutamatergic neurotransmission in the PrL and IfL cortex. Extinction was associated with increased amplitude of evoked synaptic PrL and IfL NMDA currents but reduced amplitude of PrL AMPA currents. Treatment with CDPPB prevented the extinction-induced enhancement of NMDA currents in PrL without affecting NMDA currents in the IfL. Whereas CDPPB treatment did not alter the amplitude of PrL or IfL AMPA currents, it did promote the expression of IfL calcium-permeable GluR2-lacking receptors in both abstinence- and extinction-trained rats, but had no effect in ethanol-naive rats. These results confirm changes in the PrL and IfL cortex in glutamatergic neurotransmission during extinction learning and demonstrate that manipulation of mGluR5 facilitates extinction of ethanol cues in association with neuronal plasticity. Copyright © 2014 the authors 0270-6474/14/347562-13$15.00/0.

  8. Application of different entropy formalisms in a neural network for novel word learning

    Science.gov (United States)

    Khordad, R.; Rastegar Sedehi, H. R.

    2015-12-01

    In this paper novel word learning in adults is studied. For this goal, four entropy formalisms are employed to include some degree of non-locality in a neural network. The entropy formalisms are Tsallis, Landsberg-Vedral, Kaniadakis, and Abe entropies. First, we have analytically obtained non-extensive cost functions for all the entropies. Then, we have used a generalization of the gradient descent dynamics as a learning rule in a simple perceptron. The Langevin equations are numerically solved and the error function (learning curve) is obtained versus time for different values of the parameters. The influence of the index q and the number of neurons N on learning is investigated for all the entropies. It is found that learning is a decreasing function of time for all the entropies. The rate of learning for the Landsberg-Vedral entropy is slower than for the other entropies. The variation of learning with time for the Landsberg-Vedral entropy is not appreciable when the number of neurons increases. We conclude that entropy formalisms can be used as a means of studying learning.
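    The generalized gradient-descent idea can be sketched for a simple student/teacher perceptron. The q-deformed cost used below, E_q(e) = (e^2)^q / (2q), is a hypothetical stand-in for the non-extensive cost functions derived in the record (it reduces to the ordinary quadratic cost at q = 1), and the noise term of the Langevin dynamics is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
N, q, eta = 20, 1.5, 0.02                         # neurons, index q, learning rate
w_teacher = rng.standard_normal(N) / np.sqrt(N)   # rule to be learned
w = np.zeros(N)                                   # student perceptron weights

def q_grad(err, q):
    """Gradient of the hypothetical q-deformed cost E_q(e) = (e**2)**q / (2q);
    reduces to the ordinary quadratic-cost gradient (= err) at q = 1."""
    return err * (err * err + 1e-12) ** (q - 1.0)

errors = []                                       # the learning curve
for _ in range(3000):
    x = rng.standard_normal(N)
    err = (w - w_teacher) @ x                     # student/teacher mismatch
    w -= eta * q_grad(err, q) * x                 # generalized gradient step
    errors.append(err * err)
```

    Plotting `errors` against the step index gives the learning curve; as in the record, it decreases with time, with a rate that depends on q and N.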

  9. Neural oscillatory mechanisms during novel grammar learning underlying language analytical abilities.

    Science.gov (United States)

    Kepinska, Olga; Pereda, Ernesto; Caspers, Johanneke; Schiller, Niels O

    2017-12-01

    The goal of the present study was to investigate the initial phases of novel grammar learning on a neural level, concentrating on mechanisms responsible for individual variability between learners. Two groups of participants, one with high and one with average language analytical abilities, performed an Artificial Grammar Learning (AGL) task consisting of learning and test phases. During the task, EEG signals from 32 cap-mounted electrodes were recorded and epochs corresponding to the learning phases were analysed. We investigated spectral power modulations over time, and functional connectivity patterns by means of a bivariate, frequency-specific index of phase synchronization termed Phase Locking Value (PLV). Behavioural data showed learning effects in both groups, with a steeper learning curve and higher ultimate attainment for the highly skilled learners. Moreover, we established that cortical connectivity patterns and profiles of spectral power modulations over time differentiated L2 learners with various levels of language analytical abilities. Over the course of the task, the learning process seemed to be driven by whole-brain functional connectivity between neuronal assemblies achieved by means of communication in the beta band frequency. On a shorter time-scale, increasing proficiency on the AGL task appeared to be supported by stronger local synchronisation within the right hemisphere regions. Finally, we observed that the highly skilled learners might have exerted less mental effort, or reduced attention for the task at hand once the learning was achieved, as evidenced by the higher alpha band power. Copyright © 2017 Elsevier Inc. All rights reserved.
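    The Phase Locking Value used in this record can be sketched in a few lines: extract instantaneous phases via the analytic signal and average the unit phase-difference vectors. (In EEG work the average is usually taken across trials at each time point; averaging across time, as below, is a simplification, and the sampling rate and frequencies are illustrative.)

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (the same construction scipy.signal.hilbert uses)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

def phase_locking_value(x, y):
    """PLV: 1 = perfectly constant phase difference, ~0 = no phase relation."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 250.0                                   # illustrative EEG sampling rate
t = np.arange(0, 4, 1 / fs)
beta = np.sin(2 * np.pi * 20 * t)            # 20 Hz "beta band" oscillation
locked = np.sin(2 * np.pi * 20 * t + 0.8)    # constant phase lag -> PLV near 1
noise = np.random.default_rng(1).standard_normal(t.size)
```

    `phase_locking_value(beta, locked)` is close to 1, while the PLV against broadband noise is small, which is the contrast the connectivity analysis exploits.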

  10. Neural Control of a Tracking Task via Attention-Gated Reinforcement Learning for Brain-Machine Interfaces.

    Science.gov (United States)

    Wang, Yiwen; Wang, Fang; Xu, Kai; Zhang, Qiaosheng; Zhang, Shaomin; Zheng, Xiaoxiang

    2015-05-01

    Reinforcement learning (RL)-based brain machine interfaces (BMIs) enable the user to learn from the environment through interactions to complete the task without desired signals, which is promising for clinical applications. Previous studies exploited Q-learning techniques to discriminate neural states into simple directional actions providing the trial initial timing. However, the movements in BMI applications can be quite complicated, and the action timing explicitly shows the intention of when to move. The rich actions and the corresponding neural states form a large state-action space, imposing generalization difficulty on Q-learning. In this paper, we propose to adopt attention-gated reinforcement learning (AGREL) as a new learning scheme for BMIs to adaptively decode high-dimensional neural activities into seven distinct movements (directional moves, holdings and resting), owing to its efficient weight updating. We apply AGREL to neural data recorded from M1 of a monkey to directly predict a seven-action set in a time sequence to reconstruct the trajectory of a center-out task. Compared to Q-learning techniques, AGREL improved the target acquisition rate to 90.16% on average, with faster convergence and greater stability in following neural activity over multiple days, indicating the potential to achieve better online decoding performance for more complicated BMI tasks.
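    The core idea of AGREL-style learning, reward-gated updates restricted to the stochastically selected action, can be caricatured as follows. The feature dimension, prototypes, noise level and learning rate are invented for illustration, and real AGREL additionally propagates an attentional gating signal to hidden layers:

```python
import numpy as np

rng = np.random.default_rng(2)
n_features, n_actions = 16, 7          # seven movements, as in the record
prototypes = rng.standard_normal((n_actions, n_features))
W = np.zeros((n_actions, n_features))  # linear decoder from the "neural state"
eta = 0.1

def run_trial(learn=True):
    a_true = rng.integers(n_actions)
    x = prototypes[a_true] + 0.3 * rng.standard_normal(n_features)
    logits = W @ x
    p = np.exp(logits - logits.max()); p /= p.sum()   # stochastic action choice
    a = rng.choice(n_actions, p=p)
    r = 1.0 if a == a_true else 0.0                   # reward for the right move
    if learn:
        delta = r - p[a]               # reward prediction error
        W[a] += eta * delta * x        # update only the chosen action's weights
    return a_true, x

for _ in range(4000):                  # learning phase
    run_trial()

hits = 0                               # greedy evaluation after learning
for _ in range(500):
    a_true, x = run_trial(learn=False)
    hits += int(np.argmax(W @ x) == a_true)
accuracy = hits / 500
```

    Chance level for seven actions is about 0.14; after training, greedy decoding is far above it.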

  11. A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language.

    Directory of Open Access Journals (Sweden)

    Bruno Golosio

    Full Text Available Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring them together into a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, which is a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, the meaning of words, or the role of the different word classes, learning only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on the literature on early language assessment, at the level of about a 4-year-old child, and produced 521 output sentences, expressing a broad range of language processing functionalities.

  12. A memristive plasticity model of voltage-based STDP suitable for recurrent bidirectional neural networks in the hippocampus.

    Science.gov (United States)

    Diederich, Nick; Bartsch, Thorsten; Kohlstedt, Hermann; Ziegler, Martin

    2018-06-19

    Memristive systems have gained considerable attention in the field of neuromorphic engineering, because they allow the emulation of synaptic functionality in solid state nano-physical systems. In this study, we show that memristive behavior provides a broad working framework for the phenomenological modelling of cellular synaptic mechanisms. In particular, we seek to understand how closely a memristive system can account for biological realism. The basic characteristics of memristive systems, i.e. voltage and memory behavior, are used to derive a voltage-based plasticity rule. We show that this model is suitable to account for a variety of electrophysiological plasticity data. Furthermore, we incorporate the plasticity model into an all-to-all connecting network scheme. Motivated by the auto-associative CA3 network of the hippocampus, we show that the implemented network allows the discrimination and processing of mnemonic pattern information, i.e. the formation of functional bidirectional connections resulting in local receptive fields. Since the presented plasticity model can be applied to real memristive devices as well, the presented theoretical framework can support both the design of appropriate memristive devices for neuromorphic computing and the development of complex neuromorphic networks that exploit the specific advantages of memristive devices.
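    The way a voltage-thresholded memristive device yields STDP can be illustrated with a toy pairing experiment: each spike contributes a stylized waveform at one device terminal, and the state only changes while the superposed voltage across the device exceeds a threshold. The pulse shape, threshold and rate constant below are invented for illustration, not taken from the record:

```python
import numpy as np

def spike_wave(t):
    """Stylized terminal voltage of one spike: a 1 ms positive pulse followed
    by a 5 ms negative tail (shape and amplitudes are illustrative only)."""
    w = np.zeros_like(t, dtype=float)
    w[(t >= 0.0) & (t < 1.0)] = 1.0
    w[(t >= 1.0) & (t < 6.0)] = -0.5
    return w

def weight_change(dt_ms, v_th=1.2, k=0.01):
    """Net change of the memristive state for one pre/post pairing separated
    by dt_ms (post minus pre spike time). The device only switches while the
    voltage across it exceeds the threshold v_th, in either polarity."""
    t = np.arange(-25.0, 25.0, 0.1)
    v = spike_wave(t - dt_ms) - spike_wave(t)     # post wave minus pre wave
    dm = np.where(v > v_th, k * (v - v_th), 0.0) + \
         np.where(v < -v_th, k * (v + v_th), 0.0)
    return dm.sum()
```

    Sweeping `dt_ms` traces out an STDP window: pre-before-post pairings potentiate, post-before-pre pairings depress, and pairings with no waveform overlap leave the device unchanged.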

  13. SORN: a self-organizing recurrent neural network

    Directory of Open Access Journals (Sweden)

    Andreea Lazar

    2009-10-01

    Full Text Available Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms shape recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space, reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success.
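    A stripped-down sketch of how SORN's three local plasticity rules interact in a binary recurrent network (network size, rates and sparsity below are invented, the input drive and inhibitory population of the full model are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
N, target_rate = 50, 0.1            # excitatory neurons, desired firing rate
eta_stdp, eta_ip = 0.001, 0.01
mask = rng.random((N, N)) < 0.1     # sparse, fixed recurrent connectivity
np.fill_diagonal(mask, False)
W = rng.random((N, N)) * mask
W /= W.sum(axis=1, keepdims=True) + 1e-12
T = 0.5 * rng.random(N)             # firing thresholds
x = (rng.random(N) < target_rate).astype(float)

rates = []
for _ in range(5000):
    x_new = (W @ x - T > 0.0).astype(float)
    # (1) STDP: pre-then-post strengthens, post-then-pre weakens
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W = np.clip(W, 0.0, None); W[~mask] = 0.0
    # (2) synaptic normalization: each neuron's incoming weights sum to one
    W /= W.sum(axis=1, keepdims=True) + 1e-12
    # (3) intrinsic plasticity: thresholds track the target firing rate
    T += eta_ip * (x_new - target_rate)
    x = x_new
    rates.append(x.mean())
```

    Intrinsic plasticity pins the long-run firing rate near the target while STDP and normalization reorganize the weights, which is the "healthy regime suitable for learning" the record refers to.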

  14. Using Deep Learning Neural Networks To Find Best Performing Audience Segments

    Directory of Open Access Journals (Sweden)

    Anup Badhe

    2015-08-01

    Full Text Available Finding the appropriate mobile audience for mobile advertising is always challenging, since there are many data points that need to be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning neural networks have been used in machine learning to apply multiple processing layers to interpret large datasets with multiple dimensions and arrive at a high-level characterization of the data. During a request for an advertisement, and subsequently the serving of the advertisement on the mobile device, many trackers fire, collecting numerous data points. If the user likes the advertisement and clicks on it, another set of trackers provides additional information resulting from the click. This information is aggregated by the ad server and shown in its reporting console. The same information can form the basis of machine learning: feeding it to a deep learning neural network yields audiences that can be targeted based on the product being advertised.

  15. Individual differences in sensitivity to reward and punishment and neural activity during reward and avoidance learning.

    Science.gov (United States)

    Kim, Sang Hee; Yoon, HeungSik; Kim, Hackjin; Hamann, Stephan

    2015-09-01

    In this functional neuroimaging study, we investigated neural activations during the process of learning to gain monetary rewards and to avoid monetary loss, and how these activations are modulated by individual differences in reward and punishment sensitivity. Healthy young volunteers performed a reinforcement learning task where they chose one of two fractal stimuli associated with monetary gain (reward trials) or avoidance of monetary loss (avoidance trials). Trait sensitivity to reward and punishment was assessed using the behavioral inhibition/activation scales (BIS/BAS). Functional neuroimaging results showed activation of the striatum during the anticipation and reception periods of reward trials. During avoidance trials, activation of the dorsal striatum and prefrontal regions was found. As expected, individual differences in reward sensitivity were positively associated with activation in the left and right ventral striatum during reward reception. Individual differences in sensitivity to punishment were negatively associated with activation in the left dorsal striatum during avoidance anticipation and also with activation in the right lateral orbitofrontal cortex during receiving monetary loss. These results suggest that learning to attain reward and learning to avoid loss are dependent on separable sets of neural regions whose activity is modulated by trait sensitivity to reward or punishment. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  16. Recurrent fuzzy neural network by using feedback error learning approaches for LFC in interconnected power system

    International Nuclear Information System (INIS)

    Sabahi, Kamel; Teshnehlab, Mohammad; Shoorhedeli, Mahdi Aliyari

    2009-01-01

    In this study, a new adaptive controller based on modified feedback error learning (FEL) approaches is proposed for the load frequency control (LFC) problem. The FEL strategy consists of intelligent and conventional controllers in the feedforward and feedback paths, respectively. In this strategy, a conventional feedback controller (CFC), i.e. a proportional, integral and derivative (PID) controller, is essential to guarantee global asymptotic stability of the overall system, and an intelligent feedforward controller (INFC) is adopted to learn the inverse of the controlled system. Therefore, when the INFC has learned the inverse of the controlled system, the reference signal is tracked properly. Generally, the CFC is designed at nominal operating conditions of the system and, therefore, fails to provide the best control performance as well as global stability over a wide range of changes in the operating conditions of the system. So, in this study a supervised controller (SC), a lookup-table-based controller, is introduced for tuning the CFC. During abrupt changes of the power system parameters, the SC adjusts the PID parameters according to these operating conditions. Moreover, to improve the performance of the overall system, a recurrent fuzzy neural network (RFNN) is adopted in the INFC instead of the conventional neural network used in past studies. The proposed FEL controller has been compared with the conventional feedback error learning controller (CFEL) and the PID controller through several performance indices.
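    The feedback error learning principle itself fits in a few lines: the conventional feedback controller's output doubles as the teaching signal for the feedforward inverse model. The first-order plant, proportional-only feedback and linear feedforward model below are illustrative simplifications, not the LFC/RFNN setup of the record:

```python
import numpy as np

a, b, dt = 1.0, 0.5, 0.01     # illustrative first-order plant y' = -a*y + b*u
Kp = 2.0                      # conventional feedback controller (P only, for brevity)
w = np.zeros(2)               # feedforward weights on the features [r, r_dot]
eta = 0.5

y = 0.0
err_trace = []
for k in range(int(60.0 / dt)):
    t = k * dt
    r, r_dot = np.sin(t), np.cos(t)         # reference and its derivative
    e = r - y
    u_fb = Kp * e                           # feedback path (guarantees stability)
    u_ff = w @ np.array([r, r_dot])         # feedforward path (learned inverse)
    y += dt * (-a * y + b * (u_fb + u_ff))  # plant step
    # feedback error learning: the CFC output is the INFC's teaching signal
    w += eta * u_fb * np.array([r, r_dot]) * dt
    err_trace.append(abs(e))

early = np.mean(err_trace[:500])    # first 5 s: feedback alone does the work
late = np.mean(err_trace[-500:])    # last 5 s: learned inverse has taken over
```

    As the feedforward weights approach the plant inverse (here w → (a/b, 1/b)), the feedback contribution and the tracking error both shrink, which is exactly the "learn the inverse of the controlled system" behaviour described above.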

  17. Nuclear power plant monitoring using real-time learning neural network

    International Nuclear Information System (INIS)

    Nabeshima, Kunihiko; Tuerkcan, E.; Ciftcioglu, O.

    1994-01-01

    In the present research, an artificial neural network (ANN) with real-time adaptive learning is developed for the plant-wide monitoring of the Borssele Nuclear Power Plant (NPP). Adaptive ANN learning capability is integrated into the monitoring system so that robust and sensitive on-line monitoring is achieved in a real-time environment. The major advantages provided by the ANN are that system modelling is formed from measurement information obtained from a multi-output process system, explicit modelling is not required, and the modelling is not restricted to linear systems. The ANN can also respond very fast to anomalous operational conditions. The real-time ANN learning methodology with adaptive real-time monitoring capability is described for wide-range, plant-wide data from an operating nuclear power plant. The layered neural network, trained with the error backpropagation algorithm, has three layers. The network type is auto-associative: inputs and outputs are exactly the same, using 12 plant signals. (author)
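    The auto-associative monitoring idea can be sketched as a small backpropagation autoencoder: train it to reproduce the 12 correlated plant signals during normal operation, then flag readings whose reconstruction error is large. The signal model, network sizes and training schedule below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n_signals, n_latent, n_hidden = 12, 3, 6    # 12 plant signals, as in the record

# Normal operation: the 12 signals are driven by a few shared process variables
mix = rng.standard_normal((n_signals, n_latent)) / np.sqrt(n_latent)
def plant_signals(n):
    return (mix @ rng.standard_normal((n_latent, n))).T

# Auto-associative network: inputs and target outputs are the same 12 signals
W1 = 0.1 * rng.standard_normal((n_hidden, n_signals))
W2 = 0.1 * rng.standard_normal((n_signals, n_hidden))
eta = 0.01
for x in plant_signals(8000):               # on-line backpropagation training
    h = np.tanh(W1 @ x)
    err = W2 @ h - x
    W2 -= eta * np.outer(err, h)
    W1 -= eta * np.outer((W2.T @ err) * (1.0 - h * h), x)

def recon_error(x):
    return float(np.linalg.norm(W2 @ np.tanh(W1 @ x) - x))

normal_err = np.mean([recon_error(x) for x in plant_signals(200)])
fault_err = recon_error(3.0 * rng.standard_normal(n_signals))  # anomalous reading
```

    A reading consistent with normal plant correlations reconstructs well; an off-manifold reading does not, so thresholding `recon_error` gives a simple anomaly alarm.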

  18. Convolutional Neural Network Based on Extreme Learning Machine for Maritime Ships Recognition in Infrared Images.

    Science.gov (United States)

    Khellal, Atmane; Ma, Hongbin; Fei, Qing

    2018-05-09

    The success of Deep Learning models, notably convolutional neural networks (CNNs), makes them the favorable solution for object recognition systems in both visible and infrared domains. However, the lack of training data in the case of maritime ships research leads to poor performance due to the problem of overfitting. In addition, the back-propagation algorithm used to train CNNs is very slow and requires tuning many hyperparameters. To overcome these weaknesses, we introduce a new approach fully based on Extreme Learning Machine (ELM) to learn useful CNN features and perform a fast and accurate classification, which is suitable for infrared-based recognition systems. The proposed approach combines an ELM-based learning algorithm to train the CNN for discriminative feature extraction and an ELM-based ensemble for classification. The experimental results on the VAIS dataset, which is the largest dataset of maritime ships, confirm that the proposed approach outperforms the state-of-the-art models in terms of generalization performance and training speed. For instance, the proposed model is up to 950 times faster than the traditional back-propagation based training of convolutional neural networks, primarily for low-level feature extraction.
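    The ELM classification stage is what makes the training fast: hidden weights stay random and only the output weights are solved in closed form. A minimal sketch on synthetic two-class data standing in for CNN feature vectors (the feature dimension, class separation and hidden size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def elm_fit(X, Y, n_hidden=200):
    """Extreme Learning Machine: hidden weights stay random; only the output
    weights are solved, in closed form, by least squares (no back-propagation)."""
    Wh = rng.standard_normal((X.shape[1], n_hidden))
    bh = rng.standard_normal(n_hidden)
    H = np.tanh(X @ Wh + bh)
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return Wh, bh, beta

def elm_predict(model, X):
    Wh, bh, beta = model
    return np.argmax(np.tanh(X @ Wh + bh) @ beta, axis=1)

# Toy stand-in for CNN feature vectors of two ship classes (purely illustrative)
X = np.vstack([rng.standard_normal((200, 10)) + 2.0,
               rng.standard_normal((200, 10)) - 2.0])
labels = np.r_[np.zeros(200, dtype=int), np.ones(200, dtype=int)]
Y = np.eye(2)[labels]                      # one-hot targets

model = elm_fit(X, Y)
accuracy = float(np.mean(elm_predict(model, X) == labels))
```

    Because the only "training" is one least-squares solve, fitting is orders of magnitude faster than iterative back-propagation, which is the speed advantage the record reports.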

  19. Neural substrates underlying stimulation-enhanced motor skill learning after stroke.

    Science.gov (United States)

    Lefebvre, Stéphanie; Dricot, Laurence; Laloux, Patrice; Gradkowski, Wojciech; Desfontaines, Philippe; Evrard, Frédéric; Peeters, André; Jamart, Jacques; Vandermeeren, Yves

    2015-01-01

    Motor skill learning is one of the key components of motor function recovery after stroke, especially recovery driven by neurorehabilitation. Transcranial direct current stimulation can enhance neurorehabilitation and motor skill learning in stroke patients. However, the neural mechanisms underlying the retention of stimulation-enhanced motor skill learning involving a paretic upper limb have not been resolved. These neural substrates were explored by means of functional magnetic resonance imaging. Nineteen chronic hemiparetic stroke patients participated in a double-blind, cross-over randomized, sham-controlled experiment with two series. Each series consisted of two sessions: (i) an intervention session during which dual transcranial direct current stimulation or sham was applied during motor skill learning with the paretic upper limb; and (ii) an imaging session 1 week later, during which the patients performed the learned motor skill. The motor skill learning task, called the 'circuit game', involves a speed/accuracy trade-off and consists of moving a pointer controlled by a computer mouse along a complex circuit as quickly and accurately as possible. Relative to the sham series, dual transcranial direct current stimulation applied bilaterally over the primary motor cortex during motor skill learning with the paretic upper limb resulted in (i) enhanced online motor skill learning; (ii) enhanced 1-week retention; and (iii) superior transfer of performance improvement to an untrained task. The enhancement of 1-week retention driven by the intervention was associated with a trend towards normalization of the brain activation pattern during performance of the learned motor skill relative to the sham series. A similar trend towards normalization relative to sham was observed during performance of a simple, untrained task without a speed/accuracy constraint, despite a lack of behavioural difference between the dual transcranial direct current stimulation and sham

  20. Multivariate Cross-Classification: Applying machine learning techniques to characterize abstraction in neural representations

    Directory of Open Access Journals (Sweden)

    Jonas eKaplan

    2015-03-01

    Full Text Available Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application.
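    The train-on-one-context, test-on-another logic of MVCC can be demonstrated on synthetic "voxel" patterns. Everything below (pattern generator, noise level, least-squares linear classifier) is an illustrative stand-in for the fMRI pipelines the review discusses:

```python
import numpy as np

rng = np.random.default_rng(6)
n_voxels, n_trials = 30, 100
category_axis = rng.standard_normal(n_voxels)        # code shared across contexts
context_offset = rng.standard_normal((2, n_voxels))  # context-specific baseline

def simulate(context, n):
    """Synthetic activity patterns: an abstract category signal (+/- along a
    shared axis), a context-specific offset, and independent voxel noise."""
    labels = rng.integers(0, 2, n)
    X = (np.outer(2.0 * labels - 1.0, category_axis)
         + context_offset[context]
         + 0.5 * rng.standard_normal((n, n_voxels)))
    return X, labels

# Train a least-squares linear classifier on context A ...
Xa, ya = simulate(0, n_trials)
w, *_ = np.linalg.lstsq(np.c_[Xa, np.ones(n_trials)], 2.0 * ya - 1.0, rcond=None)
# ... and test it on context B: above-chance accuracy implies a representation
# that abstracts over the contexts (the logic of MVCC)
Xb, yb = simulate(1, n_trials)
cross_acc = float(np.mean((np.c_[Xb, np.ones(n_trials)] @ w > 0).astype(int) == yb))
```

    If the category code were entirely context-specific (no shared `category_axis`), cross-context accuracy would fall to chance, which is the null hypothesis MVCC tests against.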

  1. Neural correlates of reward-based spatial learning in persons with cocaine dependence.

    Science.gov (United States)

    Tau, Gregory Z; Marsh, Rachel; Wang, Zhishun; Torres-Sanchez, Tania; Graniello, Barbara; Hao, Xuejun; Xu, Dongrong; Packard, Mark G; Duan, Yunsuo; Kangarlu, Alayar; Martinez, Diana; Peterson, Bradley S

    2014-02-01

    Dysfunctional learning systems are thought to be central to the pathogenesis of and impair recovery from addictions. The functioning of the brain circuits for episodic memory or learning that support goal-directed behavior has not been studied previously in persons with cocaine dependence (CD). Thirteen abstinent CD and 13 healthy participants underwent MRI scanning while performing a task that requires the use of spatial cues to navigate a virtual-reality environment and find monetary rewards, allowing the functional assessment of the brain systems for spatial learning, a form of episodic memory. Whereas both groups performed similarly on the reward-based spatial learning task, we identified disturbances in brain regions involved in learning and reward in CD participants. In particular, CD was associated with impaired functioning of medial temporal lobe (MTL), a brain region that is crucial for spatial learning (and episodic memory) with concomitant recruitment of striatum (which normally participates in stimulus-response, or habit, learning), and prefrontal cortex. CD was also associated with enhanced sensitivity of the ventral striatum to unexpected rewards but not to expected rewards earned during spatial learning. We provide evidence that spatial learning in CD is characterized by disturbances in functioning of an MTL-based system for episodic memory and a striatum-based system for stimulus-response learning and reward. We have found additional abnormalities in distributed cortical regions. Consistent with findings from animal studies, we provide the first evidence in humans describing the disruptive effects of cocaine on the coordinated functioning of multiple neural systems for learning and memory.

  2. Neural correlates of testing effects in vocabulary learning.

    Science.gov (United States)

    van den Broek, Gesa S E; Takashima, Atsuko; Segers, Eliane; Fernández, Guillén; Verhoeven, Ludo

    2013-09-01

    Tests that require memory retrieval strongly improve long-term retention in comparison to continued studying. For example, once learners know the translation of a word, restudy practice, during which they see the word and its translation again, is less effective than testing practice, during which they see only the word and retrieve the translation from memory. In the present functional magnetic resonance imaging (fMRI) study, we investigated the neuro-cognitive mechanisms underlying this striking testing effect. Twenty-six young adults without prior knowledge of Swahili learned the translation of 100 Swahili words and then further practiced the words in an fMRI scanner by restudying or by testing. Recall of the translations on a final memory test after one week was significantly better and faster for tested words than for restudied words. Brain regions that were more active during testing than during restudying included the left inferior frontal gyrus, ventral striatum, and midbrain areas. Increased activity in the left inferior parietal and left middle temporal areas during testing but not during restudying predicted better recall on the final memory test. Together, results suggest that testing may be more beneficial than restudying due to processes related to targeted semantic elaboration and selective strengthening of associations between retrieval cues and relevant responses, and may involve increased effortful cognitive control and modulations of memory through striatal motivation and reward circuits. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Strain-dependent variations in spatial learning and in hippocampal synaptic plasticity in the dentate gyrus of freely behaving rats

    Directory of Open Access Journals (Sweden)

    Denise eManahan-Vaughan

    2011-03-01

    Full Text Available Hippocampal synaptic plasticity is believed to comprise the cellular basis for spatial learning. Strain-dependent differences in synaptic plasticity in the CA1 region have been reported. However, it is not known whether these differences extend to other synapses within the trisynaptic circuit, although there is evidence for morphological variations within that path. We investigated whether Wistar and Hooded Lister (HL) rat strains express differences in synaptic plasticity in the dentate gyrus in vivo. We also explored whether they exhibit differences in the ability to engage in spatial learning in an 8-arm radial maze. Basal synaptic transmission was stable over a 24 h period in both rat strains, and the input-output relationship of the two strains was not significantly different. Paired-pulse analysis revealed significantly less paired-pulse facilitation in the Hooded Lister strain when pulses were given 40-100 ms apart. Low-frequency stimulation at 1 Hz evoked long-term depression (>24 h) in Wistar and short-term depression (<2 h) in HL rats; 200 Hz stimulation induced long-term potentiation (>24 h) in Wistar rats and a transient, significantly smaller potentiation (<1 h) in HL rats, suggesting that HL rats have higher thresholds for the expression of persistent synaptic plasticity. Training for 10 days in an 8-arm radial maze revealed that HL rats master the working memory task faster than Wistar rats, although both strains show equivalent performance by the end of the trial period. HL rats also perform more efficiently in a double working and reference memory task. On the other hand, Wistar rats show better reference memory performance on the final days (8-10) of training. Wistar rats were also less active and more anxious than HL rats. These data suggest that strain-dependent variations in hippocampal synaptic plasticity occur at different hippocampal synapses. A clear correlation with differences in spatial learning is not evident, however.

  4. Accelerating learning of neural networks with conjugate gradients for nuclear power plant applications

    International Nuclear Information System (INIS)

    Reifman, J.; Vitela, J.E.

    1994-01-01

    The method of conjugate gradients is used to expedite the learning process of feedforward multilayer artificial neural networks and to systematically update both the learning parameter and the momentum parameter at each training cycle. The mechanism for the occurrence of premature saturation of the network nodes observed with the back propagation algorithm is described, suggestions are made to eliminate this undesirable phenomenon, and the reason this phenomenon is precluded in the method of conjugate gradients is presented. The proposed method is compared with the standard back propagation algorithm in the training of neural networks to classify transient events in nuclear power plants, simulated by the Midland Nuclear Power Plant Unit 2 simulator. The comparison results indicate that the rate of convergence of the proposed method is much greater than that of standard back propagation, that it reduces both the number of training cycles and the CPU time, and that it is less sensitive to the choice of initial weights. The advantages of the method are more noticeable and important for problems where the network architecture consists of a large number of nodes, the training database is large, and a tight convergence criterion is desired.
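    The core idea of this abstract, replacing a fixed learning rate and momentum constant with parameters recomputed along conjugate search directions each cycle, can be illustrated with a toy implementation. The following is a minimal sketch under stated assumptions (a tiny sigmoid network on XOR-style data, Polak-Ribière conjugate gradients with Armijo backtracking line search), not the authors' actual algorithm or data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary classification data (XOR), standing in for transient patterns.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    # Parameter shapes for one hidden layer of 4 sigmoid units: W1, b1, W2, b2.
    shapes = [(2, 4), (1, 4), (4, 1), (1, 1)]

    def unpack(theta):
        parts, i = [], 0
        for s in shapes:
            n = s[0] * s[1]
            parts.append(theta[i:i + n].reshape(s))
            i += n
        return parts

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def loss_and_grad(theta):
        # Forward pass, then back propagation for the gradient.
        W1, b1, W2, b2 = unpack(theta)
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        err = out - y
        loss = 0.5 * np.mean(err ** 2)
        d_out = err * out * (1 - out) / len(X)
        gW2 = h.T @ d_out
        gb2 = d_out.sum(axis=0, keepdims=True)
        d_h = (d_out @ W2.T) * h * (1 - h)
        gW1 = X.T @ d_h
        gb1 = d_h.sum(axis=0, keepdims=True)
        return loss, np.concatenate([g.ravel() for g in (gW1, gb1, gW2, gb2)])

    def backtracking(theta, d, loss, grad, alpha=1.0, c=1e-4, tau=0.5):
        # Armijo line search: shrink the step until sufficient decrease.
        slope = grad @ d
        while True:
            new_loss, _ = loss_and_grad(theta + alpha * d)
            if new_loss <= loss + c * alpha * slope or alpha < 1e-10:
                return alpha
            alpha *= tau

    theta = rng.normal(scale=0.5, size=sum(a * b for a, b in shapes))
    loss, grad = loss_and_grad(theta)
    initial_loss = loss
    d = -grad
    for cycle in range(200):
        alpha = backtracking(theta, d, loss, grad)  # step size found per cycle
        theta = theta + alpha * d
        new_loss, new_grad = loss_and_grad(theta)
        # Polak-Ribiere coefficient, clipped at zero (automatic restart).
        beta = max(0.0, new_grad @ (new_grad - grad) / (grad @ grad))
        d = -new_grad + beta * d
        loss, grad = new_loss, new_grad
    ```

    Because the line search picks the step size and the β coefficient is recomputed every cycle, the effective learning rate and momentum adapt automatically, which is the property the abstract credits for faster convergence than fixed-parameter back propagation.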

  5. A role for adult TLX-positive neural stem cells in learning and behaviour.

    Science.gov (United States)

    Zhang, Chun-Li; Zou, Yuhua; He, Weimin; Gage, Fred H; Evans, Ronald M

    2008-02-21

    Neurogenesis persists in the adult brain and can be regulated by a plethora of external stimuli, such as learning, memory, exercise, environment and stress. Although newly generated neurons are able to migrate and preferentially incorporate into the neural network, how these cells are molecularly regulated and whether they are required for any normal brain function are unresolved questions. The adult neural stem cell pool is composed of orphan nuclear receptor TLX-positive cells. Here, using genetic approaches in mice, we demonstrate that TLX (also called NR2E1) regulates adult neural stem cell proliferation in a cell-autonomous manner by controlling a defined genetic network implicated in cell proliferation and growth. Consequently, specific removal of TLX from the adult mouse brain through inducible recombination results in a significant reduction of stem cell proliferation and a marked decrement in spatial learning. In contrast, the resulting suppression of adult neurogenesis does not affect contextual fear conditioning, locomotion or diurnal rhythmic activities, indicating a more selective contribution of newly generated neurons to specific cognitive functions.