WorldWideScience

Sample records for learning promotes neural

  1. Rhesus monkey neural stem cell transplantation promotes neural regeneration in rats with hippocampal lesions

    Directory of Open Access Journals (Sweden)

    Li-juan Ye

    2016-01-01

Full Text Available Rhesus monkey neural stem cells are capable of differentiating into neurons and glial cells. Therefore, neural stem cell transplantation can be used to promote functional recovery of the nervous system. Rhesus monkey neural stem cells (1 × 10⁵ cells/μL) were injected into the bilateral hippocampi of rats with hippocampal lesions. Confocal laser scanning microscopy demonstrated that green fluorescent protein-labeled transplanted cells survived and grew well. Transplanted cells were detected not only at the lesion site but also in the nerve fiber-rich region of the cerebral cortex and corpus callosum. Some transplanted cells differentiated into neurons and glial cells clustering along the ventricular wall, and integrated into the recipient brain. Behavioral tests revealed that spatial learning and memory ability improved, indicating that rhesus monkey neural stem cells noticeably improve spatial learning and memory abilities in rats with hippocampal lesions.

  2. Shaping the learning curve: epigenetic dynamics in neural plasticity

    Directory of Open Access Journals (Sweden)

    Zohar Ziv Bronfman

    2014-07-01

    Full Text Available A key characteristic of learning and neural plasticity is state-dependent acquisition dynamics reflected by the non-linear learning curve that links increase in learning with practice. Here we propose that the manner by which epigenetic states of individual cells change during learning contributes to the shape of the neural and behavioral learning curve. We base our suggestion on recent studies showing that epigenetic mechanisms such as DNA methylation, histone acetylation and RNA-mediated gene regulation are intimately involved in the establishment and maintenance of long-term neural plasticity, reflecting specific learning-histories and influencing future learning. Our model, which is the first to suggest a dynamic molecular account of the shape of the learning curve, leads to several testable predictions regarding the link between epigenetic dynamics at the promoter, gene-network and neural-network levels. This perspective opens up new avenues for therapeutic interventions in neurological pathologies.

  3. Developmental song learning as a model to understand neural mechanisms that limit and promote the ability to learn.

    Science.gov (United States)

    London, Sarah E

    2017-11-20

    Songbirds famously learn their vocalizations. Some species can learn continuously, others seasonally, and still others just once. The zebra finch (Taeniopygia guttata) learns to sing during a single developmental "Critical Period," a restricted phase during which a specific experience has profound and permanent effects on brain function and behavioral patterns. The zebra finch can therefore provide fundamental insight into features that promote and limit the ability to acquire complex learned behaviors. For example, what properties permit the brain to come "on-line" for learning? How does experience become encoded to prevent future learning? What features define the brain in receptive compared to closed learning states? This piece will focus on epigenomic, genomic, and molecular levels of analysis that operate on the timescales of development and complex behavioral learning. Existing data will be discussed as they relate to Critical Period learning, and strategies for future studies to more directly address these questions will be considered. Birdsong learning is a powerful model for advancing knowledge of the biological intersections of maturation and experience. Lessons from its study not only have implications for understanding developmental song learning, but also broader questions of learning potential and the enduring effects of early life experience on neural systems and behavior. Copyright © 2017. Published by Elsevier B.V.

  4. Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks

    KAUST Repository

    Umarov, Ramzan

    2017-02-03

Accurate computational identification of promoters remains a challenge, as these key DNA regulatory regions have variable structures composed of functional motifs that provide gene-specific initiation of transcription. In this paper we utilize Convolutional Neural Networks (CNN) to analyze sequence characteristics of prokaryotic and eukaryotic promoters and build predictive models of them. We trained a similar CNN architecture on promoters of five distant organisms: human, mouse, plant (Arabidopsis), and two bacteria (Escherichia coli and Bacillus subtilis). We found that a CNN trained on the sigma70 subclass of Escherichia coli promoters gives an excellent classification of promoter and non-promoter sequences (Sn = 0.90, Sp = 0.96, CC = 0.84). The CNN model for Bacillus subtilis promoter identification achieves Sn = 0.91, Sp = 0.95, and CC = 0.86. For human, mouse and Arabidopsis promoters we employed CNNs for identification of two well-known promoter classes (TATA and non-TATA promoters). The CNN models recognize these complex functional regions well. For human promoters, Sn/Sp/CC prediction accuracy reached 0.95/0.98/0.90 for TATA and 0.90/0.98/0.89 for non-TATA promoter sequences, respectively. For Arabidopsis we observed Sn/Sp/CC of 0.95/0.97/0.91 (TATA) and 0.94/0.94/0.86 (non-TATA) promoters. Thus, the developed CNN models, implemented in the CNNProm program, demonstrate the ability of the deep learning approach to grasp complex promoter sequence characteristics and achieve significantly higher accuracy than previously developed promoter prediction programs. We also propose a random substitution procedure to discover positionally conserved promoter functional elements. As the suggested approach does not require knowledge of any specific promoter features, it can be easily extended to identify promoters and other complex functional regions in sequences of many other, and especially newly sequenced, genomes. The CNNProm program is available to run at the web server http://www.softberry.com.
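Record 4's core idea, scanning one-hot-encoded DNA with convolutional filters that act as motif detectors, can be illustrated with a minimal sketch. Everything below (the A/C/G/T row order, the hand-built TATA filter, the function names) is an illustrative assumption, not CNNProm's actual trained architecture:

```python
import numpy as np

# One-hot encode a DNA sequence; the row order A, C, G, T is an arbitrary
# choice here (CNNProm's actual encoding is not specified in the abstract).
def one_hot(seq):
    alphabet = "ACGT"
    mat = np.zeros((4, len(seq)))
    for i, base in enumerate(seq):
        mat[alphabet.index(base), i] = 1.0
    return mat

# Slide one convolutional filter over the sequence; a trained CNN learns many
# such filters jointly, each acting as a soft motif detector.
def conv_scan(onehot, filt):
    length, width = onehot.shape[1], filt.shape[1]
    return np.array([np.sum(onehot[:, i:i + width] * filt)
                     for i in range(length - width + 1)])

tata_filter = one_hot("TATAAA")          # hand-built matched filter (illustrative)
scores = conv_scan(one_hot("GGCGTATAAAGGCC"), tata_filter)
print(scores.argmax(), scores.max())     # peak at position 4 with score 6.0
```

In a real CNN the filter weights are learned from labeled promoter and non-promoter sequences rather than written by hand.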

  5. Learning from neural control.

    Science.gov (United States)

    Wang, Cong; Hill, David J

    2006-01-01

    One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically-plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.
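The localized RBF approximation at the heart of record 5 can be sketched numerically: along a periodic trajectory only Gaussian units with nearby centers respond, and their output weights can be fitted to the observed dynamics. The center count, width, and batch least-squares fit below are illustrative choices, not the paper's online adaptive law:

```python
import numpy as np

# Gaussian RBF features over a 1-D state; along a periodic orbit only units
# near the visited states are excited, which is the intuition behind the
# partial persistence-of-excitation condition.
centers = np.linspace(0.0, 2.0 * np.pi, 20)
width = 0.4

def rbf_features(x):
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

x = np.linspace(0.0, 2.0 * np.pi, 200)   # states along a periodic orbit
f = np.sin(x)                            # stand-in for the unknown dynamics

Phi = rbf_features(x)
w, *_ = np.linalg.lstsq(Phi, f, rcond=None)   # learn the output weights

err = np.max(np.abs(Phi @ w - f))
print(err)  # small approximation error in the visited region
```

The accuracy is local: evaluating the same weights far from the trajectory's neighborhood would give no such guarantee, mirroring the paper's "local region along the periodic state trajectory" result.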

  6. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    Science.gov (United States)

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  7. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

Full Text Available Most modern analyses in high energy physics use signal-versus-background classification techniques from machine learning, and neural networks in particular. Deep learning neural networks are the most promising modern technique for separating signal from background, and nowadays they can be widely and successfully implemented as part of a physics analysis. In this article we compare deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  8. Opponent appetitive-aversive neural processes underlie predictive learning of pain relief.

    Science.gov (United States)

    Seymour, Ben; O'Doherty, John P; Koltzenburg, Martin; Wiech, Katja; Frackowiak, Richard; Friston, Karl; Dolan, Raymond

    2005-09-01

    Termination of a painful or unpleasant event can be rewarding. However, whether the brain treats relief in a similar way as it treats natural reward is unclear, and the neural processes that underlie its representation as a motivational goal remain poorly understood. We used fMRI (functional magnetic resonance imaging) to investigate how humans learn to generate expectations of pain relief. Using a pavlovian conditioning procedure, we show that subjects experiencing prolonged experimentally induced pain can be conditioned to predict pain relief. This proceeds in a manner consistent with contemporary reward-learning theory (average reward/loss reinforcement learning), reflected by neural activity in the amygdala and midbrain. Furthermore, these reward-like learning signals are mirrored by opposite aversion-like signals in lateral orbitofrontal cortex and anterior cingulate cortex. This dual coding has parallels to 'opponent process' theories in psychology and promotes a formal account of prediction and expectation during pain.
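The reward-learning account in record 8 rests on a prediction-error update. A minimal Rescorla-Wagner-style sketch, in which relief acts as a positive reinforcer predicted by a conditioned stimulus (all constants are assumed for illustration; the study's actual model, average reward/loss reinforcement learning, is richer):

```python
# Minimal prediction-error learning sketch: the conditioned stimulus (CS)
# comes to predict the value of pain offset. Alpha and the relief value are
# illustrative assumptions, not fitted parameters from the study.
alpha = 0.2          # learning rate (assumed)
relief_value = 1.0   # outcome value assigned to pain offset
v = 0.0              # learned relief prediction for the CS

errors = []
for trial in range(30):
    delta = relief_value - v   # prediction error (the putative reward-like signal)
    v += alpha * delta
    errors.append(delta)

print(round(v, 3))  # prediction approaches 1.0 as the error shrinks
```

The shrinking `delta` over trials is the quantity whose neural correlate the study localizes to the amygdala and midbrain, with sign-flipped counterparts in orbitofrontal and cingulate cortex.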

  9. Entropy Learning in Neural Network

    Directory of Open Access Journals (Sweden)

    Geok See Ng

    2017-12-01

Full Text Available In this paper, an entropy term is used in the learning phase of a neural network. As learning progresses, more hidden nodes go into saturation. The early creation of such hidden nodes may impair generalisation. Hence, an entropy approach is proposed to dampen the early creation of such nodes. Entropy learning also helps to increase the importance of relevant nodes while dampening the less important ones. At the end of learning, the less important nodes can then be eliminated to reduce the memory requirements of the neural network.
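The connection between saturation and entropy in record 9 can be made concrete: a sigmoid unit pinned near 0 or 1 has almost no binary entropy. The composite cost below is one plausible reading of the approach (the paper's exact formulation is not reproduced in the abstract):

```python
import numpy as np

# Binary entropy of a sigmoid hidden activation: saturated nodes (output near
# 0 or 1) carry almost no entropy, active nodes near 0.5 carry the most.
def binary_entropy(a, eps=1e-12):
    a = np.clip(a, eps, 1 - eps)
    return -(a * np.log(a) + (1 - a) * np.log(1 - a))

h_saturated = np.array([0.001, 0.999, 0.998])
h_active = np.array([0.4, 0.6, 0.5])
print(binary_entropy(h_saturated).mean())  # near 0
print(binary_entropy(h_active).mean())     # near ln 2

# One plausible composite cost (an assumed form, not the paper's exact one):
# squared error minus an entropy bonus, so driving hidden nodes into early
# saturation raises the cost and is thereby dampened.
def cost(err, hidden, lam=0.1):
    return 0.5 * np.sum(err ** 2) - lam * binary_entropy(hidden).mean()
```

After training, nodes whose entropy stays low contribute little and are candidates for the pruning the abstract describes.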

  10. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

Providing a broad yet in-depth introduction to neural networks and machine learning in a statistical framework, this book offers a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered, with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  11. Neural plasticity of development and learning.

    Science.gov (United States)

    Galván, Adriana

    2010-06-01

Development and learning are powerful agents of change across the lifespan that induce robust structural and functional plasticity in neural systems. An unresolved question in developmental cognitive neuroscience is whether development and learning share the same neural mechanisms associated with experience-related neural plasticity. In this article, I outline the conceptual and practical challenges of this question, review insights gleaned from adult studies, and describe recent strides toward examining this topic across development using neuroimaging methods. I suggest that development and learning are not two completely separate constructs and instead, that they exist on a continuum. While progressive and regressive changes are central to both, the behavioral consequences associated with these changes are closely tied to the existing neural architecture and maturity of the system. Eventually, a deeper, more mechanistic understanding of neural plasticity will shed light on behavioral changes across development and, more broadly, about the underlying neural basis of cognition. (c) 2010 Wiley-Liss, Inc.

  12. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

Recent advances in telecommunications, medical imaging and signal processing deal with signals that are inherently time-varying, nonlinear and complex-valued. The time-varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks. Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive learning algorithms that represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. Meta-cognition coupled with self-regulated learning is known to be the best human learning strategy. In this monograph, the principles of meta-cognition are introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  13. Post-learning hippocampal dynamics promote preferential retention of rewarding events

    Science.gov (United States)

    Gruber, Matthias J.; Ritchey, Maureen; Wang, Shao-Fang; Doss, Manoj K.; Ranganath, Charan

    2016-01-01

Reward motivation is known to modulate memory encoding, and this effect depends on interactions between the substantia nigra/ventral tegmental area complex (SN/VTA) and the hippocampus. It is unknown, however, whether these interactions influence offline neural activity in the human brain that is thought to promote memory consolidation. Here, we used functional magnetic resonance imaging (fMRI) to test the effect of reward motivation on post-learning neural dynamics and subsequent memory for objects that were learned in high- or low-reward motivation contexts. We found that post-learning increases in resting-state functional connectivity between the SN/VTA and hippocampus predicted preferential retention of objects that were learned in high-reward contexts. In addition, multivariate pattern classification revealed that hippocampal representations of high-reward contexts were preferentially reactivated during post-learning rest, and the number of hippocampal reactivations was predictive of preferential retention of items learned in high-reward contexts. These findings indicate that reward motivation alters offline post-learning dynamics between the SN/VTA and hippocampus, providing novel evidence for a potential mechanism by which reward could influence memory consolidation. PMID:26875624

  14. Biologically based neural circuit modelling for the study of fear learning and extinction

    Science.gov (United States)

    Nair, Satish S.; Paré, Denis; Vicentic, Aleksandra

    2016-11-01

The neuronal systems that promote protective defensive behaviours have been studied extensively using Pavlovian conditioning. In this paradigm, an initially neutral conditioned stimulus is paired with an aversive unconditioned stimulus, leading the subjects to display behavioural signs of fear. Decades of research into the neural bases of this simple behavioural paradigm uncovered that the amygdala, a complex structure comprised of several interconnected nuclei, is an essential part of the neural circuits required for the acquisition, consolidation and expression of fear memory. However, emerging evidence from the confluence of electrophysiological, tract tracing, imaging, molecular, optogenetic and chemogenetic methodologies reveals that fear learning is mediated by multiple connections between several amygdala nuclei and their distributed targets, dynamical changes in plasticity in local circuit elements, as well as neuromodulatory mechanisms that promote synaptic plasticity. To uncover these complex relations and analyse multi-modal data sets acquired from these studies, we argue that biologically realistic computational modelling, in conjunction with experiments, offers an opportunity to advance our understanding of the neural circuit mechanisms of fear learning and to address how their dysfunction may lead to maladaptive fear responses in mental disorders.

  15. Learning in Artificial Neural Systems

    Science.gov (United States)

    Matheus, Christopher J.; Hohensee, William E.

    1987-01-01

    This paper presents an overview and analysis of learning in Artificial Neural Systems (ANS's). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANS's is then described, and compared with classical Machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified, and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.

  16. Investigations of Escherichia coli promoter sequences with artificial neural networks: new signals discovered upstream of the transcriptional startpoint

    DEFF Research Database (Denmark)

    Pedersen, Anders Gorm; Engelbrecht, Jacob

    1995-01-01

    We present a novel method for using the learning ability of a neural network as a measure of information in local regions of input data. Using the method to analyze Escherichia coli promoters, we discover all previously described signals, and furthermore find new signals that are regularly spaced...

  17. Machine Learning Topological Invariants with Neural Networks

    Science.gov (United States)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

    In this Letter we supervisedly train neural networks to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results show a remarkable success that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
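The discrete winding number that the network in record 17 is found to learn can be computed directly from a Hamiltonian's momentum-space components. A sketch for an SSH-type chiral model h(k) = (t1 + t2 cos k, t2 sin k) (the Letter's exact model family and network inputs are not reproduced here):

```python
import numpy as np

# Discrete winding number of h(k) = (h_x, h_y) for a 1D two-band chiral model,
# using the SSH chain (intra/inter-cell hoppings t1, t2) as a concrete example.
def winding_number(t1, t2, n_k=2000):
    k = np.linspace(0.0, 2.0 * np.pi, n_k + 1)
    hx = t1 + t2 * np.cos(k)
    hy = t2 * np.sin(k)
    theta = np.unwrap(np.arctan2(hy, hx))      # continuous phase along the loop
    return round((theta[-1] - theta[0]) / (2.0 * np.pi))

print(winding_number(1.0, 2.0))  # topological phase (t2 > t1): winding 1
print(winding_number(2.0, 1.0))  # trivial phase (t2 < t1): winding 0
```

The Letter's finding is that a trained network effectively rediscovers a discretized version of this phase-accumulation formula from local Hamiltonian inputs.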

  18. Continual Learning through Evolvable Neural Turing Machines

    DEFF Research Database (Denmark)

    Lüders, Benno; Schläger, Mikkel; Risi, Sebastian

    2016-01-01

    Continual learning, i.e. the ability to sequentially learn tasks without catastrophic forgetting of previously learned ones, is an important open challenge in machine learning. In this paper we take a step in this direction by showing that the recently proposed Evolving Neural Turing Machine (ENTM...

  19. Neural Behavior Chain Learning of Mobile Robot Actions

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2012-01-01

Full Text Available This paper presents a visual/motor behavior learning approach based on neural networks. We propose the Behavior Chain Model (BCM) as a way of learning behavior. Our behavior-based system's task is a mobile robot detecting a target and driving/acting towards it. First, the mapping relations between the image feature domain of the object and the robot action domain are derived. Second, a multilayer neural network is used for offline learning of the mapping relations. This learning structure, built through the neural network training process, represents a connection between the visual perceptions and the motor sequence of actions needed to grip a target. Last, using behavior learning through an observed action chain, we can predict mobile robot behavior for a variety of similar tasks in similar environments. Prediction results suggest that the methodology is adequate and could serve as a basis for designing various forms of mobile robot behavior assistance.

  20. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

The generalized classifier neural network has been introduced as an efficient classifier among others. Unless the initial smoothing-parameter value is close to the optimal one, however, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error. Minimizing this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic-learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative take continuous values. This makes it possible for the proposed learning method to exploit the fast convergence of the logarithmic cost function. Owing to this fast convergence, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also improve by up to 60%. According to the test results, the proposed method not only solves the time-requirement problem of the generalized classifier neural network but may also improve classification accuracy, making it an efficient way of reducing the network's training-time requirements. Copyright © 2014 Elsevier Ltd. All rights reserved.
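Why a logarithmic cost can converge faster than squared error is a standard observation that record 20's result echoes: for a sigmoid-type unit, the squared-error gradient carries a factor y(1-y) that vanishes in saturation, while the logarithmic (cross-entropy) gradient does not. A small numerical comparison (the paper's exact cost for the generalized classifier network is not reproduced here):

```python
import numpy as np

# Compare the update signal from the two costs at a saturated, badly wrong
# sigmoid output. Squared error: 0.5*(y - t)^2; logarithmic (cross-entropy):
# -t*log(y) - (1 - t)*log(1 - y).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z, t = 6.0, 0.0          # saturated output, target 0
y = sigmoid(z)

grad_squared = (y - t) * y * (1.0 - y)   # d/dz of the squared-error cost
grad_log = y - t                         # d/dz of the logarithmic cost

print(grad_squared, grad_log)  # tiny vs. order-one update signal
```

The vanishing y(1-y) factor is what stretches training time under squared error when initialization is poor, which matches the paper's sensitivity to the initial smoothing parameter.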

  1. Learning-induced neural plasticity of speech processing before birth.

    Science.gov (United States)

    Partanen, Eino; Kujala, Teija; Näätänen, Risto; Liitola, Auli; Sambeth, Anke; Huotilainen, Minna

    2013-09-10

    Learning, the foundation of adaptive and intelligent behavior, is based on plastic changes in neural assemblies, reflected by the modulation of electric brain responses. In infancy, auditory learning implicates the formation and strengthening of neural long-term memory traces, improving discrimination skills, in particular those forming the prerequisites for speech perception and understanding. Although previous behavioral observations show that newborns react differentially to unfamiliar sounds vs. familiar sound material that they were exposed to as fetuses, the neural basis of fetal learning has not thus far been investigated. Here we demonstrate direct neural correlates of human fetal learning of speech-like auditory stimuli. We presented variants of words to fetuses; unlike infants with no exposure to these stimuli, the exposed fetuses showed enhanced brain activity (mismatch responses) in response to pitch changes for the trained variants after birth. Furthermore, a significant correlation existed between the amount of prenatal exposure and brain activity, with greater activity being associated with a higher amount of prenatal speech exposure. Moreover, the learning effect was generalized to other types of similar speech sounds not included in the training material. Consequently, our results indicate neural commitment specifically tuned to the speech features heard before birth and their memory representations.

  2. Embedding responses in spontaneous neural activity shaped through sequential learning.

    Directory of Open Access Journals (Sweden)

    Tomoki Kurikawa

Full Text Available Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using a simple Hebbian-type learning rule, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through the learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a similar manner to those designed in

  3. Learning of N-layers neural network

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2005-01-01

Full Text Available In the last decade we can observe an increasing number of applications based on Artificial Intelligence that are designed to solve problems from different areas of human activity. The reason for the interest in these technologies is that classical solution methods either do not exist or are not sufficiently robust. They are often used in applications such as Business Intelligence, which make it possible to obtain useful information for high-quality decision-making and to increase competitive advantage. One of the most widespread tools of Artificial Intelligence is the artificial neural network. Its great advantage is its relative simplicity and its capacity for self-learning from a set of pattern situations. The algorithm most commonly used for the learning phase is back-propagation of error (BPE). The basis of BPE is the minimization of an error function representing the sum of squared errors on the outputs of the neural net, over all patterns of the learning set. However, when performing BPE for the first time, one finds that the handling of the learning factor must be complemented by a suitable method. The stability of the learning process and the rate of convergence depend on the selected method. In the article two functions are derived: one for managing the learning process when the error function value is relatively large, and a second for when the error function approaches the global minimum. The aim of the article is to introduce the BPE algorithm in compact matrix form for multilayer neural networks, to derive the learning-factor handling method, and to present the results.
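The learning-factor handling that this record addresses can be illustrated with the simplest scheme of the kind, a "bold driver"-style adjustment on gradient descent: grow the factor while the error keeps falling, shrink it sharply when the error rises. This is an illustration of the general idea only; the article derives its own two handling functions for the large-error and near-minimum regimes:

```python
import numpy as np

# Bold-driver learning-factor adjustment on a toy error function
# (minimum at w = [1, -2]; the surface and constants are illustrative).
def loss(w):
    return 0.5 * ((w[0] - 1.0) ** 2 + 4.0 * (w[1] + 2.0) ** 2)

def grad(w):
    return np.array([w[0] - 1.0, 4.0 * (w[1] + 2.0)])

w, eta = np.zeros(2), 0.1
prev = loss(w)
for _ in range(100):
    w_new = w - eta * grad(w)
    if loss(w_new) < prev:            # error fell: accept step, grow the factor
        w, prev, eta = w_new, loss(w_new), eta * 1.05
    else:                             # error rose: reject step, shrink the factor
        eta *= 0.5

print(w)  # approaches [1, -2]
```

The accept/reject guard keeps the error monotonically decreasing, which is exactly the stability property the abstract says depends on the chosen handling method.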

  4. Windowed active sampling for reliable neural learning

    NARCIS (Netherlands)

    Barakova, E.I; Spaanenburg, L

    The composition of the example set has a major impact on the quality of neural learning. The popular approach is focused on extensive pre-processing to bridge the representation gap between process measurement and neural presentation. In contrast, windowed active sampling attempts to solve these

  5. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Zenke, Friedemann; Ganguli, Surya

    2018-04-13

    A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
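The surrogate-gradient idea at the core of SuperSpike can be shown in a few lines: the spike nonlinearity (a hard threshold on the membrane potential u) has zero derivative almost everywhere, so a smooth surrogate is substituted in the backward pass. The fast-sigmoid-shaped surrogate below follows the general form used by Zenke and Ganguli; the beta and theta values are illustrative:

```python
import numpy as np

# Forward pass: hard threshold on the membrane potential.
def spike(u, theta=1.0):
    return (u >= theta).astype(float)

# Backward pass: smooth surrogate derivative, peaked at the threshold and
# nonzero in a neighborhood around it.
def surrogate_grad(u, theta=1.0, beta=10.0):
    return 1.0 / (1.0 + beta * np.abs(u - theta)) ** 2

u = np.array([0.0, 0.9, 1.0, 1.1, 2.0])
print(spike(u))           # 0/1 spike outputs
print(surrogate_grad(u))  # usable gradient signal near threshold
```

Because the surrogate is nonzero near threshold, error signals can propagate through multilayer spiking networks that would otherwise receive no gradient at all.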

  6. Learning-parameter adjustment in neural networks

    Science.gov (United States)

    Heskes, Tom M.; Kappen, Bert

    1992-06-01

    We present a learning-parameter adjustment algorithm, valid for a large class of learning rules in neural-network literature. The algorithm follows directly from a consideration of the statistics of the weights in the network. The characteristic behavior of the algorithm is calculated, both in a fixed and a changing environment. A simple example, Widrow-Hoff learning for statistical classification, serves as an illustration.
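Record 6's illustration, Widrow-Hoff learning with an adjusted learning parameter, can be sketched with a toy decaying schedule. The paper's actual adjustment algorithm is derived from the statistics of the weights and is more refined; the schedule and constants below are illustrative assumptions:

```python
import numpy as np

# Widrow-Hoff (LMS) learning of an unknown linear teacher, with a simple
# decaying learning parameter standing in for a principled adjustment rule.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])   # unknown teacher weights
w = np.zeros(2)

for t in range(1, 3001):
    x = rng.normal(size=2)
    y = w_true @ x + 0.01 * rng.normal()   # noisy teacher output
    eta = 1.0 / (5 + t)                    # decaying learning parameter
    w += eta * (y - w @ x) * x             # Widrow-Hoff update

print(w)  # close to [2, -1]
```

A fixed parameter trades tracking speed against steady-state noise; a decaying (or adaptively adjusted) one gets fast initial learning and low final variance, the trade-off the paper analyzes for fixed versus changing environments.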

  7. Online neural monitoring of statistical learning.

    Science.gov (United States)

    Batterink, Laura J; Paller, Ken A

    2017-05-01

    The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the reaction time (RT) task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
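
    The entrainment measure can be approximated, in spirit, by comparing spectral power at the word rate with power at the syllable rate. The sketch below uses a synthetic signal and assumed rates, not the study's EEG pipeline:

```python
import numpy as np

fs = 100.0                   # sampling rate in Hz (hypothetical)
syll_rate = 3.3              # syllables per second
word_rate = syll_rate / 3    # trisyllabic words -> 1.1 Hz
t = np.arange(0, 60, 1 / fs)

# Synthetic "entrained" signal: a strong word-rate rhythm, a weaker
# syllable-rate component, and noise.
rng = np.random.default_rng(1)
sig = (2.0 * np.sin(2 * np.pi * word_rate * t)
       + 0.5 * np.sin(2 * np.pi * syll_rate * t)
       + 0.3 * rng.normal(size=t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(sig)) ** 2

def power_at(f):
    return power[np.argmin(np.abs(freqs - f))]

# Entrainment index: word-rate power relative to syllable-rate power.
print(power_at(word_rate) / power_at(syll_rate) > 1.0)  # True here
```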

  8. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model’s frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop
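
    The reward-modulation idea can be reduced to a toy synapse: spike pairings tag the synapse with an eligibility trace, and a later dopamine signal converts the decayed trace into a weight change. All constants below are illustrative, not the paper's model:

```python
import numpy as np

class Synapse:
    """Toy reward-modulated STDP synapse with an eligibility trace."""

    def __init__(self):
        self.w = 0.5      # synaptic weight
        self.elig = 0.0   # eligibility trace

    def on_pairing(self, dt):
        # Pre-before-post pairing (dt > 0) tags the synapse but does
        # not change the weight yet.
        self.elig += 0.1 * np.exp(-dt / 0.02)

    def on_reward(self, dopamine, delay, tau=1.0):
        # Dopamine converts the (decayed) trace into a weight change.
        self.w += dopamine * self.elig * np.exp(-delay / tau)
        self.elig = 0.0

syn = Synapse()
syn.on_pairing(0.01)     # causal spike pairing
syn.on_reward(1.0, 0.5)  # reward arrives half a second later
print(syn.w > 0.5)       # True: rewarded pairing strengthened the synapse
```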

  9. Development switch in neural circuitry underlying odor-malaise learning.

    Science.gov (United States)

    Shionoya, Kiseko; Moriceau, Stephanie; Lunday, Lauren; Miner, Cathrine; Roth, Tania L; Sullivan, Regina M

    2006-01-01

    Fetal and infant rats can learn to avoid odors paired with illness before development of brain areas supporting this learning in adults, suggesting an alternate learning circuit. Here we begin to document the transition from the infant to adult neural circuit underlying odor-malaise avoidance learning using LiCl (0.3 M; 1% of body weight, ip) and a 30-min peppermint-odor exposure. Conditioning groups included: Paired odor-LiCl, Paired odor-LiCl-Nursing, LiCl, and odor-saline. Results showed that Paired LiCl-odor conditioning induced a learned odor aversion in postnatal day (PN) 7, 12, and 23 pups. Odor-LiCl Paired Nursing induced a learned odor preference in PN7 and PN12 pups but blocked learning in PN23 pups. 14C 2-deoxyglucose (2-DG) autoradiography indicated enhanced olfactory bulb activity in PN7 and PN12 pups with odor preference and avoidance learning. The odor aversion in weanling aged (PN23) pups resulted in enhanced amygdala activity in Paired odor-LiCl pups, but not if they were nursing. Thus, the neural circuit supporting malaise-induced aversions changes over development, indicating that similar infant and adult-learned behaviors may have distinct neural circuits.

  10. Parameter diagnostics of phases and phase transition learning by neural networks

    Science.gov (United States)

    Suchsland, Philippe; Wessel, Stefan

    2018-05-01

    We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and for identifying any underlying physical quantities, it is feasible to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusing scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully-connected and convolutional neural networks for the two-dimensional Ising model with extended domain wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusing scheme and convolutional neural networks trained on bare spin configurations to the case of preprocessed samples with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.
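
    The paper's point that shallow-network weights encode physical quantities can be illustrated with a toy stand-in for the Ising model: a hidden unit with uniform weights simply measures magnetization, which already separates ordered from disordered configurations. The sampling below is a crude caricature, not Monte Carlo data:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 16  # linear lattice size

def sample(phase):
    """Crude stand-ins for Ising configurations: low-temperature
    configs are mostly aligned, high-temperature configs are random."""
    if phase == "ordered":
        return np.where(rng.random(L * L) < 0.9, 1, -1)
    return rng.choice([-1, 1], size=L * L)

# A hidden unit with uniform weights computes the magnetization, the
# order parameter shallow networks tend to rediscover in their weights.
w = np.ones(L * L) / (L * L)
ordered = [abs(w @ sample("ordered")) for _ in range(100)]
disordered = [abs(w @ sample("disordered")) for _ in range(100)]
print(np.mean(ordered) > np.mean(disordered))  # True
```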

  11. Neural modularity helps organisms evolve to learn new skills without forgetting old skills.

    Science.gov (United States)

    Ellefsen, Kai Olav; Mouret, Jean-Baptiste; Clune, Jeff

    2015-04-01

    A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to
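
    The connection-cost technique amounts to subtracting a wiring penalty from the task objective during evolution; a schematic fitness function (the cost coefficient is arbitrary):

```python
import numpy as np

def fitness(task_performance, weights, cost=0.01):
    """Connection-cost fitness: reward task performance, penalize every
    nonzero connection, nudging evolution toward sparse, modular nets."""
    return task_performance - cost * np.count_nonzero(weights)

dense = np.ones((8, 8))   # 64 connections
sparse = np.eye(8)        # 8 connections
# At equal task performance, the sparser network has higher fitness.
print(fitness(1.0, sparse) > fitness(1.0, dense))  # True
```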

  15. Vicarious neural processing of outcomes during observational learning.

    Science.gov (United States)

    Monfardini, Elisabetta; Gazzola, Valeria; Boussaoud, Driss; Brovelli, Andrea; Keysers, Christian; Wicker, Bruno

    2013-01-01

    Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on the ability to map the actions of others onto our own, process outcomes, and combine these sources of information. Here, we combined newly developed experimental tasks and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms that govern such observational learning. Results show that the neural systems involved in individual trial-and-error learning and in action observation and execution both participate in observational learning. In addition, we identified brain areas that specifically activate for others' incorrect outcomes during learning in the posterior medial frontal cortex (pMFC), the anterior insula and the posterior superior temporal sulcus (pSTS).

  16. Learning and adaptation: neural and behavioural mechanisms behind behaviour change

    Science.gov (United States)

    Lowe, Robert; Sandamirskaya, Yulia

    2018-01-01

    This special issue presents perspectives on learning and adaptation as they apply to a number of cognitive phenomena including pupil dilation in humans and attention in robots, natural language acquisition and production in embodied agents (robots), human-robot game play and social interaction, neural-dynamic modelling of active perception and neural-dynamic modelling of infant development in the Piagetian A-not-B task. The aim of the special issue, through its contributions, is to highlight some of the critical neural-dynamic and behavioural aspects of learning as it grounds adaptive responses in robotic- and neural-dynamic systems.

  17. Comparison between extreme learning machine and wavelet neural networks in data classification

    Science.gov (United States)

    Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2017-03-01

    Extreme Learning Machine is a well-known algorithm in the field of machine learning. It is a feed-forward neural network with a single hidden layer, and it is an extremely fast learning algorithm with good generalization performance. In this paper, we compare the Extreme Learning Machine with wavelet neural networks, a widely used alternative. We evaluated each technique on six benchmark data sets: Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition, and Iris Plant. Experimental results show that both extreme learning machines and wavelet neural networks achieve good results.
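
    The speed of the Extreme Learning Machine comes from its training procedure: input-to-hidden weights are drawn at random and only the output weights are solved, in closed form, with a pseudoinverse. A minimal sketch on XOR (hidden-layer size and activation are typical choices, not mandated by the method):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=50):
    """ELM training: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y     # closed-form solve, no iteration
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy problem: XOR, which a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])
pred = elm_predict(X, *elm_fit(X, Y))
print(np.round(pred).astype(int))  # → [0 1 1 0]
```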

  18. Boltzmann learning of parameters in cellular neural networks

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    1992-01-01

    The use of Bayesian methods to design cellular neural networks for signal processing tasks and the Boltzmann machine learning rule for parameter estimation is discussed. The learning rule can be used for models with hidden units, or for completely unsupervised learning. The latter is exemplified...

  19. IGF-II Promotes Stemness of Neural Restricted Precursors

    Science.gov (United States)

    Ziegler, Amber N.; Schneider, Joel S.; Qin, Mei; Tyler, William A.; Pintar, John E.; Fraidenraich, Diego; Wood, Teresa L.; Levison, Steven W.

    2016-01-01

    Insulin-like growth factor (IGF)-I and IGF-II regulate brain development and growth through the IGF type 1 receptor (IGF-1R). Less appreciated is that IGF-II, but not IGF-I, activates a splice variant of the insulin receptor (IR) known as IR-A. We hypothesized that IGF-II exerts distinct effects from IGF-I on neural stem/progenitor cells (NSPs) via its interaction with IR-A. Immunofluorescence revealed high IGF-II in the medial region of the subventricular zone (SVZ) comprising the neural stem cell niche, with IGF-II mRNA predominant in the adjacent choroid plexus. The IGF-1R and the IR isoforms were differentially expressed with IR-A predominant in the medial SVZ, whereas the IGF-1R was more abundant laterally. Similarly, IR-A was more highly expressed by NSPs, whereas the IGF-1R was more highly expressed by lineage restricted cells. In vitro, IGF-II was more potent in promoting NSP expansion than either IGF-I or standard growth medium. Limiting dilution and differentiation assays revealed that IGF-II was superior to IGF-I in promoting stemness. In vivo, NSPs propagated in IGF-II migrated to and took up residence in periventricular niches while IGF-I-treated NSPs predominantly colonized white matter. Knockdown of IR or IGF-1R using shRNAs supported the conclusion that the IGF-1R promotes progenitor proliferation, whereas the IR is important for self-renewal. Q-PCR revealed that IGF-II increased Oct4, Sox1, and FABP7 mRNA levels in NSPs. Our data support the conclusion that IGF-II promotes the self-renewal of neural stem/progenitors via the IR. By contrast, IGF-1R functions as a mitogenic receptor to increase precursor abundance. PMID:22593020

  20. Neural mechanisms of reinforcement learning in unmedicated patients with major depressive disorder.

    Science.gov (United States)

    Rothkirch, Marcus; Tonn, Jonas; Köhler, Stephan; Sterzer, Philipp

    2017-04-01

    According to current concepts, major depressive disorder is strongly related to dysfunctional neural processing of motivational information, entailing impairments in reinforcement learning. While computational modelling can reveal the precise nature of neural learning signals, it has not been used to study learning-related neural dysfunctions in unmedicated patients with major depressive disorder so far. We thus aimed at comparing the neural coding of reward and punishment prediction errors, representing indicators of neural learning-related processes, between unmedicated patients with major depressive disorder and healthy participants. To this end, a group of unmedicated patients with major depressive disorder (n = 28) and a group of age- and sex-matched healthy control participants (n = 30) completed an instrumental learning task involving monetary gains and losses during functional magnetic resonance imaging. The two groups did not differ in their learning performance. Patients and control participants showed the same level of prediction error-related activity in the ventral striatum and the anterior insula. In contrast, neural coding of reward prediction errors in the medial orbitofrontal cortex was reduced in patients. Moreover, neural reward prediction error signals in the medial orbitofrontal cortex and ventral striatum showed negative correlations with anhedonia severity. Using a standard instrumental learning paradigm we found no evidence for an overall impairment of reinforcement learning in medication-free patients with major depressive disorder. Importantly, however, the attenuated neural coding of reward in the medial orbitofrontal cortex and the relation between anhedonia and reduced reward prediction error-signalling in the medial orbitofrontal cortex and ventral striatum likely reflect an impairment in experiencing pleasure from rewarding events as a key mechanism of anhedonia in major depressive disorder. © The Author (2017). Published by Oxford
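
    The reward prediction error modelled in studies like this one follows the standard delta-rule form δ = r − V, with the value estimate updated by a learning rate times δ; a minimal sketch (reward probability and learning rate are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1   # learning rate
V = 0.0       # expected value of the chosen option

# The option pays out 1 with probability 0.8; the delta-rule value
# estimate settles near 0.8 and the prediction errors shrink.
for _ in range(1000):
    r = float(rng.random() < 0.8)
    delta = r - V          # reward prediction error
    V += alpha * delta

print(V)  # hovers near 0.8
```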

  1. Fastest learning in small-world neural networks

    International Nuclear Information System (INIS)

    Simard, D.; Nadeau, L.; Kroeger, H.

    2005-01-01

    We investigate supervised learning in neural networks. We consider a multi-layered feed-forward network with back propagation. We find that the network of small-world connectivity reduces the learning error and learning time when compared to the networks of regular or random connectivity. Our study has potential applications in the domain of data-mining, image processing, speech recognition, and pattern recognition
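
    Small-world connectivity is conventionally generated Watts-Strogatz style: start from a regular ring lattice and rewire each edge with a small probability p. A sketch of the construction (parameters are illustrative; the paper's exact topology may differ):

```python
import numpy as np

def small_world_edges(n=20, k=4, p=0.1, seed=0):
    """Watts-Strogatz-style construction: ring lattice plus rewiring."""
    rng = np.random.default_rng(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))   # regular ring connections
    rewired = set()
    for (a, b) in edges:
        if rng.random() < p:              # rewire with probability p
            b = int(rng.integers(n))
            while b == a or (a, b) in rewired or (a, b) in edges:
                b = int(rng.integers(n))
        rewired.add((a, b))
    return rewired

net = small_world_edges()
print(len(net))  # → 40, i.e. n * k / 2: the edge count is preserved
```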

  2. Do neural nets learn statistical laws behind natural language?

    Directory of Open Access Journals (Sweden)

    Shuntaro Takahashi

    Full Text Available The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.
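
    Zipf's law, one of the two properties the model reproduces, states that a word's frequency falls off roughly as the inverse of its rank; checking it on a corpus is just a matter of counting and sorting:

```python
from collections import Counter

text = ("the cat sat on the mat the dog saw the cat "
        "and the dog sat on the mat").split()
freqs = sorted(Counter(text).values(), reverse=True)

# Zipf's law: frequency falls off roughly as 1/rank, so a handful of
# very common words dominates the rank-frequency list.
print(freqs)  # → [6, 2, 2, 2, 2, 2, 1, 1]
```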

  3. Learning in neural networks based on a generalized fluctuation theorem

    Science.gov (United States)

    Hayakawa, Takashi; Aoyagi, Toshio

    2015-11-01

    Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universal laws describing the fluctuation they exhibit and the information they possess have recently been discovered. These laws are termed fluctuation theorems. In the present study, we formulate a theory of learning in neural networks bidirectionally interacting with environments based on the principle of information maximization. Our formulation begins with the introduction of a generalized fluctuation theorem, employing an interpretation appropriate for the present application, which differs from the original thermodynamic interpretation. We analytically and numerically demonstrate that the learning mechanism presented in our theory allows neural networks to efficiently explore their environments and optimally encode information about them.

  4. Evolving Neural Turing Machines for Reward-based Learning

    DEFF Research Database (Denmark)

    Greve, Rasmus Boll; Jacobsen, Emil Juul; Risi, Sebastian

    2016-01-01

    An unsolved problem in neuroevolution (NE) is to evolve artificial neural networks (ANN) that can store and use information to change their behavior online. While plastic neural networks have shown promise in this context, they have difficulties retaining information over longer periods of time...... version of the double T-Maze, a complex reinforcement-like learning problem. In the T-Maze learning task the agent uses the memory bank to display adaptive behavior that normally requires a plastic ANN, thereby suggesting a complementary and effective mechanism for adaptive behavior in NE....

  5. Finite time convergent learning law for continuous neural networks.

    Science.gov (United States)

    Chairez, Isaac

    2014-02-01

    This paper addresses the design of a discontinuous finite time convergent learning law for neural networks with continuous dynamics. The neural network was used here to obtain a non-parametric model for uncertain systems described by a set of ordinary differential equations. The source of uncertainties was the presence of some external perturbations and poor knowledge of the nonlinear function describing the system dynamics. A new adaptive algorithm based on discontinuous algorithms was used to adjust the weights of the neural network. The adaptive algorithm was derived by means of a non-standard Lyapunov function that is lower semi-continuous and differentiable in almost the whole space. A compensator term was included in the identifier to reject some specific perturbations using a nonlinear robust algorithm. Two numerical examples demonstrated the improvements achieved by the learning algorithm introduced in this paper compared to classical schemes with continuous learning methods. The first one dealt with a benchmark problem used in the paper to explain how the discontinuous learning law works. The second one used the methane production model to show the benefits in engineering applications of the learning law proposed in this paper. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Neural correlates of contextual cueing are modulated by explicit learning.

    Science.gov (United States)

    Westerberg, Carmen E; Miller, Brennan B; Reber, Paul J; Cohen, Neal J; Paller, Ken A

    2011-10-01

    Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer's knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be impaired following damage to the medial temporal lobe. Here we investigated neural correlates of contextual cueing and explicit scene memory in two participant groups. Only one group was explicitly instructed about scene repetition. Participants viewed a sequence of complex scenes that depicted a landscape with five abstract geometric objects. Superimposed on each object was a letter T or L rotated left or right by 90°. Participants responded according to the target letter (T) orientation. Responses were highly accurate for all scenes. Response speeds were faster for repeated versus novel scenes. The magnitude of this contextual cueing did not differ between the two groups. Also, in both groups repeated scenes yielded reduced hemodynamic activation compared with novel scenes in several regions involved in visual perception and attention, and reductions in some of these areas were correlated with response-time facilitation. In the group given instructions about scene repetition, recognition memory for scenes was superior and was accompanied by medial temporal and more anterior activation. Thus, strategic factors can promote explicit memorization of visual scene information, which appears to engage additional neural processing beyond what is required for implicit learning of object configurations and target locations in a scene. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Neural Monkey: An Open-source Tool for Sequence Learning

    Directory of Open Access Journals (Sweden)

    Helcl Jindřich

    2017-04-01

    Full Text Available In this paper, we announce the development of Neural Monkey – an open-source neural machine translation (NMT) and general sequence-to-sequence learning system built over the TensorFlow machine learning library. The system provides a high-level API tailored for fast prototyping of complex architectures with multiple sequence encoders and decoders. Models’ overall architecture is specified in easy-to-read configuration files. The long-term goal of the Neural Monkey project is to create and maintain a growing collection of implementations of recently proposed components or methods, and therefore it is designed to be easily extensible. Trained models can be deployed either for batch data processing or as a web service. In the presented paper, we describe the design of the system and introduce the reader to running experiments using Neural Monkey.

  8. Neural regeneration protein is a novel chemoattractive and neuronal survival-promoting factor

    International Nuclear Information System (INIS)

    Gorba, Thorsten; Bradoo, Privahini; Antonic, Ana; Marvin, Keith; Liu, Dong-Xu; Lobie, Peter E.; Reymann, Klaus G.; Gluckman, Peter D.; Sieg, Frank

    2006-01-01

    Neurogenesis and neuronal migration are the prerequisites for the development of the central nervous system. We have identified a novel rodent gene encoding for a neural regeneration protein (NRP) with an activity spectrum similar to the chemokine stromal-derived factor (SDF)-1, but with much greater potency. The Nrp gene is encoded as a forward frameshift to the hypothetical alkylated DNA repair protein AlkB. The predicted protein sequence of NRP contains domains with homology to survival-promoting peptide (SPP) and the trefoil protein TFF-1. The Nrp gene is first expressed in neural stem cells and expression continues in glial lineages. Recombinant NRP and NRP-derived peptides possess biological activities including induction of neural migration and proliferation, promotion of neuronal survival, enhancement of neurite outgrowth and promotion of neuronal differentiation from neural stem cells. NRP exerts its effect on neuronal survival by phosphorylation of the ERK1/2 and Akt kinases, whereas NRP stimulation of neural migration depends solely on p44/42 MAP kinase activity. Taken together, the expression profile of Nrp, the existence in its predicted protein structure of domains with similarities to known neuroprotective and migration-inducing factors and the high potency of NRP-derived synthetic peptides acting in femtomolar concentrations suggest it to be a novel gene of relevance in cellular and developmental neurobiology

  9. Genetic learning in rule-based and neural systems

    Science.gov (United States)

    Smith, Robert E.

    1993-01-01

The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GA's) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GA's bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GA's have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GA's. GA's are then related to a class of rule-based machine learning systems called learning classifier systems (LCS's). An LCS implements a low-level production-system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GA's in ecological search problems that arise in neural and fuzzy systems.
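
As a minimal illustration of the GA mechanics described here (selection, crossover, mutation applied to network parameters), the sketch below evolves the weights of a tiny feedforward network on XOR. All sizes, rates, and the task itself are arbitrary choices for demonstration, not details from the presentation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: evolve the 9 weights of a tiny 2-2-1 tanh network to fit XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    # Unpack a flat genome: 2x2 hidden weights, 2 hidden biases,
    # 2 output weights, 1 output bias.
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    return np.tanh(np.tanh(x @ W1 + b1) @ W2 + b2)

def fitness(w):
    return -np.sum((forward(w, X) - y) ** 2)     # higher is better

POP, GENES = 60, 9
pop = rng.normal(0, 1, size=(POP, GENES))
for generation in range(400):
    fits = np.array([fitness(w) for w in pop])
    new_pop = []
    for _ in range(POP):
        # Tournament selection: the fitter of two random genomes is a parent.
        i, j = rng.integers(0, POP, 2)
        p1 = pop[i] if fits[i] > fits[j] else pop[j]
        i, j = rng.integers(0, POP, 2)
        p2 = pop[i] if fits[i] > fits[j] else pop[j]
        cut = rng.integers(1, GENES)              # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        child += rng.normal(0, 0.1, GENES)        # Gaussian mutation
        new_pop.append(child)
    pop = np.array(new_pop)

best = max(pop, key=fitness)
```

Note that the GA needs only fitness evaluations, not gradients, which is the "no derivative information required" property the abstract highlights.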

  10. Exploring the spatio-temporal neural basis of face learning

    Science.gov (United States)

    Yang, Ying; Xu, Yang; Jew, Carol A.; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.

    2017-01-01

    Humans are experts at face individuation. Although previous work has identified a network of face-sensitive regions and some of the temporal signatures of face processing, as yet, we do not have a clear understanding of how such face-sensitive regions support learning at different time points. To study the joint spatio-temporal neural basis of face learning, we trained subjects to categorize two groups of novel faces and recorded their neural responses using magnetoencephalography (MEG) throughout learning. A regression analysis of neural responses in face-sensitive regions against behavioral learning curves revealed significant correlations with learning in the majority of the face-sensitive regions in the face network, mostly between 150–250 ms, but also after 300 ms. However, the effect was smaller in nonventral regions (within the superior temporal areas and prefrontal cortex) than that in the ventral regions (within the inferior occipital gyri (IOG), midfusiform gyri (mFUS) and anterior temporal lobes). A multivariate discriminant analysis also revealed that IOG and mFUS, which showed strong correlation effects with learning, exhibited significant discriminability between the two face categories at different time points both between 150–250 ms and after 300 ms. In contrast, the nonventral face-sensitive regions, where correlation effects with learning were smaller, did exhibit some significant discriminability, but mainly after 300 ms. In sum, our findings indicate that early and recurring temporal components arising from ventral face-sensitive regions are critically involved in learning new faces. PMID:28570739

  11. Spaced Learning Enhances Subsequent Recognition Memory by Reducing Neural Repetition Suppression

    Science.gov (United States)

    Xue, Gui; Mei, Leilei; Chen, Chuansheng; Lu, Zhong-Lin; Poldrack, Russell; Dong, Qi

    2011-01-01

    Spaced learning usually leads to better recognition memory as compared with massed learning, yet the underlying neural mechanisms remain elusive. One open question is whether the spacing effect is achieved by reducing neural repetition suppression. In this fMRI study, participants were scanned while intentionally memorizing 120 novel faces, half…

  12. Thermodynamic efficiency of learning a rule in neural networks

    Science.gov (United States)

    Goldt, Sebastian; Seifert, Udo

    2017-11-01

    Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.
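
The teacher-student setup analysed above can be sketched numerically: a student network learns a teacher's binary classification rule from labeled examples, and the generalization error follows from the teacher-student overlap. Sizes and the choice of Hebbian updating are illustrative, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps = 500, 20000               # input dimension, number of examples

teacher = rng.normal(0, 1, N)       # the rule to be learned
teacher /= np.linalg.norm(teacher)
student = np.zeros(N)

for _ in range(steps):
    x = rng.normal(0, 1, N)
    label = np.sign(teacher @ x)            # teacher supplies the true label
    student += label * x / np.sqrt(N)       # Hebbian update: learn every example
    # Perceptron learning would instead update only on mistakes:
    #   if np.sign(student @ x) != label: student += label * x / np.sqrt(N)

# For Gaussian inputs, the generalization error (probability of
# disagreeing with the teacher on a new input) follows from the overlap R:
R = teacher @ student / np.linalg.norm(student)
eps = np.arccos(R) / np.pi
```

The thermodynamic analysis in the paper bounds how much such a rule can learn per unit of dissipated work; the sketch only reproduces the learning dynamics side.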

  13. Random neural Q-learning for obstacle avoidance of a mobile robot in unknown environments

    Directory of Open Access Journals (Sweden)

    Jing Yang

    2016-07-01

Full Text Available The article presents a random neural Q-learning strategy for the obstacle avoidance problem of an autonomous mobile robot in unknown environments. In the proposed strategy, two independent modules, namely, avoidance without considering the target and goal-seeking without considering obstacles, are first trained using the proposed random neural Q-learning algorithm to obtain their best control policies. Then, the two trained modules are combined based on a switching function to realize the obstacle avoidance in unknown environments. For the proposed random neural Q-learning algorithm, a single-hidden layer feedforward network is used to approximate the Q-function to estimate the Q-value. The parameters of the single-hidden layer feedforward network are modified using the recently proposed online sequential version of the extreme learning machine, where the parameters of the hidden nodes are assigned randomly and the training samples can arrive one by one. However, different from the original online sequential extreme learning machine algorithm, the initial output weights are estimated subject to a quadratic inequality constraint to improve the convergence speed. Finally, the simulation results demonstrate that the proposed random neural Q-learning strategy can successfully solve the obstacle avoidance problem. The proposed algorithm also achieves higher learning efficiency and better generalization ability than Q-learning based on the back-propagation method.
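
The function approximator at the core of this strategy can be sketched: a single hidden layer whose input weights are assigned randomly and kept fixed, with only the output weights trained. For brevity the sketch fits the output weights by batch ridge regression rather than the paper's online sequential update with an inequality constraint; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

class RandomNeuralQ:
    """Q-function approximator with a single hidden layer whose input
    weights are random and fixed (extreme-learning-machine style); only
    the output weights are trained, here by batch ridge regression."""

    def __init__(self, n_inputs, n_hidden, n_actions, reg=1e-3):
        self.W = rng.normal(0, 1, (n_inputs, n_hidden))   # random, fixed
        self.b = rng.normal(0, 1, n_hidden)
        self.beta = np.zeros((n_hidden, n_actions))       # learned weights
        self.reg = reg

    def features(self, s):
        return np.tanh(s @ self.W + self.b)

    def q_values(self, s):
        return self.features(s) @ self.beta

    def fit(self, states, actions, targets):
        # Ridge regression per action over the visited (state, action) pairs.
        H = self.features(states)
        for a in range(self.beta.shape[1]):
            m = actions == a
            if not m.any():
                continue
            A = H[m].T @ H[m] + self.reg * np.eye(H.shape[1])
            self.beta[:, a] = np.linalg.solve(A, H[m].T @ targets[m])

# Toy check: fit Q(s, a) toward a smooth synthetic target function.
S = rng.uniform(-1, 1, (200, 2))
acts = rng.integers(0, 3, 200)
tgt = np.sin(S[:, 0]) + 0.5 * acts
net = RandomNeuralQ(2, 50, 3)
net.fit(S, acts, tgt)
```

In the full strategy, `targets` would be the usual temporal-difference targets r + γ max Q(s', ·) computed from robot experience.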

  14. Three-Dimensional-Bioprinted Dopamine-Based Matrix for Promoting Neural Regeneration.

    Science.gov (United States)

    Zhou, Xuan; Cui, Haitao; Nowicki, Margaret; Miao, Shida; Lee, Se-Jun; Masood, Fahed; Harris, Brent T; Zhang, Lijie Grace

    2018-03-14

    Central nerve repair and regeneration remain challenging problems worldwide, largely because of the extremely weak inherent regenerative capacity and accompanying fibrosis of native nerves. Inadequate solutions to the unmet needs for clinical therapeutics encourage the development of novel strategies to promote nerve regeneration. Recently, 3D bioprinting techniques, as one of a set of valuable tissue engineering technologies, have shown great promise toward fabricating complex and customizable artificial tissue scaffolds. Gelatin methacrylate (GelMA) possesses excellent biocompatible and biodegradable properties because it contains many arginine-glycine-aspartic acids (RGD) and matrix metalloproteinase sequences. Dopamine (DA), as an essential neurotransmitter, has proven effective in regulating neuronal development and enhancing neurite outgrowth. In this study, GelMA-DA neural scaffolds with hierarchical structures were 3D-fabricated using our custom-designed stereolithography-based printer. DA was functionalized on GelMA to synthesize a biocompatible printable ink (GelMA-DA) for improving neural differentiation. Additionally, neural stem cells (NSCs) were employed as the primary cell source for these scaffolds because of their ability to terminally differentiate into a variety of cell types including neurons, astrocytes, and oligodendrocytes. The resultant GelMA-DA scaffolds exhibited a highly porous and interconnected 3D environment, which is favorable for supporting NSC growth. Confocal microscopy analysis of neural differentiation demonstrated that a distinct neural network was formed on the GelMA-DA scaffolds. In particular, the most significant improvements were the enhanced neuron gene expression of TUJ1 and MAP2. Overall, our results demonstrated that 3D-printed customizable GelMA-DA scaffolds have a positive role in promoting neural differentiation, which is promising for advancing nerve repair and regeneration in the future.

  15. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  16. Maximum entropy methods for extracting the learned features of deep neural networks.

    Science.gov (United States)

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.
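
The sampling idea can be sketched with a Metropolis chain: draw sequences from a distribution that rewards high network output while staying anchored near the input sequence. The scoring function and the distance-penalty form below are hypothetical stand-ins, not the authors' exact constraint formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

ALPHABET, L = 4, 20            # e.g. DNA letters A,C,G,T encoded as 0..3
SCORES = rng.normal(0, 1, (L, ALPHABET))

def network_score(seq):
    # Hypothetical stand-in for the learned network function f(seq):
    # a fixed random position-wise scoring of the sequence.
    return SCORES[np.arange(L), seq].sum()

def sample_maxent(anchor, n_steps=5000, lam=2.0, dist_penalty=0.5):
    """Metropolis sampling from p(s) proportional to
    exp(lam * f(s) - dist_penalty * hamming(s, anchor)): a maximum-entropy
    style distribution anchored at the input sequence and biased toward
    sequences the network scores highly."""
    def logp(s):
        return lam * network_score(s) - dist_penalty * np.sum(s != anchor)
    seq = anchor.copy()
    samples = []
    for _ in range(n_steps):
        prop = seq.copy()
        prop[rng.integers(L)] = rng.integers(ALPHABET)      # single-site move
        if np.log(rng.uniform()) < logp(prop) - logp(seq):  # Metropolis accept
            seq = prop
        samples.append(seq.copy())
    return np.array(samples)

anchor = rng.integers(0, ALPHABET, L)
samples = sample_maxent(anchor)
```

Positions where the samples consistently deviate from the anchor toward particular letters would correspond to the network-learned features (e.g. binding motifs) the paper extracts.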

  17. Dopamine prediction errors in reward learning and addiction: from theory to neural circuitry

    Science.gov (United States)

    Keiflin, Ronald; Janak, Patricia H.

    2015-01-01

Midbrain dopamine (DA) neurons are proposed to signal reward prediction error (RPE), a fundamental parameter in associative learning models. This RPE hypothesis provides a compelling theoretical framework for understanding DA function in reward learning and addiction. New studies support a causal role for DA-mediated RPE activity in promoting learning about natural reward; however, this question has not been explicitly tested in the context of drug addiction. In this review, we integrate theoretical models with experimental findings on the activity of DA systems, and on the causal role of specific neuronal projections and cell types, to provide a circuit-based framework for probing DA-RPE function in addiction. By examining error-encoding DA neurons in the neural network in which they are embedded, hypotheses regarding circuit-level adaptations that possibly contribute to pathological error-signaling and addiction can be formulated and tested. PMID:26494275

  18. Dopamine Prediction Errors in Reward Learning and Addiction: From Theory to Neural Circuitry.

    Science.gov (United States)

    Keiflin, Ronald; Janak, Patricia H

    2015-10-21

    Midbrain dopamine (DA) neurons are proposed to signal reward prediction error (RPE), a fundamental parameter in associative learning models. This RPE hypothesis provides a compelling theoretical framework for understanding DA function in reward learning and addiction. New studies support a causal role for DA-mediated RPE activity in promoting learning about natural reward; however, this question has not been explicitly tested in the context of drug addiction. In this review, we integrate theoretical models with experimental findings on the activity of DA systems, and on the causal role of specific neuronal projections and cell types, to provide a circuit-based framework for probing DA-RPE function in addiction. By examining error-encoding DA neurons in the neural network in which they are embedded, hypotheses regarding circuit-level adaptations that possibly contribute to pathological error signaling and addiction can be formulated and tested. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Learning drifting concepts with neural networks

    NARCIS (Netherlands)

    Biehl, Michael; Schwarze, Holm

    1993-01-01

    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using

  20. Temporal-pattern learning in neural models

    CERN Document Server

    Genís, Carme Torras

    1985-01-01

While the ability of animals to learn rhythms is an unquestionable fact, the underlying neurophysiological mechanisms are still no more than conjectures. This monograph explores the requirements of such mechanisms, reviews those previously proposed and postulates a new one based on a direct electric coding of stimulation frequencies. Experimental support for the option taken is provided both at the single neuron and neural network levels. More specifically, the material presented divides naturally into four parts: a description of the experimental and theoretical framework where this work becomes meaningful (Chapter 2), a detailed specification of the pacemaker neuron model proposed together with its validation through simulation (Chapter 3), an analytic study of the behavior of this model when submitted to rhythmic stimulation (Chapter 4) and a description of the neural network model proposed for learning, together with an analysis of the simulation results obtained when varying several factors r...

  1. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

Recently there has been continuously increasing interest in building up computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). The biologically inspired self-organized neural networks with neural plasticity can enhance the capability of computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multi-neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different forms of neural plasticity rules and understanding how structures and dynamics of neural networks shape the computational performance. In this paper, we propose a novel approach to develop the models of LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
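
The two plasticity rules combined in this model can be sketched in isolation. The pair-based STDP window and the rate-homeostatic IP update below are standard textbook forms with illustrative constants, not the paper's exact equations.

```python
import numpy as np

# Pair-based STDP window: potentiation when the presynaptic spike precedes
# the postsynaptic one, depression otherwise. Constants are illustrative.
A_PLUS, A_MINUS = 0.010, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0      # time constants in ms

def stdp_dw(delta_t):
    """Weight change for one spike pair, delta_t = t_post - t_pre (ms)."""
    if delta_t >= 0:
        return A_PLUS * np.exp(-delta_t / TAU_PLUS)    # pre before post: LTP
    return -A_MINUS * np.exp(delta_t / TAU_MINUS)      # post before pre: LTD

def ip_update(gain, rate, target_rate=5.0, eta=1e-3):
    """Intrinsic plasticity as a homeostatic gain rule: nudge a neuron's
    excitability so its firing rate drifts toward a moderate target level,
    the regulation role IP plays in the model above."""
    return gain + eta * (target_rate - rate)
```

STDP shapes the excitatory connectivity while IP keeps average activity moderate; the paper's point is that the interplay of the two produces better reservoirs than either alone.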

  2. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2016-09-14

The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including a statistical method (autoregressive integrated moving average); feedforward neural networks (time delay neural network and online sequential extreme learning machine); and recurrent neural networks (long short-term memory and echo-state networks) on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.

  3. Learning language with the wrong neural scaffolding: The cost of neural commitment to sounds.

    Directory of Open Access Journals (Sweden)

    Amy Sue Finn

    2013-11-01

Full Text Available Does tuning to one’s native language explain the sensitive period for language learning? We explore the idea that tuning to (or becoming more selective for) the properties of one’s native language could result in being less open (or plastic) for tuning to the properties of a new language. To explore how this might lead to the sensitive period for grammar learning, we ask if tuning to an earlier-learned aspect of language (sound structure) has an impact on the neural representation of a later-learned aspect (grammar). English-speaking adults learned one of two miniature artificial languages over 4 days in the lab. Compared to English, both languages had novel grammar, but only one was comprised of novel sounds. After learning a language, participants were scanned while judging the grammaticality of sentences. Judgments were performed for the newly learned language and English. Learners of the similar-sounds language recruited regions that overlapped more with English. Learners of the distinct-sounds language, however, recruited the Superior Temporal Gyrus (STG) to a greater extent, which was coactive with the Inferior Frontal Gyrus (IFG). Across learners, recruitment of IFG (but not STG) predicted both learning success in tests conducted prior to the scan and grammatical judgment ability during the scan. Data suggest that adults’ difficulty learning language, especially grammar, could be due, at least in part, to the neural commitments they have made to the lower level linguistic components of their native language.

  4. Learning language with the wrong neural scaffolding: the cost of neural commitment to sounds

    Science.gov (United States)

    Finn, Amy S.; Hudson Kam, Carla L.; Ettlinger, Marc; Vytlacil, Jason; D'Esposito, Mark

    2013-01-01

    Does tuning to one's native language explain the “sensitive period” for language learning? We explore the idea that tuning to (or becoming more selective for) the properties of one's native-language could result in being less open (or plastic) for tuning to the properties of a new language. To explore how this might lead to the sensitive period for grammar learning, we ask if tuning to an earlier-learned aspect of language (sound structure) has an impact on the neural representation of a later-learned aspect (grammar). English-speaking adults learned one of two miniature artificial languages (MALs) over 4 days in the lab. Compared to English, both languages had novel grammar, but only one was comprised of novel sounds. After learning a language, participants were scanned while judging the grammaticality of sentences. Judgments were performed for the newly learned language and English. Learners of the similar-sounds language recruited regions that overlapped more with English. Learners of the distinct-sounds language, however, recruited the Superior Temporal Gyrus (STG) to a greater extent, which was coactive with the Inferior Frontal Gyrus (IFG). Across learners, recruitment of IFG (but not STG) predicted both learning success in tests conducted prior to the scan and grammatical judgment ability during the scan. Data suggest that adults' difficulty learning language, especially grammar, could be due, at least in part, to the neural commitments they have made to the lower level linguistic components of their native language. PMID:24273497

  5. Collegewide Promotion of E-Learning/Active Learning and Faculty Development

    Science.gov (United States)

    Ogawa, Nobuyuki; Shimizu, Akira

    2016-01-01

    Japanese National Institutes of Technology have revealed a plan to strongly promote e-Learning and active learning under the common schematization of education in over 50 campuses nationwide. Our e-Learning and ICT-driven education practiced for more than fifteen years were highly evaluated, and is playing a leading role in promoting e-Learning…

  6. Deep Learning Neural Networks in Cybersecurity - Managing Malware with AI

    OpenAIRE

    Rayle, Keith

    2017-01-01

    There’s a lot of talk about the benefits of deep learning (neural networks) and how it’s the new electricity that will power us into the future. Medical diagnosis, computer vision and speech recognition are all examples of use-cases where neural networks are being applied in our everyday business environment. This begs the question…what are the uses of neural-network applications for cyber security? How does the AI process work when applying neural networks to detect malicious software bombar...

  7. Consensus-based distributed cooperative learning from closed-loop neural control systems.

    Science.gov (United States)

    Chen, Weisheng; Hua, Shaoyong; Zhang, Huaguang

    2015-02-01

    In this paper, the neural tracking problem is addressed for a group of uncertain nonlinear systems where the system structures are identical but the reference signals are different. This paper focuses on studying the learning capability of neural networks (NNs) during the control process. First, we propose a novel control scheme called distributed cooperative learning (DCL) control scheme, by establishing the communication topology among adaptive laws of NN weights to share their learned knowledge online. It is further proved that if the communication topology is undirected and connected, all estimated weights of NNs can converge to small neighborhoods around their optimal values over a domain consisting of the union of all state orbits. Second, as a corollary it is shown that the conclusion on the deterministic learning still holds in the decentralized adaptive neural control scheme where, however, the estimated weights of NNs just converge to small neighborhoods of the optimal values along their own state orbits. Thus, the learned controllers obtained by DCL scheme have the better generalization capability than ones obtained by decentralized learning method. A simulation example is provided to verify the effectiveness and advantages of the control schemes proposed in this paper.

  8. Neural-Fitted TD-Leaf Learning for Playing Othello With Structured Neural Networks

    NARCIS (Netherlands)

    van den Dries, Sjoerd; Wiering, Marco A.

    This paper describes a methodology for quickly learning to play games at a strong level. The methodology consists of a novel combination of three techniques, and a variety of experiments on the game of Othello demonstrates their usefulness. First, structures or topologies in neural network

  9. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    Science.gov (United States)

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on some concepts and principles in the quantum theory. By combining the quantum mechanism with the feed-forward neural network, we defined quantum hidden neurons and connected quantum weights, and used them as the fundamental information processing unit in a single-hidden-layer feed-forward neural network. The quantum neurons make a wide range of nonlinear functions serve as the activation functions in the hidden layer of the network, and the Grover search algorithm iteratively finds the optimal parameter setting, making very efficient neural network learning possible. The quantum neurons and weights, together with Grover-search-based learning, yield a novel and efficient neural network characterized by a reduced network size, highly efficient training, and promising future applications. Simulations investigating the performance of the proposed quantum network show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Learning sequential control in a Neural Blackboard Architecture for in situ concept reasoning

    NARCIS (Netherlands)

    van der Velde, Frank; van der Velde, Frank; Besold, Tarek R.; Lamb, Luis; Serafini, Luciano; Tabor, Whitney

    2016-01-01

    Simulations are presented and discussed of learning sequential control in a Neural Blackboard Architecture (NBA) for in situ concept-based reasoning. Sequential control is learned in a reservoir network, consisting of columns with neural circuits. This allows the reservoir to control the dynamics of

  11. Learning-induced pattern classification in a chaotic neural network

    International Nuclear Information System (INIS)

    Li, Yang; Zhu, Ping; Xie, Xiaoping; He, Guoguang; Aihara, Kazuyuki

    2012-01-01

In this Letter, we propose a Hebbian learning rule with passive forgetting (HLRPF) for use in a chaotic neural network (CNN). We then define indices based on the Euclidean distance to investigate the evolution of the weights in a simplified way. Numerical simulations demonstrate that, under suitable external stimulations, the CNN with the proposed HLRPF acts as a fuzzy-like pattern classifier that performs much better than an ordinary CNN. The results imply a relationship between learning and recognition. -- Highlights: ► Proposing a Hebbian learning rule with passive forgetting (HLRPF). ► Defining indices to investigate the evolution of the weights simply. ► The chaotic neural network with HLRPF acts as a fuzzy-like pattern classifier. ► The pattern classifier ability of the network is much improved.
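
A Hebbian rule with passive forgetting can be sketched in a few lines: an outer-product Hebbian term plus an exponential weight decay that slowly erases weights not reinforced by ongoing activity, driving repeated patterns toward a bounded imprint. The rate constants are illustrative, not the Letter's values, and the chaotic-network dynamics are omitted.

```python
import numpy as np

def hlrpf_step(W, x, y, eta=0.1, forget=0.02):
    """One step of a Hebbian learning rule with passive forgetting
    (HLRPF): W <- (1 - forget) * W + eta * y x^T. The decay term bounds
    the weights; without it the Hebbian term would grow without limit."""
    return (1 - forget) * W + eta * np.outer(y, x)

# Repeated presentation of one pattern imprints it in the weights, and
# the forgetting term saturates the imprint at (eta / forget) * x x^T.
rng = np.random.default_rng(3)
x = rng.choice([-1.0, 1.0], 8)       # a bipolar pattern
W = np.zeros((8, 8))
for _ in range(100):
    W = hlrpf_step(W, x, x)          # autoassociative presentation
```

After 100 presentations the weights have converged most of the way to the fixed point eta/forget = 5 times the pattern's outer product, illustrating how forgetting keeps learned structure bounded.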

  12. Bio-Inspired Neural Model for Learning Dynamic Models

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  13. Forecasting crude oil price with an EMD-based neural network ensemble learning paradigm

    International Nuclear Information System (INIS)

    Yu, Lean; Wang, Shouyang; Lai, Kin Keung

    2008-01-01

    In this study, an empirical mode decomposition (EMD) based neural network ensemble learning paradigm is proposed for world crude oil spot price forecasting. For this purpose, the original crude oil spot price series were first decomposed into a finite, and often small, number of intrinsic mode functions (IMFs). Then a three-layer feed-forward neural network (FNN) model was used to model each of the extracted IMFs, so that the tendencies of these IMFs could be accurately predicted. Finally, the prediction results of all IMFs are combined with an adaptive linear neural network (ALNN), to formulate an ensemble output for the original crude oil price series. For verification and testing, two main crude oil price series, West Texas Intermediate (WTI) crude oil spot price and Brent crude oil spot price, are used to test the effectiveness of the proposed EMD-based neural network ensemble learning methodology. Empirical results obtained demonstrate attractiveness of the proposed EMD-based neural network ensemble learning paradigm. (author)
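
The decompose-forecast-recombine paradigm can be sketched end to end. Since a full EMD implementation is lengthy, a moving-average split stands in for the IMF extraction, and a linear autoregressive model stands in for the per-component FNN; only the pipeline shape follows the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def ar_forecast(series, lags=3):
    """Least-squares AR(lags) fit and one-step-ahead prediction
    (a linear stand-in for the per-component FNN in the paper)."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(y))]), y,
                               rcond=None)
    return np.concatenate([series[-lags:], [1.0]]) @ coef

# Synthetic "price" path; a moving average splits it into a slow trend and
# a fast residual, standing in for the IMFs an EMD would extract.
price = np.cumsum(rng.normal(0, 1, 300)) + 50.0
k = 10
trend = np.convolve(price, np.ones(k) / k, mode="valid")
residual = price[k - 1:] - trend

# Forecast each component separately, then recombine. The paper combines
# component forecasts with an adaptive linear NN; a plain sum is the
# simplest linear combination.
ensemble_forecast = sum(ar_forecast(c) for c in [trend, residual])
```

The rationale mirrors the paper's: each extracted component is smoother and easier to model than the raw series, so per-component models plus a learned recombination can beat a single model on the original price series.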

  14. Learning and coding in biological neural networks

    Science.gov (United States)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebrafinch song. Simulation and

  15. Hypothetical Pattern Recognition Design Using Multi-Layer Perceptron Neural Network For Supervised Learning

    Directory of Open Access Journals (Sweden)

    Md. Abdullah-al-mamun

    2015-08-01

    Full Text Available Humans can effortlessly identify diverse shapes and patterns in the real world, because their intelligence has grown since birth through continual learning. In the same way, a machine can be prepared with a human-brain-like model, called an artificial neural network, that can recognize different patterns from real-world objects. Although various techniques exist for implementing pattern recognition, artificial neural network approaches have recently received significant attention, because an artificial neural network, like a human brain, learns from observations and makes decisions based on previously learned rules. Over fifty years of research, pattern recognition by machine learning using artificial neural networks has achieved significant results, and many real-world problems can now be solved by modeling the pattern recognition process. The objective of this paper is to present a theoretical design for pattern recognition using a multi-layer perceptron neural network, within artificial intelligence, as the best possible way of utilizing available resources to make decisions with human-like performance.
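
As a concrete illustration of the supervised multi-layer perceptron idea, the sketch below trains a tiny MLP with backpropagation on the XOR pattern, the classic task that no single-layer perceptron can learn. The layer sizes, learning rate and iteration count are arbitrary choices for the demonstration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR patterns: inputs and target labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# 2 -> 8 -> 1 multi-layer perceptron with tanh hidden units.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, losses = 0.5, []
for _ in range(30000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # network output
    losses.append(float(np.mean((out - Y) ** 2)))
    # Backpropagation of the mean-squared error.
    d_out = 2 * (out - Y) / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

preds = (out > 0.5).astype(int).ravel()  # learned decisions for the 4 patterns
```

After training, the rounded outputs reproduce the XOR truth table, the simplest case of the supervised pattern-recognition design the paper discusses.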

  16. Deep learning classification in asteroseismology using an improved neural network

    DEFF Research Database (Denmark)

    Hon, Marc; Stello, Dennis; Yu, Jie

    2018-01-01

    Deep learning in the form of 1D convolutional neural networks has previously been shown to be capable of efficiently classifying the evolutionary state of oscillating red giants into red giant branch stars and helium-core burning stars by recognizing visual features in their asteroseismic...... frequency spectra. We elaborate further on the deep learning method by developing an improved convolutional neural network classifier. To make our method useful for current and future space missions such as K2, TESS, and PLATO, we train classifiers that are able to classify the evolutionary states of lower...
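
To illustrate why 1D convolutions suit this kind of spectrum classification, the sketch below hand-builds a single convolutional filter matched to one characteristic peak spacing and shows that its pooled response separates two toy "spectra". The spectra, filter shape and spacings are all fabricated for the demonstration; a real classifier learns many such filters from labelled data.

```python
import numpy as np

def conv1d(signal, kernel):
    # Valid-mode cross-correlation: the forward pass of one CNN filter.
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i : i + len(kernel)] @ kernel for i in range(n)])

freq = np.linspace(0.0, 1.0, 256)

def comb(spacing, width=0.008):
    # Toy "power spectrum": a comb of Gaussian peaks at regular spacing.
    centers = np.arange(0.1, 0.9, spacing)
    return sum(np.exp(-((freq - c) / width) ** 2) for c in centers)

spec_a = comb(0.08)  # wide peak spacing
spec_b = comb(0.05)  # narrow peak spacing

# One filter "matched" to the 0.08 spacing: two bumps 0.08 apart.
k = np.exp(-((freq[:40] - 0.03) / 0.008) ** 2) + \
    np.exp(-((freq[:40] - 0.11) / 0.008) ** 2)

# ReLU + global max pooling turns the filter response into a single feature
# that fires strongly only when the spectrum contains the matched spacing.
feature_a = float(np.maximum(0.0, conv1d(spec_a, k)).max())
feature_b = float(np.maximum(0.0, conv1d(spec_b, k)).max())
```

The matched spectrum yields a clearly larger pooled feature, which is the visual-feature-detection mechanism the classifier exploits.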

  17. Biologically-inspired Learning in Pulsed Neural Networks

    DEFF Research Database (Denmark)

    Lehmann, Torsten; Woodburn, Robin

    1999-01-01

    Self-learning chips to implement many popular ANN (artificial neural network) algorithms are very difficult to design. We explain why this is so and say what lessons previous work teaches us in the design of self-learning systems. We offer a contribution to the `biologically-inspired' approach......, explaining what we mean by this term and providing an example of a robust, self-learning design that can solve simple classical-conditioning tasks. We give details of the design of individual circuits to perform component functions, which can then be combined into a network to solve the task. We argue...

  18. Sensorimotor learning biases choice behavior: a learning neural field model for decision making.

    Directory of Open Access Journals (Sweden)

    Christian Klaes

    Full Text Available According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action

  19. Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies

    Science.gov (United States)

    Peng, Yefei

    2010-01-01

    An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the…

  20. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    Science.gov (United States)

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that makes it possible to steer the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal conditions sufficient for LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other works, reinforcement learning with spiking networks can be obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system.
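
The core contingency behind LSA can be reproduced with a deliberately minimal two-neuron sketch: neuron A is stimulated externally unless neuron B fired on the previous step, and a pair-based STDP rule potentiates A→B when B fires right after A and depresses it in the reverse order. Under the avoidance contingency, depression never occurs (A falls silent whenever B has just fired), so the weight grows and the stimulation received falls; in a control run with permanent stimulation it does not. All rates, thresholds and step counts here are invented toy values, not the authors' simulation.

```python
import numpy as np

def run(contingent, steps=2000, seed=1):
    rng = np.random.default_rng(seed)
    w, a_prev, b_prev = 0.2, 0, 0
    stim_count = 0
    for _ in range(steps):
        # External stimulation is withheld if B fired on the previous step.
        stim = 0 if (contingent and b_prev) else 1
        stim_count += stim
        a = stim                                        # A fires iff stimulated
        # B fires if drive from A (plus noise) crosses threshold.
        b = 1 if w * a_prev + rng.uniform(0, 0.4) > 0.5 else 0
        # Pair-based STDP: pre-before-post potentiates, post-before-pre depresses.
        if a_prev and b:
            w = min(1.0, w + 0.01)
        if b_prev and a:
            w = max(0.0, w - 0.012)
        a_prev, b_prev = a, b
    return w, stim_count

w_lsa, stim_lsa = run(contingent=True)    # stimulation removable by B's firing
w_ctrl, stim_ctrl = run(contingent=False)  # stimulation always on
```

With the contingency the weight saturates near its upper bound and the network receives markedly less stimulation than the control, which is the "avoidance" signature the paper describes.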

  1. Lifelong learning of human actions with deep neural network self-organization.

    Science.gov (United States)

    Parisi, German I; Tani, Jun; Weber, Cornelius; Wermter, Stefan

    2017-12-01

    Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning but rather learn a batch of training data with a predefined number of action classes and samples. Thus, there is the need to develop learning systems with the ability to incrementally process available perceptual cues and to adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and the adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art methods for batch learning also when a significant number of sample labels are missing or corrupted during training sessions. Additional experiments show the ability of our model to adapt to non-stationary input avoiding catastrophic interference. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  2. Neural Control of a Tracking Task via Attention-Gated Reinforcement Learning for Brain-Machine Interfaces.

    Science.gov (United States)

    Wang, Yiwen; Wang, Fang; Xu, Kai; Zhang, Qiaosheng; Zhang, Shaomin; Zheng, Xiaoxiang

    2015-05-01

    Reinforcement learning (RL)-based brain-machine interfaces (BMIs) enable the user to learn from the environment through interactions to complete the task without desired signals, which is promising for clinical applications. Previous studies exploited Q-learning techniques to discriminate neural states into simple directional actions, provided the trial's initial timing. However, the movements in BMI applications can be quite complicated, and the action timing explicitly shows the intention of when to move. The rich actions and the corresponding neural states form a large state-action space, imposing generalization difficulty on Q-learning. In this paper, we propose to adopt attention-gated reinforcement learning (AGREL) as a new learning scheme for BMIs to adaptively decode high-dimensional neural activities into seven distinct movements (directional moves, holding and resting), thanks to its efficient weight updating. We apply AGREL to neural data recorded from M1 of a monkey to directly predict a seven-action set in a time sequence to reconstruct the trajectory of a center-out task. Compared to Q-learning techniques, AGREL could improve the target acquisition rate to 90.16% on average, with faster convergence and more stability in following neural activity over multiple days, indicating the potential to achieve better online decoding performance for more complicated BMI tasks.
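
A stripped-down version of the attention-gated scheme can be sketched on a toy decoding task: only the weights feeding the *selected* action are updated, scaled by a global reward-prediction error, and the hidden layer receives feedback only through that winning unit. The task below (four one-hot "neural states" mapped to four actions), the network sizes and the learning rate are invented for illustration; the real study decoded M1 activity into seven movements.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_hidden, n_actions = 4, 12, 4
V = rng.normal(0, 0.1, (n_states, n_hidden))   # input -> hidden weights
W = rng.normal(0, 0.1, (n_hidden, n_actions))  # hidden -> output weights
correct = np.array([2, 0, 3, 1])               # state -> rewarded action
lr, rewards = 0.3, []

for trial in range(5000):
    s = rng.integers(n_states)
    x = np.eye(n_states)[s]
    y = 1.0 / (1.0 + np.exp(-(x @ V)))          # hidden activations
    z = y @ W
    p = np.exp(z - z.max()); p /= p.sum()       # softmax action probabilities
    a = rng.choice(n_actions, p=p)              # exploratory action selection
    r = 1.0 if a == correct[s] else 0.0
    rewards.append(r)
    delta = r - p[a]                            # global reward-prediction error
    # Update only the selected action's weights, gated by delta...
    W[:, a] += lr * delta * y
    # ...and let hidden units learn via feedback from that winning unit only.
    V += lr * delta * np.outer(x, W[:, a] * y * (1 - y))

early_reward = float(np.mean(rewards[:500]))
late_reward = float(np.mean(rewards[-500:]))
```

Reward per trial climbs from chance (0.25) toward 1.0, showing how the attention-gated update can train a decoder from reward feedback alone.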

  3. Learning Perfectly Secure Cryptography to Protect Communications with Adversarial Neural Cryptography

    Directory of Open Access Journals (Sweden)

    Murilo Coutinho

    2018-04-01

    Full Text Available Research in Artificial Intelligence (AI) has achieved many important breakthroughs, especially in recent years. In some cases, AI learns alone from scratch and performs human tasks faster and better than humans. With the recent advances in AI, it is natural to wonder whether Artificial Neural Networks will be used to successfully create or break cryptographic algorithms. A bibliographic review shows that the main approaches to this problem have been addressed through complex Neural Networks, but without understanding or proving the security of the generated model. This paper presents an analysis of the security of cryptographic algorithms generated by a new technique called Adversarial Neural Cryptography (ANC). Using the proposed network, we show limitations of, and directions to improve, the current approach of ANC. Training the proposed Artificial Neural Network with the improved model of ANC, we show that artificially intelligent agents can learn the unbreakable One-Time Pad (OTP) algorithm, without human knowledge, to communicate securely through an insecure communication channel. This paper shows under which conditions an AI agent can learn a secure encryption scheme. However, it also shows that, without a stronger adversary, it is more likely to obtain an insecure one.

  4. Learning Perfectly Secure Cryptography to Protect Communications with Adversarial Neural Cryptography.

    Science.gov (United States)

    Coutinho, Murilo; de Oliveira Albuquerque, Robson; Borges, Fábio; García Villalba, Luis Javier; Kim, Tai-Hoon

    2018-04-24

    Research in Artificial Intelligence (AI) has achieved many important breakthroughs, especially in recent years. In some cases, AI learns alone from scratch and performs human tasks faster and better than humans. With the recent advances in AI, it is natural to wonder whether Artificial Neural Networks will be used to successfully create or break cryptographic algorithms. A bibliographic review shows that the main approaches to this problem have been addressed through complex Neural Networks, but without understanding or proving the security of the generated model. This paper presents an analysis of the security of cryptographic algorithms generated by a new technique called Adversarial Neural Cryptography (ANC). Using the proposed network, we show limitations of, and directions to improve, the current approach of ANC. Training the proposed Artificial Neural Network with the improved model of ANC, we show that artificially intelligent agents can learn the unbreakable One-Time Pad (OTP) algorithm, without human knowledge, to communicate securely through an insecure communication channel. This paper shows under which conditions an AI agent can learn a secure encryption scheme. However, it also shows that, without a stronger adversary, it is more likely to obtain an insecure one.
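
The unbreakability of the one-time pad that the ANC agents converge to is easy to state directly: XOR with a uniformly random, never-reused key makes every ciphertext equally likely for every message. A minimal sketch (the message is an invented example):

```python
import os

def otp(data: bytes, key: bytes) -> bytes:
    # XOR one-time pad; encryption and decryption are the same operation.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = os.urandom(len(msg))      # uniformly random key, used once
ct = otp(msg, key)
recovered = otp(ct, key)        # XOR with the same key inverts the cipher

# Perfect secrecy in miniature: for any fixed message byte, ranging over all
# 256 equiprobable key bytes yields every ciphertext byte exactly once, so
# the ciphertext distribution carries no information about the message.
uniform = all({m ^ k for k in range(256)} == set(range(256))
              for m in (0x00, 0x41, 0xFF))
```

This is the Shannon-style argument the paper leans on: security comes from the key distribution, not from any computational hardness.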

  5. Introduction to spiking neural networks: Information processing, learning and applications.

    Science.gov (United States)

    Ponulak, Filip; Kasinski, Andrzej

    2011-01-01

    The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.
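
A minimal example of the spiking-neuron models the survey covers is the leaky integrate-and-fire (LIF) neuron, in which precise spike times, not just rates, carry the signal. The parameters below are conventional textbook values, not taken from the paper.

```python
import numpy as np

# Leaky integrate-and-fire neuron driven by a constant input current.
tau, v_rest, v_th, v_reset = 20.0, -65.0, -50.0, -65.0   # ms, mV
drive = 20.0                  # steady input drive R*I, in mV
dt, t_max = 0.1, 500.0        # integration step and duration, in ms
v, spike_times = v_rest, []

for step in range(int(t_max / dt)):
    # Euler integration of the membrane equation dv/dt = (-(v-v_rest)+drive)/tau.
    v += dt / tau * (-(v - v_rest) + drive)
    if v >= v_th:             # threshold crossing: emit a spike and reset
        spike_times.append(step * dt)
        v = v_reset

isis = np.diff(spike_times)   # inter-spike intervals (ms)
# Closed-form ISI for constant drive (with v_reset = v_rest):
# tau * ln(drive / (drive - (v_th - v_rest))) = 20 * ln(20/5) ~ 27.7 ms.
theory = tau * np.log(drive / (drive - (v_th - v_rest)))
```

With constant drive the spike train is perfectly regular and matches the analytic interval, the starting point from which the timing-based codes discussed in the paper depart.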

  6. All-trans retinoic acid promotes neural lineage entry by pluripotent embryonic stem cells via multiple pathways

    Directory of Open Access Journals (Sweden)

    Fang Bo

    2009-07-01

    Full Text Available Abstract Background All-trans retinoic acid (RA) is one of the most important morphogens with pleiotropic actions. Its embryonic distribution correlates with neural differentiation in the developing central nervous system. To explore the precise effects of RA on neural differentiation of mouse embryonic stem cells (ESCs), we detected the expression of RA nuclear receptors and RA-metabolizing enzymes in mouse ESCs and investigated the roles of RA in adherent monolayer culture. Results Upon addition of RA, cell differentiation was directed rapidly and exclusively into the neural lineage. Conversely, pharmacological interference with RA signaling suppressed this neural differentiation. Inhibition of fibroblast growth factor (FGF) signaling did not significantly suppress neural differentiation in RA-treated cultures. Pharmacological interference with the extracellular signal-regulated kinase (ERK) pathway or activation of the Wnt pathway effectively blocked the RA-promoted neural specification. ERK phosphorylation was enhanced in RA-treated cultures at the early stage of differentiation. Conclusion RA can promote neural lineage entry by ESCs in adherent monolayer culture systems. This effect depends on RA signaling and its crosstalk with the ERK and Wnt pathways.

  7. A learning algorithm for oscillatory cellular neural networks.

    Science.gov (United States)

    Ho, C. Y.; Kurokawa, H.

    1999-07-01

    We present a cellular type oscillatory neural network for temporal segregation of stationary input patterns. The model comprises an array of locally connected neural oscillators with connections limited to a 4-connected neighborhood. The architecture is reminiscent of the well-known cellular neural network that consists of local connection for feature extraction. By means of a novel learning rule and an initialization scheme, global synchronization can be accomplished without incurring any erroneous synchrony among uncorrelated objects. Each oscillator comprises two mutually coupled neurons, and neurons share a piecewise-linear activation function characteristic. The dynamics of traditional oscillatory models is simplified by using only one plastic synapse, and the overall complexity for hardware implementation is reduced. Based on the connectedness of image segments, it is shown that global synchronization and desynchronization can be achieved by means of locally connected synapses, and this opens up a tremendous application potential for the proposed architecture. Furthermore, by using special grouping synapses it is demonstrated that temporal segregation of overlapping gray-level and color segments can also be achieved. Finally, simulation results show that the learning rule proposed circumvents the problem of component mismatches, and hence facilitates a large-scale integration.

  8. Learning and forgetting on asymmetric, diluted neural networks

    International Nuclear Information System (INIS)

    Derrida, B.; Nadal, J.P.

    1987-01-01

    It is possible to construct diluted asymmetric models of neural networks for which the dynamics can be calculated exactly. The authors test several learning schemes, in particular, models for which the values of the synapses remain bounded and depend on the history. Our analytical results on the relative efficiencies of the various learning schemes are qualitatively similar to the corresponding ones obtained numerically on fully connected symmetric networks
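
The flavor of such bounded-synapse learning schemes can be seen in a toy Hopfield-style network with clipped Hebbian weights: because each synapse saturates, new patterns gradually overwrite old ones and the network behaves as a palimpsest, retrieving recent memories while forgetting early ones. The network size and clipping bound below are arbitrary, and the sketch uses a fully connected symmetric network rather than the diluted asymmetric model whose dynamics the paper solves exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns, bound = 100, 30, 2.0
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n))

# Sequential Hebbian storage with bounded (clipped) synapses:
# each new pattern is imprinted, but weights cannot grow past +/- bound.
W = np.zeros((n, n))
for xi in patterns:
    W = np.clip(W + np.outer(xi, xi), -bound, bound)
np.fill_diagonal(W, 0.0)

def overlap(pattern, iters=5):
    # Start at the stored pattern and run synchronous retrieval dynamics,
    # then measure the overlap with the original pattern (1.0 = perfect recall).
    s = pattern.copy()
    for _ in range(iters):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return float(s @ pattern / n)

recent = overlap(patterns[-1])   # last pattern stored: still retrievable
oldest = overlap(patterns[0])    # first pattern stored: forgotten
```

The most recently stored pattern is recalled almost perfectly while the oldest has decayed to near-chance overlap, the qualitative learning-and-forgetting behavior the paper analyzes.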

  9. Strategies influence neural activity for feedback learning across child and adolescent development.

    Science.gov (United States)

    Peters, Sabine; Koolschijn, P Cédric M P; Crone, Eveline A; Van Duijvenvoorde, Anna C K; Raijmakers, Maartje E J

    2014-09-01

    Learning from feedback is an important aspect of executive functioning that shows profound improvements during childhood and adolescence. This is accompanied by neural changes in the feedback-learning network, which includes the pre-supplementary motor area (pre-SMA)/anterior cingulate cortex (ACC), dorsolateral prefrontal cortex (DLPFC), superior parietal cortex (SPC), and the basal ganglia. However, there can be considerable differences within age ranges in performance that are ascribed to differences in strategy use. This is problematic for traditional approaches to analyzing developmental data, in which age groups are assumed to be homogeneous in strategy use. In this study, we used latent variable models to investigate whether underlying strategy groups could be detected for a feedback-learning task and whether there were differences in neural activation patterns between strategies. In a sample of 268 participants between 8 and 25 years of age, we observed four underlying strategy groups, which cut across age groups and varied in the optimality of executive functioning. These strategy groups also differed in neural activity during learning; in particular, the most optimally performing group showed more activity in DLPFC, SPC and pre-SMA/ACC than the other groups. However, age differences remained an important contributor to neural activation, even when correcting for strategy. These findings contribute to the debate on age versus performance predictors of neural development, and highlight the importance of studying individual differences in strategy use when studying development. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. MyT1 Counteracts the Neural Progenitor Program to Promote Vertebrate Neurogenesis

    Directory of Open Access Journals (Sweden)

    Francisca F. Vasconcelos

    2016-10-01

    Full Text Available The generation of neurons from neural stem cells requires large-scale changes in gene expression that are controlled to a large extent by proneural transcription factors, such as Ascl1. While recent studies have characterized the differentiation genes activated by proneural factors, less is known about the mechanisms that suppress progenitor cell identity. Here, we show that Ascl1 induces the transcription factor MyT1 while promoting neuronal differentiation. We combined functional studies of MyT1 during neurogenesis with the characterization of its transcriptional program. MyT1 binding is associated with repression of gene transcription in neural progenitor cells. It promotes neuronal differentiation by counteracting the inhibitory activity of Notch signaling at multiple levels, targeting the Notch1 receptor and many of its downstream targets. These include regulators of the neural progenitor program, such as Hes1, Sox2, Id3, and Olig1. Thus, Ascl1 suppresses Notch signaling cell-autonomously via MyT1, coupling neuronal differentiation with repression of the progenitor fate.

  11. Improved Discriminability of Spatiotemporal Neural Patterns in Rat Motor Cortical Areas as Directional Choice Learning Progresses

    Directory of Open Access Journals (Sweden)

    Hongwei Mao

    2015-03-01

    Full Text Available Animals learn to choose a proper action among alternatives to improve their odds of success in food foraging and other activities critical for survival. Through trial-and-error, they learn correct associations between their choices and external stimuli. While a neural network that underlies such a learning process has been identified at a high level, it is still unclear how individual neurons and a neural ensemble adapt as learning progresses. In this study, we monitored the activity of single units in the rat medial and lateral agranular (AGm and AGl, respectively) areas as rats learned to make a left or right side lever press in response to a left or right side light cue. We noticed that rat movement parameters during the performance of the directional choice task quickly became stereotyped during the first 2-3 days or sessions. But learning the directional choice problem took weeks to occur. Accompanying the rats' behavioral performance adaptation, we observed neural modulation by directional choice in recorded single units. Our analysis shows that ensemble mean firing rates in the cue-on period did not change significantly as learning progressed, and the ensemble mean rate difference between left and right side choices did not show a clear trend of change either. However, the spatiotemporal firing patterns of the neural ensemble exhibited improved discriminability between the two directional choices through learning. These results suggest a spatiotemporal neural coding scheme in a motor cortical neural ensemble that may be responsible for and contributing to learning the directional choice task.

  12. Adaptive competitive learning neural networks

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2013-11-01

    Full Text Available In this paper, the adaptive competitive learning (ACL) neural network algorithm is proposed. This neural network not only groups similar input feature vectors together but also determines the appropriate number of groups of these vectors. This algorithm uses a new proposed criterion referred to as the ACL criterion. This criterion evaluates different clustering structures produced by the ACL neural network for an input data set. Then, it selects the best clustering structure and the corresponding network architecture for this data set. The selected structure is composed of the minimum number of clusters that are compact and balanced in their sizes. The selected network architecture is efficient, in terms of its complexity, as it contains the minimum number of neurons. Synaptic weight vectors of these neurons represent well-separated, compact and balanced clusters in the input data set. The performance of the ACL algorithm is evaluated and compared with the performance of a recently proposed algorithm in the literature in clustering an input data set and determining its number of clusters. Results show that the ACL algorithm is more accurate and robust in both determining the number of clusters and allocating input feature vectors into these clusters than the other algorithm, especially with data sets that are sparsely distributed.
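
The basic winner-take-all mechanism that ACL builds on can be sketched in a few lines of online competitive learning: each input moves only its nearest prototype, so prototypes drift to cluster centers. The data, prototype count and learning-rate schedule below are invented; ACL itself additionally selects the number of clusters via its criterion, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three well-separated 2-D Gaussian clusters (generated block by block).
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
data = np.vstack([c + rng.normal(0, 0.3, (200, 2)) for c in centers])

# One prototype seeded per block to avoid dead units (a common trick).
protos = data[[0, 200, 400]].copy()

order = rng.permutation(len(data))
for epoch in range(5):
    lr = 0.5 / (epoch + 1)                # decaying learning rate
    for i in order:
        x = data[i]
        win = int(np.argmin(((protos - x) ** 2).sum(axis=1)))  # winner-take-all
        protos[win] += lr * (x - protos[win])                  # move winner only

# Distance from each true center to its nearest learned prototype.
errs = [float(np.min(np.linalg.norm(protos - c, axis=1))) for c in centers]
```

Each prototype ends up close to one true cluster center, the grouping behavior on which ACL's cluster-number selection criterion then operates.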

  13. Nuclear power plant monitoring using real-time learning neural network

    International Nuclear Information System (INIS)

    Nabeshima, Kunihiko; Tuerkcan, E.; Ciftcioglu, O.

    1994-01-01

    In the present research, an artificial neural network (ANN) with real-time adaptive learning is developed for the plant-wide monitoring of the Borssele Nuclear Power Plant (NPP). The adaptive learning capability of the ANN is integrated into the monitoring system so that robust and sensitive on-line monitoring is achieved in a real-time environment. The major advantages provided by the ANN are that the system model is formed by means of measurement information obtained from a multi-output process system, explicit modelling is not required, and the modelling is not restricted to linear systems. The ANN can also respond very fast to anomalous operational conditions. The real-time ANN learning methodology with adaptive real-time monitoring capability is described for wide-range and plant-wide data from an operating nuclear power plant. The layered neural network, with the error backpropagation algorithm for learning, has three layers. The network type is auto-associative, i.e. inputs and outputs are exactly the same, using 12 plant signals. (author)
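
The monitoring principle, reconstruct the plant signals from themselves and flag inputs that reconstruct poorly, can be sketched compactly. For brevity the sketch uses a PCA subspace in place of a backprop-trained auto-associative network (a linear autoencoder converges to the same subspace), and the 12-signal "plant" data are simulated, not Borssele measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n_signals, n_latent, n_samples = 12, 4, 1000

# Simulated plant signals: 12 sensors driven by 4 latent process variables.
A = rng.normal(0, 1.0, (n_signals, n_latent))
latent = rng.normal(0, 1.0, (n_samples, n_latent))
X = latent @ A.T + rng.normal(0, 0.1, (n_samples, n_signals))

# Auto-associative model: project onto the 4 leading principal components.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
P = Vt[:n_latent].T @ Vt[:n_latent]        # projector onto the learned subspace

def recon_error(x):
    # Squared reconstruction residual: the anomaly score.
    centered = x - mean
    return float(((centered - centered @ P) ** 2).sum())

normal_err = float(np.mean([recon_error(x) for x in X[:100]]))
anomaly = X[0].copy()
anomaly[3] += 5.0                           # one sensor drifts off-model
anomaly_err = recon_error(anomaly)
```

Normal samples reconstruct almost perfectly while the off-model sample produces a reconstruction error orders of magnitude larger, which is the signal a plant monitor alarms on.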

  14. A Self-Organizing Incremental Neural Network based on local distribution learning.

    Science.gov (United States)

    Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi

    2016-12-01

    In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically, and the number of nodes will not grow unlimitedly. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.
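
The node-insertion idea at the heart of such self-organizing incremental networks can be sketched with a fixed vigilance parameter: an input far from every existing node spawns a new node, otherwise the nearest node adapts toward it. LD-SOINN's adaptive vigilance, matrix learning, node merging and density-based denoising are all omitted here; the data stream and threshold are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stream of inputs from two well-separated 2-D clusters.
stream = np.vstack([
    np.array([0.0, 0.0]) + rng.normal(0, 0.3, (200, 2)),
    np.array([6.0, 0.0]) + rng.normal(0, 0.3, (200, 2)),
])
stream = stream[rng.permutation(len(stream))]

vigilance = 1.0
nodes, counts = [stream[0].copy()], [1]

for x in stream[1:]:
    d = [np.linalg.norm(x - nd) for nd in nodes]
    win = int(np.argmin(d))
    if d[win] > vigilance:
        nodes.append(x.copy()); counts.append(1)    # new knowledge: grow a node
    else:
        counts[win] += 1                            # familiar input: adapt winner
        nodes[win] += (x - nodes[win]) / counts[win]

n_nodes = len(nodes)
coverage = max(min(np.linalg.norm(x - nd) for nd in nodes) for x in stream)
```

The node count stabilizes at a handful rather than growing with the number of inputs, while the learned nodes still cover the stream, the incremental-but-bounded growth that the vigilance mechanism is meant to guarantee.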

  15. A novel Bayesian learning method for information aggregation in modular neural networks

    DEFF Research Database (Denmark)

    Wang, Pan; Xu, Lida; Zhou, Shang-Ming

    2010-01-01

    Modular neural network is a popular neural network model which has many successful applications. In this paper, a sequential Bayesian learning (SBL) is proposed for modular neural networks aiming at efficiently aggregating the outputs of members of the ensemble. The experimental results on eight...... benchmark problems have demonstrated that the proposed method can perform information aggregation efficiently in data modeling....

  16. Neural prediction errors reveal a risk-sensitive reinforcement-learning process in the human brain.

    Science.gov (United States)

    Niv, Yael; Edlund, Jeffrey A; Dayan, Peter; O'Doherty, John P

    2012-01-11

    Humans and animals are exquisitely, though idiosyncratically, sensitive to risk or variance in the outcomes of their actions. Economic, psychological, and neural aspects of this are well studied when information about risk is provided explicitly. However, we must normally learn about outcomes from experience, through trial and error. Traditional models of such reinforcement learning focus on learning about the mean reward value of cues and ignore higher order moments such as variance. We used fMRI to test whether the neural correlates of human reinforcement learning are sensitive to experienced risk. Our analysis focused on anatomically delineated regions of a priori interest in the nucleus accumbens, where blood oxygenation level-dependent (BOLD) signals have been suggested as correlating with quantities derived from reinforcement learning. We first provide unbiased evidence that the raw BOLD signal in these regions corresponds closely to a reward prediction error. We then derive from this signal the learned values of cues that predict rewards of equal mean but different variance and show that these values are indeed modulated by experienced risk. Moreover, a close neurometric-psychometric coupling exists between the fluctuations of the experience-based evaluations of risky options that we measured neurally and the fluctuations in behavioral risk aversion. This suggests that risk sensitivity is integral to human learning, illuminating economic models of choice, neuroscientific models of affective learning, and the workings of the underlying neural mechanisms.
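
The risk-sensitive learning process the authors identify can be sketched with an asymmetric temporal-difference rule: a larger learning rate for negative prediction errors makes the learned value of a variable (risky) cue settle below its true mean, while a deterministic (safe) cue of equal mean is valued at its mean. The cue structure and learning rates below are invented for illustration, not fitted to the fMRI data.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_pos, alpha_neg = 0.1, 0.2        # risk-averse asymmetry: losses weigh more
v_safe, v_risky = 0.0, 0.0
hist_safe, hist_risky = [], []

def update(v, r):
    delta = r - v                       # temporal-difference prediction error
    alpha = alpha_pos if delta > 0 else alpha_neg
    return v + alpha * delta

for trial in range(5000):
    v_safe = update(v_safe, 0.5)                       # safe cue: always pays 0.5
    v_risky = update(v_risky, float(rng.integers(2)))  # risky cue: 0 or 1, p=0.5
    hist_safe.append(v_safe)
    hist_risky.append(v_risky)

mean_safe = float(np.mean(hist_safe[-1000:]))
mean_risky = float(np.mean(hist_risky[-1000:]))
# Fixed point for the risky cue: alpha_pos / (alpha_pos + alpha_neg) = 1/3 < 0.5.
```

Although both cues deliver the same mean reward, the asymmetric learner values the risky cue well below the safe one, the behavioral and neural signature of risk sensitivity the paper reports.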

  17. Parallelization of learning problems by artificial neural networks. Application in external radiotherapy

    International Nuclear Information System (INIS)

    Sauget, M.

    2007-12-01

    This research is about the application of neural networks in the external radiotherapy domain. The goal is to elaborate a new system for evaluating radiation dose distributions in heterogeneous environments. The final objective of this work is to build a complete tool kit to evaluate the optimal treatment planning. My first research point is about the conception of an incremental learning algorithm. The interest of my work is to combine different optimizations specialized in function interpolation and to propose a new algorithm allowing the neural network architecture to change during the learning phase. This algorithm makes it possible to minimise the final size of the neural network while keeping a good accuracy. The second part of my research is to parallelize the previous incremental learning algorithm. The goal of that work is to increase the speed of the learning step as well as the size of the learned dataset needed in a clinical case. For that, our incremental learning algorithm presents an original data decomposition with overlapping, together with a fault tolerance mechanism. My last research point is about a fast and accurate algorithm computing the radiation dose deposit in any heterogeneous environment. At the present time, the existing solutions in use are not optimal: the fast solutions are not accurate and do not give an optimal treatment planning, while the accurate solutions are far too slow to be used in a clinical context. Our algorithm answers this problem by bringing both rapidity and accuracy. The concept is to use an adequately trained neural network together with a mechanism taking into account environment changes. The advantage of this algorithm is to avoid the use of a complex physical code while keeping good accuracy and reasonable computation times. (author)

  18. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  19. Neural Correlates of High Performance in Foreign Language Vocabulary Learning

    Science.gov (United States)

    Macedonia, Manuela; Muller, Karsten; Friederici, Angela D.

    2010-01-01

    Learning vocabulary in a foreign language is a laborious task which people perform with varying levels of success. Here, we investigated the neural underpinning of high performance on this task. In a within-subjects paradigm, participants learned 92 vocabulary items under two multimodal conditions: one condition paired novel words with iconic…

  20. Learning characteristics of a space-time neural network as a tether skiprope observer

    Science.gov (United States)

    Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles

    1993-01-01

    The Software Technology Laboratory at the Johnson Space Center is testing a Space Time Neural Network (STNN) for observing tether oscillations present during retrieval of a tethered satellite. Proper identification of tether oscillations, known as 'skiprope' motion, is vital to safe retrieval of the tethered satellite. Our studies indicate that STNN has certain learning characteristics that must be understood properly to utilize this type of neural network for the tethered satellite problem. We present our findings on the learning characteristics including a learning rate versus momentum performance table.

  1. Deep neural networks for direct, featureless learning through observation: The case of two-dimensional spin models

    Science.gov (United States)

    Mills, Kyle; Tamblyn, Isaac

    2018-03-01

    We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4×4 Ising model. Using its success at this task, we motivate the study of the larger 8×8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after only seeing a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with equivalent accuracy to the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent; it is able to make predictions with a high degree of accuracy, and do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by looking at the weights learned in a simplified demonstration.
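The target quantity the network learns can be made concrete. A sketch of the conventional nearest-neighbour Ising energy, E = -J Σ⟨ij⟩ s_i s_j, on a square lattice with periodic boundaries (the coupling J = 1 and lattice size are assumptions for illustration):

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """Nearest-neighbour Ising energy E = -J * sum over bonds of s_i * s_j
    on a square lattice with periodic boundary conditions."""
    right = np.roll(spins, -1, axis=1)  # neighbour to the right of each site
    down = np.roll(spins, -1, axis=0)   # neighbour below each site
    return -J * np.sum(spins * right + spins * down)

# Ground state of the 4x4 lattice: all 16 spins aligned, 32 bonds in total.
aligned = np.ones((4, 4), dtype=int)
e_ground = ising_energy(aligned)  # -J per bond, 32 bonds
```

Counting each site's right and down bonds once covers every bond exactly once, which is why two `np.roll` shifts suffice.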

  2. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
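The two-phase scheme described above — competitive learning to place the RBF centres, then an RBF network for prediction — can be sketched as follows; the toy descriptors, activity function, and parameters are invented for illustration and are not from the paper:

```python
import numpy as np

def scl_centres(X, n_centres=4, lr=0.1, epochs=50, seed=0):
    """Simple competitive learning: move the winning centre toward each sample."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), n_centres, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X:
            winner = np.argmin(np.linalg.norm(centres - x, axis=1))
            centres[winner] += lr * (x - centres[winner])
    return centres

def rbf_fit_predict(X, y, centres, width=1.0):
    """RBF network with Gaussian hidden units and a least-squares linear readout."""
    def design(A):
        d = np.linalg.norm(A[:, None, :] - centres[None, :, :], axis=2)
        return np.exp(-(d / width) ** 2)
    w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
    return design(X) @ w

# Toy "activity" data: y depends nonlinearly on two molecular descriptors.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
centres = scl_centres(X, n_centres=20)
pred = rbf_fit_predict(X, y, centres, width=0.5)
```

Phase one places centres where the descriptor data are dense; phase two only has to solve a linear problem for the output weights, which is the appeal of the combination.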

  3. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning.

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung

    2018-02-01

    Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  4. Deep learning with convolutional neural network in radiology.

    Science.gov (United States)

    Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu

    2018-04-01

    Deep learning with a convolutional neural network (CNN) is gaining attention recently for its high performance in image recognition. Images themselves can be utilized in a learning process with this technique, and feature extraction in advance of the learning process is not required. Important features can be automatically learned. Thanks to the development of hardware and software in addition to techniques regarding deep learning, application of this technique to radiological images for predicting clinically useful information, such as the detection and the evaluation of lesions, etc., are beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual course (collecting data, implementing CNNs, and training and testing phases). Pitfalls regarding this technique and how to manage them are also illustrated. We also described some advanced topics of deep learning, results of recent clinical studies, and the future directions of clinical application of deep learning techniques.

  5. Learning speaker-specific characteristics with a deep neural architecture.

    Science.gov (United States)

    Chen, Ke; Salman, Ahmad

    2011-11-01

    Speech signals convey various yet mixed information ranging from linguistic to speaker-specific information. However, most of acoustic representations characterize all different kinds of information as whole, which could hinder either a speech or a speaker recognition (SR) system from producing a better performance. In this paper, we propose a novel deep neural architecture (DNA) especially for learning speaker-specific characteristics from mel-frequency cepstral coefficients, an acoustic representation commonly used in both speech recognition and SR, which results in a speaker-specific overcomplete representation. In order to learn intrinsic speaker-specific characteristics, we come up with an objective function consisting of contrastive losses in terms of speaker similarity/dissimilarity and data reconstruction losses used as regularization to normalize the interference of non-speaker-related information. Moreover, we employ a hybrid learning strategy for learning parameters of the deep neural networks: i.e., local yet greedy layerwise unsupervised pretraining for initialization and global supervised learning for the ultimate discriminative goal. With four Linguistic Data Consortium (LDC) benchmarks and two non-English corpora, we demonstrate that our overcomplete representation is robust in characterizing various speakers, no matter whether their utterances have been used in training our DNA, and highly insensitive to text and languages spoken. Extensive comparative studies suggest that our approach yields favorite results in speaker verification and segmentation. Finally, we discuss several issues concerning our proposed approach.

  6. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung

    2018-02-01

    Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  7. Adaptive Learning Rule for Hardware-based Deep Neural Networks Using Electronic Synapse Devices

    OpenAIRE

    Lim, Suhwan; Bae, Jong-Ho; Eum, Jai-Ho; Lee, Sungtae; Kim, Chul-Heung; Kwon, Dongseok; Park, Byung-Gook; Lee, Jong-Ho

    2017-01-01

    In this paper, we propose a learning rule based on a back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network (HW-DNN) using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward, backward propagation, as well as weight updates in hardware, is helpful during the implementation of power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron net...

  8. An Innovative Teaching Method To Promote Active Learning: Team-Based Learning

    Science.gov (United States)

    Balasubramanian, R.

    2007-12-01

    Traditional teaching practice based on the textbook-whiteboard-lecture-homework-test paradigm is not very effective in helping students with diverse academic backgrounds achieve higher-order critical thinking skills such as analysis, synthesis, and evaluation. Consequently, there is a critical need for developing a new pedagogical approach to create a collaborative and interactive learning environment in which students with complementary academic backgrounds and learning skills can work together to enhance their learning outcomes. In this presentation, I will discuss an innovative teaching method ("Team-Based Learning" (TBL)) which I recently developed at the National University of Singapore to promote active learning among students with diverse learning abilities in the environmental engineering program. I implemented this new educational activity in a graduate course. Student feedback indicates that this pedagogical approach is appealing to most students, and promotes active and interactive learning in class. Data will be presented to show that the innovative teaching method has contributed to improved student learning and achievement.

  9. Learning in Neural Networks: VLSI Implementation Strategies

    Science.gov (United States)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  10. Image Classification, Deep Learning and Convolutional Neural Networks : A Comparative Study of Machine Learning Frameworks

    OpenAIRE

    Airola, Rasmus; Hager, Kristoffer

    2017-01-01

    The use of machine learning and specifically neural networks is a growing trend in software development, and has grown immensely in the last couple of years in the light of an increasing need to handle big data and large information flows. Machine learning has a broad area of application, such as human-computer interaction, predicting stock prices, real-time translation, and self driving vehicles. Large companies such as Microsoft and Google have already implemented machine learning in some o...

  11. Continual and One-Shot Learning Through Neural Networks with Dynamic External Memory

    DEFF Research Database (Denmark)

    Lüders, Benno; Schläger, Mikkel; Korach, Aleksandra

    2017-01-01

    ...... a new task is learned. This paper takes a step in overcoming this limitation by building on the recently proposed Evolving Neural Turing Machine (ENTM) approach. In the ENTM, neural networks are augmented with an external memory component that they can write to and read from, which allows them to store associations quickly and over long periods of time. The results in this paper demonstrate that the ENTM is able to perform one-shot learning in reinforcement learning tasks without catastrophic forgetting of previously stored associations. Additionally, we introduce a new ENTM default jump mechanism that makes it easier to find unused memory locations and therefore facilitates the evolution of continual learning networks. Our results suggest that augmenting evolving networks with an external memory component is not only a viable mechanism for adaptive behaviors in neuroevolution but also allows these networks......

  12. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.

    Science.gov (United States)

    Walter, Florian; Röhrbein, Florian; Knoll, Alois

    2015-12-01

    The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike Von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview over selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.
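A canonical example of the neurobiological learning rules such chips implement on-die is pair-based spike-timing-dependent plasticity (STDP); the time constants and amplitudes below are illustrative defaults, not values from any particular chip:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: synaptic weight change as a function of the
    spike-time difference dt = t_post - t_pre (milliseconds).

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0)
    depresses. Both effects decay exponentially with |dt|.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0
```

Because the rule depends only on local spike timing, it maps naturally onto hardware where, as the abstract notes, functionality lives in the connectivity and synaptic efficacies rather than in a stored program.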

  13. Arctigenin protects against neuronal hearing loss by promoting neural stem cell survival and differentiation.

    Science.gov (United States)

    Huang, Xinghua; Chen, Mo; Ding, Yan; Wang, Qin

    2017-03-01

    Neuronal hearing loss has become a prevalent health problem. This study focused on the function of arctigenin (ARC) in promoting survival and neuronal differentiation of mouse cochlear neural stem cells (NSCs), and its protection against gentamicin (GMC) induced neuronal hearing loss. Mouse cochlea was used to isolate NSCs, which were subsequently cultured in vitro. The effects of ARC on NSC survival, neurosphere formation, differentiation of NSCs, neurite outgrowth, and neural excitability in neuronal network in vitro were examined. Mechanotransduction ability demonstrated by intact cochlea, auditory brainstem response (ABR), and distortion product optoacoustic emissions (DPOAE) amplitude in mice were measured to evaluate effects of ARC on GMC-induced neuronal hearing loss. ARC increased survival, neurosphere formation, neuron differentiation of NSCs in mouse cochlear in vitro. ARC also promoted the outgrowth of neurites, as well as neural excitability of the NSC-differentiated neuron culture. Additionally, ARC rescued mechanotransduction capacity, restored the threshold shifts of ABR and DPOAE in our GMC ototoxicity murine model. This study supports the potential therapeutic role of ARC in promoting both NSCs proliferation and differentiation in vitro to functional neurons, thus supporting its protective function in the therapeutic treatment of neuropathic hearing loss in vivo. © 2017 Wiley Periodicals, Inc.

  14. Impaired neurogenesis, learning and memory and low seizure threshold associated with loss of neural precursor cell survivin

    Directory of Open Access Journals (Sweden)

    Eisch Amelia

    2010-01-01

    Full Text Available Abstract Background: Survivin is a unique member of the inhibitor of apoptosis protein (IAP) family in that it exhibits antiapoptotic properties and also promotes the cell cycle and mediates mitosis as a chromosome passenger protein. Survivin is highly expressed in neural precursor cells in the brain, yet its function there has not been elucidated. Results: To examine the role of neural precursor cell survivin, we first showed that survivin is normally expressed in periventricular neurogenic regions in the embryo, becoming restricted postnatally to proliferating and migrating NPCs in the key neurogenic sites, the subventricular zone (SVZ) and the subgranular zone (SGZ). We then used a conditional gene inactivation strategy to delete the survivin gene prenatally in those neurogenic regions. Lack of embryonic NPC survivin results in viable, fertile mice (SurvivinCamcre) with reduced numbers of SVZ NPCs, absent rostral migratory stream, and olfactory bulb hypoplasia. The phenotype can be partially rescued, as intracerebroventricular gene delivery of survivin during embryonic development increases olfactory bulb neurogenesis, detected postnatally. SurvivinCamcre brains have fewer cortical inhibitory interneurons, contributing to enhanced sensitivity to seizures, and profound deficits in memory and learning. Conclusions: The findings highlight the critical role that survivin plays during neural development, deficiencies of which dramatically impact on postnatal neural function.

  15. Biologically-inspired On-chip Learning in Pulsed Neural Networks

    DEFF Research Database (Denmark)

    Lehmann, Torsten; Woodburn, Robin

    1999-01-01

    Self-learning chips to implement many popular ANN (artificial neural network) algorithms are very difficult to design. We explain why this is so and say what lessons previous work teaches us in the design of self-learning systems. We offer a contribution to the "biologically-inspired" approach......, explaining what we mean by this term and providing an example of a robust, self-learning design that can solve simple classical-conditioning tasks. We give details of the design of individual circuits to perform component functions, which can then be combined into a network to solve the task. We argue

  16. Learning and Generalisation in Neural Networks with Local Preprocessing

    OpenAIRE

    Kutsia, Merab

    2007-01-01

    We study learning and generalisation ability of a specific two-layer feed-forward neural network and compare its properties to that of a simple perceptron. The input patterns are mapped nonlinearly onto a hidden layer, much larger than the input layer, and this mapping is either fixed or may result from an unsupervised learning process. Such preprocessing of initially uncorrelated random patterns results in the correlated patterns in the hidden layer. The hidden-to-output mapping of the net...

  17. Psychedelics Promote Structural and Functional Neural Plasticity

    Directory of Open Access Journals (Sweden)

    Calvin Ly

    2018-06-01

    Full Text Available Summary: Atrophy of neurons in the prefrontal cortex (PFC) plays a key role in the pathophysiology of depression and related disorders. The ability to promote both structural and functional plasticity in the PFC has been hypothesized to underlie the fast-acting antidepressant properties of the dissociative anesthetic ketamine. Here, we report that, like ketamine, serotonergic psychedelics are capable of robustly increasing neuritogenesis and/or spinogenesis both in vitro and in vivo. These changes in neuronal structure are accompanied by increased synapse number and function, as measured by fluorescence microscopy and electrophysiology. The structural changes induced by psychedelics appear to result from stimulation of the TrkB, mTOR, and 5-HT2A signaling pathways and could possibly explain the clinical effectiveness of these compounds. Our results underscore the therapeutic potential of psychedelics and, importantly, identify several lead scaffolds for medicinal chemistry efforts focused on developing plasticity-promoting compounds as safe, effective, and fast-acting treatments for depression and related disorders. Ly et al. demonstrate that psychedelic compounds such as LSD, DMT, and DOI increase dendritic arbor complexity, promote dendritic spine growth, and stimulate synapse formation. These cellular effects are similar to those produced by the fast-acting antidepressant ketamine and highlight the potential of psychedelics for treating depression and related disorders. Keywords: neural plasticity, psychedelic, spinogenesis, synaptogenesis, depression, LSD, DMT, ketamine, noribogaine, MDMA

  18. Explaining neural signals in human visual cortex with an associative learning model.

    Science.gov (United States)

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.
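The summation account tested against the FFA data can be sketched as a population response that adds a face-expectation unit to a rectified face-surprise unit; the weights and probabilities below are illustrative, not fit to the imaging data:

```python
def ffa_response(stimulus, p_face_expected, w_pred=1.0, w_surprise=2.0):
    """Predictive-coding sketch of an FFA population response.

    The measured signal is modelled as a weighted sum of a face-expectation
    (prediction) unit and a face-surprise (prediction error) unit; surprise
    fires only when a face occurs that was not fully predicted.
    """
    prediction = p_face_expected
    surprise = max(0.0, (1.0 if stimulus == "face" else 0.0) - p_face_expected)
    return w_pred * prediction + w_surprise * surprise

expected_face = ffa_response("face", p_face_expected=0.75)    # mostly predicted
unexpected_face = ffa_response("face", p_face_expected=0.25)  # large surprise term
```

Under this toy parameterization, an unexpected face evokes a larger response than an expected one, and a house stimulus still elicits the expectation component alone, which is the qualitative pattern a pure feature detector cannot produce.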

  19. Using machine learning, neural networks and statistics to predict bankruptcy

    NARCIS (Netherlands)

    Pompe, P.P.M.; Feelders, A.J.; Feelders, A.J.

    1997-01-01

    Recent literature strongly suggests that machine learning approaches to classification outperform "classical" statistical methods. We make a comparison between the performance of linear discriminant analysis, classification trees, and neural networks in predicting corporate bankruptcy. Linear

  20. Media Multitasking and Cognitive, Psychological, Neural, and Learning Differences.

    Science.gov (United States)

    Uncapher, Melina R; Lin, Lin; Rosen, Larry D; Kirkorian, Heather L; Baron, Naomi S; Bailey, Kira; Cantor, Joanne; Strayer, David L; Parsons, Thomas D; Wagner, Anthony D

    2017-11-01

    American youth spend more time with media than any other waking activity: an average of 7.5 hours per day, every day. On average, 29% of that time is spent juggling multiple media streams simultaneously (ie, media multitasking). This phenomenon is not limited to American youth but is paralleled across the globe. Given that a large number of media multitaskers (MMTs) are children and young adults whose brains are still developing, there is great urgency to understand the neurocognitive profiles of MMTs. It is critical to understand the relation between the relevant cognitive domains and underlying neural structure and function. Of equal importance is understanding the types of information processing that are necessary in 21st century learning environments. The present review surveys the growing body of evidence demonstrating that heavy MMTs show differences in cognition (eg, poorer memory), psychosocial behavior (eg, increased impulsivity), and neural structure (eg, reduced volume in anterior cingulate cortex). Furthermore, research indicates that multitasking with media during learning (in class or at home) can negatively affect academic outcomes. Until the direction of causality is understood (whether media multitasking causes such behavioral and neural differences or whether individuals with such differences tend to multitask with media more often), the data suggest that engagement with concurrent media streams should be thoughtfully considered. Findings from such research promise to inform policy and practice on an increasingly urgent societal issue while significantly advancing our understanding of the intersections between cognitive, psychosocial, neural, and academic factors. Copyright © 2017 by the American Academy of Pediatrics.

  1. A neural learning classifier system with self-adaptive constructivism for mobile robot control.

    Science.gov (United States)

    Hurst, Jacob; Bull, Larry

    2006-01-01

    For artificial entities to achieve true autonomy and display complex lifelike behavior, they will need to exploit appropriate adaptable learning algorithms. In this context adaptability implies flexibility guided by the environment at any given time and an open-ended ability to learn appropriate behaviors. This article examines the use of constructivism-inspired mechanisms within a neural learning classifier system architecture that exploits parameter self-adaptation as an approach to realize such behavior. The system uses a rule structure in which each rule is represented by an artificial neural network. It is shown that appropriate internal rule complexity emerges during learning at a rate controlled by the learner and that the structure indicates underlying features of the task. Results are presented in simulated mazes before moving to a mobile robot platform.

  2. The conditions that promote fear learning: prediction error and Pavlovian fear conditioning.

    Science.gov (United States)

    Li, Susan Shi Yuan; McNally, Gavan P

    2014-02-01

A key insight of associative learning theory is that learning depends on the actions of prediction error: a discrepancy between the actual and expected outcomes of a conditioning trial. When positive, such error causes increments in associative strength and, when negative, such error causes decrements in associative strength. Prediction error can act directly on fear learning by determining the effectiveness of the aversive unconditioned stimulus or indirectly by determining the effectiveness, or associability, of the conditioned stimulus. Evidence from a variety of experimental preparations in human and non-human animals suggests that discrete neural circuits code for these actions of prediction error during fear learning. Here we review the circuits and brain regions contributing to the neural coding of prediction error during fear learning and highlight areas of research (safety learning, extinction, and reconsolidation) that may profit from this approach to understanding learning. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
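
    The error-correction idea this abstract describes is usually formalized with the Rescorla-Wagner delta rule. The sketch below is illustrative only (the function name and parameter values are assumptions, not from the paper): associative strength is incremented by a fraction of the positive prediction error on each trial, so learning slows as the outcome becomes well predicted.

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Illustrative delta rule: V grows by alpha * (lam - V) per trial,
    where (lam - V) is the prediction error (actual minus expected
    outcome) and alpha is the associability of the conditioned stimulus."""
    v = 0.0
    history = []
    for _ in range(trials):
        error = lam - v      # positive error -> increment in strength
        v += alpha * error
        history.append(v)
    return v, history

v, history = rescorla_wagner(20)
```

    With a negative error (e.g., omission of an expected outcome, lam = 0), the same rule produces decrements, matching the extinction case the review discusses.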

  3. Supervised neural network modeling: an empirical investigation into learning from imbalanced data with labeling errors.

    Science.gov (United States)

    Khoshgoftaar, Taghi M; Van Hulse, Jason; Napolitano, Amri

    2010-05-01

    Neural network algorithms such as multilayer perceptrons (MLPs) and radial basis function networks (RBFNets) have been used to construct learners which exhibit strong predictive performance. Two data related issues that can have a detrimental impact on supervised learning initiatives are class imbalance and labeling errors (or class noise). Imbalanced data can make it more difficult for the neural network learning algorithms to distinguish between examples of the various classes, and class noise can lead to the formulation of incorrect hypotheses. Both class imbalance and labeling errors are pervasive problems encountered in a wide variety of application domains. Many studies have been performed to investigate these problems in isolation, but few have focused on their combined effects. This study presents a comprehensive empirical investigation using neural network algorithms to learn from imbalanced data with labeling errors. In particular, the first component of our study investigates the impact of class noise and class imbalance on two common neural network learning algorithms, while the second component considers the ability of data sampling (which is commonly used to address the issue of class imbalance) to improve their performances. Our results, for which over two million models were trained and evaluated, show that conclusions drawn using the more commonly studied C4.5 classifier may not apply when using neural networks.
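
    Data sampling, as used in the study's second component, can be as simple as random oversampling of the minority class. The helper below is a generic sketch (the function name and duplication strategy are mine, not the paper's): minority examples are duplicated at random until class sizes match.

```python
import random

def random_oversample(examples, labels, seed=0):
    """Naive random oversampling: duplicate minority-class examples
    until every class has as many examples as the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(examples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        for x in picks:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y
```

    Note that oversampling duplicates any labeling errors in the minority class along with the clean examples, which is one reason the combined noise-plus-imbalance setting studied here is harder than either problem alone.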

  4. Breast Cancer Diagnosis using Artificial Neural Networks with Extreme Learning Techniques

    OpenAIRE

    Chandra Prasetyo Utomo; Aan Kardiana; Rika Yuliwulandari

    2014-01-01

Breast cancer is the second leading cause of death among women. Early detection followed by appropriate cancer treatment can reduce the deadly risk. Medical professionals can make mistakes while identifying a disease. The help of technology such as data mining and machine learning can substantially improve the diagnosis accuracy. Artificial Neural Networks (ANN) have been widely used in intelligent breast cancer diagnosis. However, the standard Gradient-Based Back Propagation Artificial Neural Networks...
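
    The "extreme learning" idea referenced in the title trains only the output layer, in closed form, after fixing random hidden weights. Below is a minimal pure-Python sketch under my own assumptions (tanh hidden units, ridge-regularized normal equations; none of the names or parameters come from the paper), demonstrated on a toy XOR problem rather than clinical data.

```python
import math, random

def solve(A, rhs):
    """Gauss-Jordan elimination with partial pivoting for A x = rhs."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def elm_train(X, y, hidden=20, ridge=1e-3, seed=1):
    """ELM sketch: random input weights and biases are fixed, the tanh
    hidden layer is computed once, and only the output weights are
    solved in closed form (no gradient-based backpropagation)."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    H = [[math.tanh(sum(w[j] * x[j] for j in range(d)) + bi)
          for w, bi in zip(W, b)] for x in X]
    m = len(H)
    A = [[sum(H[k][i] * H[k][j] for k in range(m)) + (ridge if i == j else 0.0)
          for j in range(hidden)] for i in range(hidden)]
    rhs = [sum(H[k][i] * y[k] for k in range(m)) for i in range(hidden)]
    return W, b, solve(A, rhs)

def elm_predict(model, x):
    W, b, beta = model
    h = [math.tanh(sum(w[j] * x[j] for j in range(len(x))) + bi)
         for w, bi in zip(W, b)]
    return sum(hj * bj for hj, bj in zip(h, beta))
```

    Because the output weights come from a single linear solve, training is orders of magnitude faster than iterative backpropagation, which is the practical appeal of the technique for diagnosis pipelines.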

  5. Using Deep Learning Neural Networks To Find Best Performing Audience Segments

    Directory of Open Access Journals (Sweden)

    Anup Badhe

    2015-08-01

Full Text Available Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points need to be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning neural networks have been used in machine learning to apply multiple processing layers to large datasets with multiple dimensions and arrive at a high-level characterization of the data. During a request for an advertisement, and subsequently during the serving of the advertisement on the mobile device, many trackers are fired, collecting a large number of data points. If the user likes the advertisement and clicks on it, another set of trackers provides additional information resulting from the click. This information is aggregated by the ad server and shown in its reporting console. The same information can form the basis of machine learning: feeding it to a deep learning neural network yields audiences that can be targeted based on the product that is advertised.

  6. Self-learning Monte Carlo with deep neural networks

    Science.gov (United States)

    Shen, Huitao; Liu, Junwei; Fu, Liang

    2018-05-01

The self-learning Monte Carlo (SLMC) method is a general algorithm to speed up MC simulations. Its efficiency has been demonstrated in various systems by introducing an effective model to propose global moves in the configuration space. In this paper, we show that deep neural networks can be naturally incorporated into SLMC, and without any prior knowledge can learn the original model accurately and efficiently. Demonstrated in quantum impurity models, we reduce the complexity for a local update from O(β²) in the Hirsch-Fye algorithm to O(β ln β), which is a significant speedup, especially for systems at low temperatures.
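
    The core SLMC trick (independent of the deep-network extension) is to draw a global move from a cheap effective model and correct for its bias in the Metropolis acceptance ratio, so the true distribution is still sampled exactly. The toy below uses independent spins in a field as both the "true" and "effective" models; everything here (names, the energy function, the parameters) is my illustration, not the paper's quantum-impurity setup.

```python
import math, random

def slmc_step(state, h, h_eff, beta, rng):
    """One self-learning MC step: propose a *global* configuration from
    the effective model (independent spins in field h_eff) and accept it
    with a probability that cancels the proposal bias against the true
    model (field h). When h_eff == h the acceptance is exactly 1."""
    n = len(state)
    p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h_eff))  # P(s_i = +1) under E_eff
    proposal = [1 if rng.random() < p_up else -1 for _ in range(n)]

    def energy(s, field):          # E(s) = -field * sum(s) for both models
        return -field * sum(s)

    log_acc = (-beta * (energy(proposal, h) - energy(state, h))
               + beta * (energy(proposal, h_eff) - energy(state, h_eff)))
    accept = min(1.0, math.exp(min(0.0, log_acc)))
    new_state = proposal if rng.random() < accept else state
    return new_state, accept
```

    The better the learned effective model matches the true one, the closer the acceptance stays to 1, which is exactly why learning the original model "accurately and efficiently" translates into faster sampling.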

  7. Minimal-Learning-Parameter Technique Based Adaptive Neural Sliding Mode Control of MEMS Gyroscope

    Directory of Open Access Journals (Sweden)

    Bin Xu

    2017-01-01

Full Text Available This paper investigates an adaptive neural sliding mode controller for MEMS gyroscopes with a minimal-learning-parameter technique. Considering the system uncertainty in the dynamics, a neural network is employed for approximation. The minimal-learning-parameter technique is constructed to decrease the number of update parameters, and in this way the computation burden is greatly reduced. Sliding mode control is designed to cancel the effect of time-varying disturbance. The closed-loop stability analysis is established via a Lyapunov approach. Simulation results are presented to demonstrate the effectiveness of the method.

  8. Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model

    Science.gov (United States)

    Panda, Priyadarshini; Srinivasa, Narayan

    2018-01-01

A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir-based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature actions/movements, enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models. PMID:29551962
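
    The paper's reservoir is spiking; as a rough rate-based stand-in, a leaky echo-state reservoir shows the general mechanism: a fixed random recurrent network whose state carries a temporal "echo" of the input, from which a simple readout can later be trained. All names and constants below are assumptions for illustration, not the authors' architecture.

```python
import math, random

def reservoir_states(inputs, n=50, leak=0.3, seed=0):
    """Run a fixed random leaky reservoir over a 1-D input sequence and
    return the state trajectory (one length-n state per time step).
    Small recurrent weights keep the dynamics contractive, so states
    stay bounded in [-1, 1]."""
    rng = random.Random(seed)
    W = [[rng.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(n)]
    w_in = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    x = [0.0] * n
    states = []
    for u in inputs:
        pre = [sum(W[i][j] * x[j] for j in range(n)) + w_in[i] * u
               for i in range(n)]
        x = [(1.0 - leak) * xi + leak * math.tanh(p)
             for xi, p in zip(x, pre)]
        states.append(list(x))
    return states

states = reservoir_states([0.5, -1.0, 0.25, 1.0])
```

    Only the readout on top of such states is trained, which is what makes reservoir approaches attractive when labeled examples are scarce, as in the 8-videos-per-class regime above.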

  9. Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network.

    Science.gov (United States)

    Del Papa, Bruno; Priesemann, Viola; Triesch, Jochen

    2017-01-01

    Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions - matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model's performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN's spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
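
    The avalanche statistics mentioned above are typically extracted by segmenting population activity into maximal runs of nonzero time bins; an avalanche's size is the total activity within the run. A minimal sketch (function name mine):

```python
def avalanche_sizes(activity):
    """Split a time series of population activity (spike counts per bin)
    into avalanches: maximal runs of nonzero bins. Each avalanche's
    size is the summed activity over its run."""
    sizes, cur = [], 0
    for a in activity:
        if a > 0:
            cur += a
        elif cur:
            sizes.append(cur)
            cur = 0
    if cur:
        sizes.append(cur)
    return sizes
```

    A power-law fit to the histogram of these sizes is the usual criticality signature that the SORN's spontaneous activity is tested against.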

  10. Promoting Learning: What Universities Don't Do

    Science.gov (United States)

    Martin, Brian

    2018-01-01

    Universities seek to promote student learning, but assessment and credentials can undermine students' intrinsic motivation to learn. Findings from research on how people learn, mindsets, expert performance and good health are seldom incorporated into the way universities organise learning experiences.

  11. Differential theory of learning for efficient neural network pattern recognition

    Science.gov (United States)

    Hampshire, John B., II; Vijaya Kumar, Bhagavatula

    1993-09-01

We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.

  12. Outsmarting neural networks: an alternative paradigm for machine learning

    Energy Technology Data Exchange (ETDEWEB)

    Protopopescu, V.; Rao, N.S.V.

    1996-10-01

    We address three problems in machine learning, namely: (i) function learning, (ii) regression estimation, and (iii) sensor fusion, in the Probably and Approximately Correct (PAC) framework. We show that, under certain conditions, one can reduce the three problems above to the regression estimation. The latter is usually tackled with artificial neural networks (ANNs) that satisfy the PAC criteria, but have high computational complexity. We propose several computationally efficient PAC alternatives to ANNs to solve the regression estimation. Thereby we also provide efficient PAC solutions to the function learning and sensor fusion problems. The approach is based on cross-fertilizing concepts and methods from statistical estimation, nonlinear algorithms, and the theory of computational complexity, and is designed as part of a new, coherent paradigm for machine learning.

  13. Design and FPGA-implementation of multilayer neural networks with on-chip learning

    International Nuclear Information System (INIS)

    Haggag, S.S.M.Y

    2008-01-01

Artificial Neural Networks (ANNs) are used in many industrial applications because of their parallel structure, high speed, and ability to give easy solutions to complicated problems. For example, identifying oranges and apples in a sorting machine is easier with a neural network than with image processing techniques. Different software tools exist for designing, training, and testing ANNs, but to use an ANN in industry it must be implemented in hardware outside the computer. Neural networks are artificial systems inspired by the brain's cognitive behavior, which can learn tasks with some degree of complexity, such as signal processing, diagnosis, robotics, image processing, and pattern recognition. Many applications demand high computing power, for which traditional software implementations are not sufficient. This thesis presents the design and FPGA implementation of multilayer neural networks with on-chip learning in re-configurable hardware. Hardware implementation of neural network algorithms is very attractive due to their high performance and ease of parallelization. The architecture proposed herein takes advantage of distinct data paths for the forward and backward propagation stages and a pipelined adaptation of the on-line backpropagation algorithm to significantly improve the performance of the learning phase. The architecture is easily scalable and able to cope with arbitrary network sizes with the same hardware. The implementation targets diagnosis of research reactor accidents, to avoid the risk of occurrence of a nuclear accident. The proposed circuits are implemented using a Xilinx FPGA chip XC40150xv and occupy 73% of the chip's CLBs. The design takes 10.8 μs to reach a decision in the forward propagation, compared with 24 ms for the current software implementation of the RPS. The results show that the proposed architecture leads to a significant speedup compared with high-end software solutions. On

  14. Neural correlates of face gender discrimination learning.

    Science.gov (United States)

    Su, Junzhu; Tan, Qingleng; Fang, Fang

    2013-04-01

Using combined psychophysics and event-related potentials (ERPs), we investigated the effect of perceptual learning on face gender discrimination and probed the neural correlates of the learning effect. Human subjects were trained to perform a gender discrimination task with male or female faces. Before and after training, they were tested with the trained faces and other faces with the same and opposite genders. ERPs responding to these faces were recorded. Psychophysical results showed that training significantly improved subjects' discrimination performance and that the improvement was specific to the trained gender, as well as to the trained identities. The training effect indicates that learning occurs at two levels: the category level (gender) and the exemplar level (identity). ERP analyses showed that the gender and identity learning was associated with N170 latency reduction at the left occipital-temporal area and N170 amplitude reduction at the right occipital-temporal area, respectively. These findings provide evidence for the facilitation model and the sharpening model of neuronal plasticity from visual experience, suggesting a faster processing speed and a sparser representation of faces induced by perceptual learning.

  15. Neural Basis of Reinforcement Learning and Decision Making

    Science.gov (United States)

    Lee, Daeyeol; Seo, Hyojung; Jung, Min Whan

    2012-01-01

    Reinforcement learning is an adaptive process in which an animal utilizes its previous experience to improve the outcomes of future choices. Computational theories of reinforcement learning play a central role in the newly emerging areas of neuroeconomics and decision neuroscience. In this framework, actions are chosen according to their value functions, which describe how much future reward is expected from each action. Value functions can be adjusted not only through reward and penalty, but also by the animal’s knowledge of its current environment. Studies have revealed that a large proportion of the brain is involved in representing and updating value functions and using them to choose an action. However, how the nature of a behavioral task affects the neural mechanisms of reinforcement learning remains incompletely understood. Future studies should uncover the principles by which different computational elements of reinforcement learning are dynamically coordinated across the entire brain. PMID:22462543
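
    The value-function account in this review reduces, in its simplest form, to an incremental update that moves each action's value toward the rewards it yields, plus a choice rule over those values. The bandit demo below is a generic sketch (epsilon-greedy selection and all constants are my assumptions, not the review's models):

```python
import random

def choose_action(Q, epsilon, rng):
    """Epsilon-greedy selection over the current action values:
    mostly exploit the best-valued action, occasionally explore."""
    if rng.random() < epsilon:
        return rng.randrange(len(Q))
    return max(range(len(Q)), key=lambda a: Q[a])

def update_value(Q, action, reward, alpha=0.1):
    """Incremental update: move the chosen action's value toward the
    reward it just produced, scaled by the learning rate alpha."""
    Q[action] += alpha * (reward - Q[action])

# Hypothetical two-armed bandit: arm 1 pays 1.0, arm 0 pays 0.2.
rng = random.Random(0)
Q = [0.0, 0.0]
for _ in range(200):
    a = choose_action(Q, 0.1, rng)
    update_value(Q, a, 1.0 if a == 1 else 0.2)
```

    After enough trials the learned values separate and the choice rule settles on the richer arm, which is the behavioral signature the neural studies discussed above look for in value-coding brain regions.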

  16. Neural correlates of threat perception: neural equivalence of conspecific and heterospecific mobbing calls is learned.

    Directory of Open Access Journals (Sweden)

    Marc T Avey

Full Text Available Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators. Mobbing calls produced in response to smaller, higher-threat predators contain more "D" notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls and heterospecific predator calls that were the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas and that threat can be conveyed by different species signals and that these signals must be learned.

  17. Neural Correlates of Threat Perception: Neural Equivalence of Conspecific and Heterospecific Mobbing Calls Is Learned

    Science.gov (United States)

    Avey, Marc T.; Hoeschele, Marisa; Moscicki, Michele K.; Bloomfield, Laurie L.; Sturdy, Christopher B.

    2011-01-01

Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise [1]–[2]. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators [3]. Mobbing calls produced in response to smaller, higher-threat predators contain more “D” notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators [4]. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls and heterospecific predator calls that were the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas and that threat can be conveyed by different species signals and that these signals must be learned. PMID:21909363

  18. Neural correlates of threat perception: neural equivalence of conspecific and heterospecific mobbing calls is learned.

    Science.gov (United States)

    Avey, Marc T; Hoeschele, Marisa; Moscicki, Michele K; Bloomfield, Laurie L; Sturdy, Christopher B

    2011-01-01

Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators. Mobbing calls produced in response to smaller, higher-threat predators contain more "D" notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls and heterospecific predator calls that were the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas and that threat can be conveyed by different species signals and that these signals must be learned.

  19. Single-Iteration Learning Algorithm for Feed-Forward Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Cogswell, R.; Protopopescu, V.

    1999-07-31

A new methodology for neural learning is presented, whereby only a single iteration is required to train a feed-forward network with near-optimal results. To this aim, a virtual input layer is added to the multi-layer architecture. The virtual input layer is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating direction singular value decompositions is then used to determine precisely the inter-layer synaptic weights. This algorithm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network.

  20. On the relationships between generative encodings, regularity, and learning abilities when evolving plastic artificial neural networks.

    Directory of Open Access Journals (Sweden)

    Paul Tonelli

Full Text Available A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, the most regular networks are statistically those with the best learning abilities.
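
    The synaptic plasticity being evolved here is typically some variant of a Hebbian rule applied during the network's lifetime. A minimal bounded version (the function name, learning rate, and saturation scheme are mine, for illustration only):

```python
def hebbian_step(w, pre, post, eta=0.05, w_max=1.0):
    """One Hebbian update with a saturation bound: the synapse
    strengthens when pre- and post-synaptic activity coincide, and the
    weight is clipped to [-w_max, w_max] so it cannot grow without bound."""
    return min(w_max, max(-w_max, w + eta * pre * post))
```

    In the encodings compared above, it is the parameters of rules like this (and where they apply) that the genome specifies, while the rule itself runs during the operant conditioning task.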

  1. Learning behavior and temporary minima of two-layer neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the occurrence of temporary minima during training of a single-output, two-layer neural network, with learning according to the back-propagation algorithm. A new vector decomposition method is introduced, which simplifies the mathematical analysis of

  2. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.

    Science.gov (United States)

    Tanabe, Hiroki C; Honda, Manabu; Sadato, Norihiro

    2005-07-06

    To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

  3. LEARNING ALGORITHM EFFECT ON MULTILAYER FEED FORWARD ARTIFICIAL NEURAL NETWORK PERFORMANCE IN IMAGE CODING

    Directory of Open Access Journals (Sweden)

    OMER MAHMOUD

    2007-08-01

Full Text Available One of the essential factors that affect the performance of Artificial Neural Networks is the learning algorithm. This paper examines the performance of multilayer feed-forward artificial neural networks in image compression using different learning algorithms. Based on Gradient Descent, Conjugate Gradient, and Quasi-Newton techniques, three different error back-propagation algorithms have been developed for use in training two types of neural networks: a single-hidden-layer network and a three-hidden-layer network. The essence of this study is to investigate the most efficient and effective training methods for use in image compression and its subsequent applications. The obtained results show that the Quasi-Newton based algorithm performs better than the other two algorithms.
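
    The practical difference between the gradient-descent and Quasi-Newton families compared in this paper shows up even on a one-dimensional quadratic: plain gradient descent needs many small steps, while a secant-based quasi-Newton iteration, which estimates curvature from successive gradients, lands on the minimum almost immediately. This is a generic illustration, not the paper's back-propagation variants:

```python
def minimize_gd(grad, x0, lr=0.1, steps=50):
    """Plain gradient descent: many small steps against the gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def minimize_secant(grad, x0, x1, steps=10):
    """1-D quasi-Newton (secant) iteration: curvature is estimated from
    successive gradient differences instead of computed exactly."""
    g0, g1 = grad(x0), grad(x1)
    for _ in range(steps):
        if g1 == g0:
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        g0, g1 = g1, grad(x1)
    return x1

# Toy objective f(x) = (x - 3)^2 with gradient 2(x - 3); minimum at x = 3.
grad = lambda x: 2.0 * (x - 3.0)
```

    The curvature-aware update is why the Quasi-Newton variant converges in far fewer epochs in the image compression experiments, at the cost of extra memory per parameter.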

  4. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    Science.gov (United States)

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the building blocks of deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of them need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learned by optimizing two objectives, reconstruction error and the sparsity of hidden units, simultaneously, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective and that the proposed multiobjective model can learn useful sparse features.
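
    The "reasonable compromise" the authors seek is, in multiobjective terms, a point on the Pareto front of (reconstruction error, sparsity). A minimal dominance filter for two minimization objectives shows the core idea (illustrative only; the paper layers a full evolutionary algorithm on top of this):

```python
def pareto_front(points):
    """Keep the nondominated (error, sparsity) pairs: a point survives
    unless some other, different point is at least as good in both
    objectives simultaneously."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in points)]

front = pareto_front([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```

    Replacing a hand-tuned sparsity constant with a search over this front is what lets the model trade off the two objectives automatically.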

  5. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    Science.gov (United States)

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks has had a strong impact and seen widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.
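
    One of the two building blocks compared above, the RBM trained with contrastive divergence, can be sketched in a few lines. This is a hypothetical, bias-free CD-1 toy on made-up binary data (the simulations in the article use far larger networks and visual-space stimuli); the data layout, sizes, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy binary data with two correlated groups of visible units
V = np.zeros((100, 6))
V[:50, :3] = 1.0
V[50:, 3:] = 1.0

n_vis, n_hid = 6, 3
W = 0.1 * rng.standard_normal((n_vis, n_hid))

def recon_error(W):
    """Mean-field reconstruction error of the data under the current weights."""
    return float(np.mean((sigmoid(sigmoid(V @ W) @ W.T) - V) ** 2))

err_before = recon_error(W)
lr = 0.05
for _ in range(200):
    ph = sigmoid(V @ W)                              # positive phase
    h = (rng.random(ph.shape) < ph).astype(float)    # sample hidden states
    pv = sigmoid(h @ W.T)                            # one Gibbs step back (CD-1)
    ph2 = sigmoid(pv @ W)
    W += lr * (V.T @ ph - pv.T @ ph2) / len(V)       # contrastive divergence update
err_after = recon_error(W)
```

    An autoencoder trained by backpropagation on the same data would play the deterministic counterpart in the comparison the article describes.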

  6. Promoting learning transfer in preceptor preparation.

    Science.gov (United States)

    Finn, Frances L; Chesser-Smyth, Patricia

    2013-01-01

    An understanding of learning transfer principles is essential for professional development educators and managers to ensure that new skills and knowledge learned are applied to practice. This article presents a collaborative project involving the planning, design, and implementation of a preceptor training program for registered nurses. The theories and principles discussed in this article could be applied to a number of different settings and contexts in health care to promote learning transfer in professional development activities.

  7. Neural mechanisms of human perceptual learning: electrophysiological evidence for a two-stage process.

    Science.gov (United States)

    Hamamé, Carlos M; Cosmelli, Diego; Henriquez, Rodrigo; Aboitiz, Francisco

    2011-04-26

    Humans and other animals change the way they perceive the world due to experience. This process has been labeled as perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed. We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity and the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance as reflected by an increase in sensitivity (d') and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment revealing amplitude increments, specific and unspecific to the trained stimulus, in event-related potential (ERP) components N2pc and P3 respectively. P3 unspecific modification can be related to context or task-based learning, while N2pc may be reflecting a more specific attentional-related boosting of target detection. Moreover, bell and U-shaped profiles of oscillatory brain activity in gamma (30-60 Hz) and alpha (8-14 Hz) frequency bands may suggest the existence of two phases for learning acquisition, which can be understood as distinctive optimization mechanisms in stimulus processing. We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.

  8. Humor: a pedagogical tool to promote learning.

    Science.gov (United States)

    Chabeli, M

    2008-09-01

    It has become critical that learners are exposed to varied methods of teaching and assessment that will promote their critical thinking. Humor creates a relaxed atmosphere where learning can be enhanced and appreciated. When learners are relaxed, thinking becomes eminent; an authoritative and tense environment hinders thinking. This paper seeks to explore the perceptions of nurse teacher learners regarding the use of humor as a pedagogical tool to promote learning. A qualitative, exploratory, descriptive and contextual research design was employed (Burns & Grove, 2001:61; Mouton, 1996:103). 130 naive sketches were collected from nurse teacher learners who volunteered to take part in the study (Giorgi in Omery, 1983:52), and follow-up interviews were conducted to verify the findings. A qualitative, open-coding method of content analysis was done (Tesch in Creswell, 1994:155). Measures to ensure trustworthiness of the study were taken in accordance with the protocol of Lincoln and Guba (1985:290-326). The findings of the study will assist nurse educators to create a positive, affective, psychological and social learning environment through the use of humor in a positive manner. Nurse educators will appreciate the fact that integrating humor into the learning content will promote learners' critical thinking and emotional intelligence. Negative humor has a negative impact on learning. Learner nurses who become critical thinkers will be able to be analytical and solve problems amicably in practice.

  9. Learning representations for the early detection of sepsis with deep neural networks.

    Science.gov (United States)

    Kam, Hye Jin; Kim, Ha Young

    2017-10-01

    Sepsis is one of the leading causes of death in intensive care unit patients. Early detection of sepsis is vital because mortality increases as the sepsis stage worsens. This study aimed to develop detection models for the early stage of sepsis using deep learning methodologies, and to compare the feasibility and performance of the new deep learning methodology with those of the regression method with conventional temporal feature extraction. Study group selection adhered to the InSight model. The results of the deep learning-based models and the InSight model were compared. With deep feedforward networks, the area under the ROC curve (AUC) of the models was 0.887 and 0.915 for the InSight and the new feature sets, respectively. For the model with the combined feature set, the AUC was the same as that of the basic feature set (0.915). For the long short-term memory model, only the basic feature set was applied and the AUC improved to 0.929 compared with the existing 0.887 of the InSight model. The contributions of this paper can be summarized in three ways: (i) improved performance without feature extraction using domain knowledge, (ii) verification of the feature extraction capability of deep neural networks through comparison with reference features, and (iii) improved performance over feedforward neural networks by using long short-term memory, a neural network architecture that can learn sequential patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.
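
    The AUC figures quoted above can be reproduced from raw risk scores with a short rank-based routine. A minimal sketch; the scores and labels below are made up for illustration and are not from the study's cohort.

```python
def auc(scores, labels):
    """Probability that a randomly chosen positive case outranks a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical sepsis risk scores: label 1 = developed sepsis, 0 = did not
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # perfect ranking -> 1.0
```

    Comparing models on this rank-based scale, as the study does, ignores calibration and measures only how well each model orders patients by risk.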

  10. Culture in the mind's mirror: how anthropology and neuroscience can inform a model of the neural substrate for cultural imitative learning.

    Science.gov (United States)

    Losin, Elizabeth A Reynolds; Dapretto, Mirella; Iacoboni, Marco

    2009-01-01

    Cultural neuroscience, the study of how cultural experience shapes the brain, is an emerging subdiscipline in the neurosciences. Yet, a foundational question to the study of culture and the brain remains neglected by neuroscientific inquiry: "How does cultural information get into the brain in the first place?" Fortunately, the tools needed to explore the neural architecture of cultural learning - anthropological theories and cognitive neuroscience methodologies - already exist; they are merely separated by disciplinary boundaries. Here we review anthropological theories of cultural learning derived from fieldwork and modeling; since cultural learning theory suggests that sophisticated imitation abilities are at the core of human cultural learning, we focus our review on cultural imitative learning. Accordingly we proceed to discuss the neural underpinnings of imitation and other mechanisms important for cultural learning: learning biases, mental state attribution, and reinforcement learning. Using cultural neuroscience theory and cognitive neuroscience research as our guides, we then propose a preliminary model of the neural architecture of cultural learning. Finally, we discuss future studies needed to test this model and fully explore and explain the neural underpinnings of cultural imitative learning.

  11. Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System

    Science.gov (United States)

    Williams-Hayes, Peggy S.

    2004-01-01

    The NASA F-15 Intelligent Flight Control System project team developed a series of flight control concepts designed to demonstrate neural network-based adaptive controller benefits, with the objective to develop and flight-test control systems using neural network technology to optimize aircraft performance under nominal conditions and stabilize the aircraft under failure conditions. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to baseline aerodynamic derivatives in flight. This open-loop flight test set was performed in preparation for a future phase in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. Examination of the flight data shows that adding flight-identified aerodynamic derivative increments into the simulation improved aircraft pitch handling qualities.

  12. Picasso: A Modular Framework for Visualizing the Learning Process of Neural Network Image Classifiers

    Directory of Open Access Journals (Sweden)

    Ryan Henderson

    2017-09-01

    Full Text Available Picasso is a free open-source (Eclipse Public License web application written in Python for rendering standard visualizations useful for analyzing convolutional neural networks. Picasso ships with occlusion maps and saliency maps, two visualizations which help reveal issues that evaluation metrics like loss and accuracy might hide: for example, learning a proxy classification task. Picasso works with the Tensorflow deep learning framework, and Keras (when the model can be loaded into the Tensorflow backend. Picasso can be used with minimal configuration by deep learning researchers and engineers alike across various neural network architectures. Adding new visualizations is simple: the user can specify their visualization code and HTML template separately from the application code.
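
    An occlusion map of the kind Picasso renders can be sketched independently of any framework: slide a gray patch across the image and record how much the classifier's score drops. The toy `score_fn` below is a stand-in for illustration only, not Picasso's API (which loads Tensorflow/Keras models).

```python
import numpy as np

def occlusion_map(img, score_fn, patch=4):
    """Heatmap of score drops; large values mark regions the classifier relies on."""
    base = score_fn(img)
    H, W = img.shape
    heat = np.zeros((H - patch + 1, W - patch + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = img.mean()  # gray out one patch
            heat[i, j] = base - score_fn(occluded)
    return heat

# Toy "classifier" whose score is the mean brightness of the top-left quadrant
score = lambda im: im[:8, :8].mean()
img = np.zeros((16, 16))
img[:8, :8] = 1.0
heat = occlusion_map(img, score)
```

    A flat heatmap under occlusion is exactly the warning sign mentioned above: the model's score does not actually depend on the object, suggesting it learned a proxy task.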

  13. A stochastic learning algorithm for layered neural networks

    International Nuclear Information System (INIS)

    Bartlett, E.B.; Uhrig, R.E.

    1992-01-01

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architectures before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.
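
    The core of the random optimization method, a Gaussian search vector whose spread adapts with success, can be sketched as follows. The quadratic loss is a toy stand-in for a network training error, and the success/failure multipliers are illustrative assumptions, a crude substitute for the paper's OPDF search and stratified sampling.

```python
import numpy as np

rng = np.random.default_rng(3)

def loss(w):
    # toy stand-in for a neural network training error surface
    return float(np.sum((w - 1.5) ** 2))

w = np.zeros(5)
best = loss(w)
sigma = 1.0                       # spread of the Gaussian search PDF
for _ in range(500):
    cand = w + sigma * rng.standard_normal(w.shape)  # Gaussian search vector
    f = loss(cand)
    if f < best:
        w, best = cand, f
        sigma *= 1.1              # widen the PDF after a success
    else:
        sigma *= 0.99             # narrow it after a failure
```

    Because the method needs only loss evaluations, never gradients, it pairs naturally with the dynamic node architecture scheme, which changes the number of parameters during training.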

  14. Assessing Preschool Teachers' Practices to Promote Self-Regulated Learning

    Science.gov (United States)

    Adagideli, Fahretdin Hasan; Saraç, Seda; Ader, Engin

    2015-01-01

    Recent research reveals that in preschool years, through pedagogical interventions, preschool teachers can and should promote self-regulated learning. The main aim of this study is to develop a self-report instrument to assess preschool teachers' practices to promote self-regulated learning. A pool of 50 items was recruited through literature…

  15. Deep learning for steganalysis via convolutional neural networks

    Science.gov (United States)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis in which features are learned automatically via deep learning models. We propose a customized convolutional neural network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.
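
    Steganalysis networks of this kind typically begin with a fixed high-pass filter that suppresses image content and exposes the faint embedding noise. A minimal sketch of that preprocessing step using the well-known 5x5 KV kernel from the steganalysis literature; the tiny test images are illustrative.

```python
import numpy as np

# Fixed high-pass "KV" kernel used to expose stego noise residuals
KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=float) / 12.0

def conv2d_valid(img, k):
    """Plain 'valid' 2-D convolution (correlation) with a small kernel."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

flat = np.full((16, 16), 128.0)        # smooth content: residual vanishes
res_flat = conv2d_valid(flat, KV)

spot = flat.copy()
spot[8, 8] += 10.0                     # a single perturbed pixel survives the filter
res_spot = conv2d_valid(spot, KV)
```

    Stacking trainable convolutional layers on top of such a residual, rather than handcrafting statistics of it, is the shift in paradigm the abstract describes.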

  16. Motor sequence learning-induced neural efficiency in functional brain connectivity.

    Science.gov (United States)

    Karim, Helmet T; Huppert, Theodore J; Erickson, Kirk I; Wollam, Mariegold E; Sparto, Patrick J; Sejdić, Ervin; VanSwearingen, Jessie M

    2017-02-15

    Previous studies have shown the functional neural circuitry differences before and after an explicitly learned motor sequence task, but have not assessed these changes during the process of motor skill learning. Functional magnetic resonance imaging activity was measured while participants (n=13) were asked to tap their fingers to visually presented sequences in blocks that were either the same sequence repeated (learning block) or random sequences (control block). Motor learning was associated with a decrease in brain activity during learning compared to control. Lower brain activation was noted in the posterior parietal association area and bilateral thalamus during the later periods of learning (not during the control). Compared to the control condition, we found the task-related motor learning was associated with decreased connectivity between the putamen and left inferior frontal gyrus and left middle cingulate brain regions. Motor learning was associated with changes in network activity, spatial extent, and connectivity. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    Science.gov (United States)

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.
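
    The winner-take-all special case mentioned above, fuzzy ART with fast learning, fits in a few lines: inputs are complement coded, categories compete through a choice function, and a vigilance test decides between resonance and committing a new category. A minimal sketch with illustrative parameter values, not the distributed dART model itself.

```python
import numpy as np

def fuzzy_art(inputs, rho=0.9, alpha=0.001):
    """Winner-take-all fuzzy ART with complement coding and fast learning."""
    W = []                                   # one weight vector per committed category
    assignments = []
    for a in inputs:
        I = np.concatenate([a, 1.0 - a])     # complement coding keeps |I| constant
        # rank committed categories by the choice function T_j
        order = sorted(range(len(W)),
                       key=lambda j: -np.minimum(I, W[j]).sum() / (alpha + W[j].sum()))
        for j in order:
            match = np.minimum(I, W[j]).sum() / len(a)
            if match >= rho:                 # vigilance test passed: resonance
                W[j] = np.minimum(I, W[j])   # fast learning
                assignments.append(j)
                break
        else:                                # search exhausted: commit a new category
            W.append(I.copy())
            assignments.append(len(W) - 1)
    return assignments, W

cats, W = fuzzy_art([np.array([0.1]), np.array([0.9]), np.array([0.12])])
```

    The two similar inputs (0.1 and 0.12) resonate with the same category while the dissimilar one commits a new category; replacing this winner-take-all code with graded activations is what the distributed ART models above generalize.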

  18. Towards Ways to Promote Interaction in Digital Learning Spaces

    OpenAIRE

    Olsson , Hanna ,

    2012-01-01

    Part 7: Doctoral Student Papers; International audience; Social learning is dependent on social interactions. I am exploring ways to promote interaction in Digital Learning Spaces. As theoretical framework I use the types of interaction between learner, instructor and content. That learners feel isolated and lonely in DLSs is a problem which comes at high cost for social learning. My aim is to promote social interaction by offering the edentity: a system for making participants visible to eac...

  19. Neural Correlates of Morphology Acquisition through a Statistical Learning Paradigm.

    Science.gov (United States)

    Sandoval, Michelle; Patterson, Dianne; Dai, Huanping; Vance, Christopher J; Plante, Elena

    2017-01-01

    The neural basis of statistical learning as it occurs over time was explored with stimuli drawn from a natural language (Russian nouns). The input reflected the "rules" for marking categories of gendered nouns, without making participants explicitly aware of the nature of what they were to learn. Participants were scanned while listening to a series of gender-marked nouns during four sequential scans, and were tested for their learning immediately after each scan. Although participants were not told the nature of the learning task, they exhibited learning after their initial exposure to the stimuli. Independent component analysis of the brain data revealed five task-related sub-networks. Unlike prior statistical learning studies of word segmentation, this morphological learning task robustly activated the inferior frontal gyrus during the learning period. This region was represented in multiple independent components, suggesting it functions as a network hub for this type of learning. Moreover, the results suggest that subnetworks activated by statistical learning are driven by the nature of the input, rather than reflecting a general statistical learning system.

  20. Inflammatory Th17 cells promote depression-like behavior in mice

    Science.gov (United States)

    Beurel, Eléonore; Harrington, Laurie E.; Jope, Richard S.

    2012-01-01

    Background Recognition of substantial immune-neural interactions is revising dogmas about their insular actions and revealing that immune-neural interactions can substantially impact CNS functions. The inflammatory cytokine interleukin-6 promotes susceptibility to depression and drives production of inflammatory T helper 17 (Th17) T cells, raising the hypothesis that in mouse models Th17 cells promote susceptibility to depression-like behaviors. Methods Behavioral characteristics were measured in male mice administered Th17 cells, CD4+ cells, or vehicle, and in RORγT+/GFP mice or male mice treated with RORγT inhibitor or anti-IL-17A antibodies. Results Mouse brain Th17 cells were elevated by learned helplessness and chronic restraint stress, two common depression-like models. Th17 cell administration promoted learned helplessness in 89% of mice in a paradigm where no vehicle-treated mice developed learned helplessness, and impaired novelty suppressed feeding and social interaction behaviors. Mice deficient in the RORγT transcription factor necessary for Th17 cell production exhibited resistance to learned helplessness, identifying modulation of RORγT as a potential intervention. Treatment with the RORγT inhibitor SR1001, or anti-IL-17A antibodies to abrogate Th17 cell function, reduced Th17-dependent learned helplessness. Conclusions These findings indicate that Th17 cells are increased in the brain during depression-like states, promote depression-like behaviors in mice, and specifically inhibiting the production or function of Th17 cells reduces vulnerability to depression-like behavior, suggesting antidepressant effects may be attained by targeting Th17 cells. PMID:23174342

  1. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language.

    Directory of Open Access Journals (Sweden)

    Bruno Golosio

    Full Text Available Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring them together in a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, which is a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, the meaning of words, or the roles of the different word classes, but only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on the literature on early language assessment, at about the level of a 4-year-old child, and produced 521 output sentences, expressing a broad range of language processing functionalities.

  3. Management Strategies for Promoting Teacher Collective Learning

    Science.gov (United States)

    Cheng, Eric C. K.

    2011-01-01

    This paper aims to validate a theoretical model for developing teacher collective learning by using a quasi-experimental design, and explores the management strategies that would provide a school administrator practical steps to effectively promote collective learning in the school organization. Twenty aided secondary schools in Hong Kong were…

  4. PROMOTING MEANINGFUL LEARNING THROUGH CREATE-SHARE-COLLABORATE

    OpenAIRE

    Sailin, Siti Nazuar; Mahmor, Noor Aida

    2017-01-01

    Students in the 21st century are required to acquire the 4C skills: Critical thinking, Communication, Collaboration and Creativity. These skills can be integrated into teaching and learning through innovative teaching that promotes active and meaningful learning. One way of integrating these skills is through collaborative knowledge creation and sharing. This paper provides an example of meaningful teaching and learning activities designed within the Create-Share-Collaborate instructional...

  5. Dissociable neural representations of reinforcement and belief prediction errors underlie strategic learning.

    Science.gov (United States)

    Zhu, Lusha; Mathewson, Kyle E; Hsu, Ming

    2012-01-31

    Decision-making in the presence of other competitive intelligent agents is fundamental for social and economic behavior. Such decisions require agents to behave strategically, where in addition to learning about the rewards and punishments available in the environment, they also need to anticipate and respond to actions of others competing for the same rewards. However, whereas we know much about strategic learning at both theoretical and behavioral levels, we know relatively little about the underlying neural mechanisms. Here, we show using a multi-strategy competitive learning paradigm that strategic choices can be characterized by extending the reinforcement learning (RL) framework to incorporate agents' beliefs about the actions of their opponents. Furthermore, using this characterization to generate putative internal values, we used model-based functional magnetic resonance imaging to investigate neural computations underlying strategic learning. We found that the distinct notions of prediction errors derived from our computational model are processed in a partially overlapping but distinct set of brain regions. Specifically, we found that the RL prediction error was correlated with activity in the ventral striatum. In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning. These results suggest a model of strategic behavior where learning arises from interaction of dissociable reinforcement and belief-based inputs.
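
    The two learning signals dissociated above can be illustrated side by side: a reinforcement prediction error updates action values, while a belief update (here, simple fictitious-play counting) tracks the opponent's tendencies. This is a hypothetical matching-pennies-style sketch, not the study's actual task, model, or parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Row player earns +1 on a match, -1 otherwise
payoff = np.array([[ 1.0, -1.0],
                   [-1.0,  1.0]])

Q = np.zeros(2)              # action values driven by reinforcement prediction errors
opp_counts = np.ones(2)      # belief about the opponent (fictitious-play counts)
alpha = 0.1

for _ in range(2000):
    a = int(rng.random() < 0.5)          # explore uniformly in this sketch
    o = rng.choice(2, p=[0.7, 0.3])      # a biased opponent to learn about
    r = payoff[a, o]
    Q[a] += alpha * (r - Q[a])           # reinforcement prediction error update
    opp_counts[o] += 1                   # belief-based update

belief = opp_counts / opp_counts.sum()
best_reply = int(np.argmax(payoff @ belief))
```

    The `r - Q[a]` term corresponds to the striatal reinforcement prediction error, while the discrepancy absorbed into the opponent counts plays the role of the belief-based prediction error correlated with rACC activity.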

  6. Growing adaptive machines combining development and learning in artificial neural networks

    CERN Document Server

    Bredeche, Nicolas; Doursat, René

    2014-01-01

    The pursuit of artificial intelligence has been a highly active domain of research for decades, yielding exciting scientific insights and productive new technologies. In terms of generating intelligence, however, this pursuit has yielded only limited success. This book explores the hypothesis that adaptive growth is a means of moving forward. By emulating the biological process of development, we can incorporate desirable characteristics of natural neural systems into engineered designs, and thus move closer towards the creation of brain-like systems. The particular focus is on how to design artificial neural networks for engineering tasks. The book consists of contributions from 18 researchers, ranging from detailed reviews of recent domains by senior scientists, to exciting new contributions representing the state of the art in machine learning research. The book begins with broad overviews of artificial neurogenesis and bio-inspired machine learning, suitable both as an introduction to the domains and as a...

  7. Learning-Related Changes in Adolescents' Neural Networks during Hypothesis-Generating and Hypothesis-Understanding Training

    Science.gov (United States)

    Lee, Jun-Ki; Kwon, Yongju

    2012-01-01

    Fourteen science high school students participated in this study, which investigated neural-network plasticity associated with hypothesis-generating and hypothesis-understanding in learning. The students were divided into two groups and participated in either hypothesis-generating or hypothesis-understanding type learning programs, which were…

  8. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.

    Science.gov (United States)

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, rules that have a theoretical basis and yet can be considered biologically relevant are still lacking. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one of which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), while the other relies on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.
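
    The instantaneous-error idea behind the INST rule can be sketched for a single epoch: wherever the actual spike train misses the target, each weight moves in proportion to its synapse's postsynaptic potential at that moment. A simplified discrete-time illustration, not the paper's exact formulation (no membrane reset, one input spike per afferent, illustrative constants).

```python
import numpy as np

tau = 10.0                                  # PSP decay time constant (ms)
input_times = [10, 30, 50, 55, 90]          # one spike per input neuron (ms)
n_in = len(input_times)
w = np.zeros(n_in)                          # silent network: no output spikes yet

def psp(t, t_f):
    """Causal exponential postsynaptic potential kernel."""
    return np.exp(-(t - t_f) / tau) if t >= t_f else 0.0

target = np.zeros(100)
target[60] = 1.0                            # desired output spike at t = 60 ms

lr, dw = 0.1, np.zeros(n_in)
for t in range(100):
    u = sum(w[i] * psp(t, input_times[i]) for i in range(n_in))
    out = 1.0 if u > 1.0 else 0.0           # threshold crossing emits a spike
    err = target[t] - out                   # instantaneous spike-train error
    for i in range(n_in):
        dw[i] += lr * err * psp(t, input_times[i])
w += dw
```

    With all weights starting at zero the only error occurs at the target time, so the input firing just before it (t = 55) receives the largest increment while the input firing after it (t = 90) receives none, exactly the causal credit assignment the rule is meant to produce; the FILT rule replaces the instantaneous error with a low-pass-filtered version for smoother convergence.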

  9. Asymmetric Variate Generation via a Parameterless Dual Neural Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Simone Fiori

    2008-01-01

Full Text Available In a previous work (S. Fiori, 2006), we proposed a random number generator based on a tunable non-linear neural system, whose learning rule is designed on the basis of a cardinal equation from statistics and whose implementation is based on look-up tables (LUTs). The aim of the present manuscript is to improve the above-mentioned random number generation method by changing the learning principle, while retaining the efficient LUT-based implementation. The new method proposed here proves easier to implement and relaxes some previous limitations.

  10. Deciphering the Role of Sulfonated Unit in Heparin-Mimicking Polymer to Promote Neural Differentiation of Embryonic Stem Cells.

    Science.gov (United States)

    Lei, Jiehua; Yuan, Yuqi; Lyu, Zhonglin; Wang, Mengmeng; Liu, Qi; Wang, Hongwei; Yuan, Lin; Chen, Hong

    2017-08-30

Glycosaminoglycans (GAGs), especially heparin and heparan sulfate (HS), hold great potential for inducing the neural differentiation of embryonic stem cells (ESCs) and have brought new hope for the treatment of neurological diseases. However, the disadvantages of natural heparin/HS, such as the difficulty of isolating them in sufficient amounts, their highly heterogeneous structure, and the risk of immune responses, have limited their further therapeutic applications. Thus, there is a great demand for stable, controllable, and well-defined synthetic alternatives to heparin/HS with more effective biological functions. In this study, based upon a previously proposed unit-recombination strategy, several heparin-mimicking polymers were synthesized by integrating glucosamine-like 2-methacrylamido glucopyranose monomers (MAG) with three sulfonated units in different structural forms, and their effects on cell proliferation, pluripotency, and the differentiation of ESCs were carefully studied. The results showed that all the copolymers had good cytocompatibility and displayed much better bioactivity in promoting the neural differentiation of ESCs than natural heparin; copolymers with different sulfonated units exhibited different levels of promoting ability; among them, the copolymer with 3-sulfopropyl acrylate (SPA) as the sulfonated unit was the most potent in promoting the neural differentiation of ESCs; and the promoting effect depended on the molecular weight and concentration of P(MAG-co-SPA), with the highest levels occurring at intermediate molecular weight and concentration. These results clearly demonstrated that the sulfonated unit in the copolymers played an important role in determining the promoting effect on ESCs' neural differentiation; SPA was identified as the most potent sulfonated unit, giving the copolymer the strongest promoting ability. The possible reason for the sulfonated unit structure being a vital factor influencing the ability of the copolymers…

  11. Optical implementation of neural learning algorithms based on cross-gain modulation in a semiconductor optical amplifier

    Science.gov (United States)

    Li, Qiang; Wang, Zhi; Le, Yansi; Sun, Chonghui; Song, Xiaojia; Wu, Chongqing

    2016-10-01

Neuromorphic engineering has a wide range of applications in the fields of machine learning, pattern recognition, adaptive control, etc. Photonics, characterized by its high speed, wide bandwidth, low power consumption and massive parallelism, is an ideal way to realize ultrafast spiking neural networks (SNNs). Synaptic plasticity is believed to be critical for learning, memory and development in neural circuits. Experimental results have shown that changes of synapses are highly dependent on the relative timing of pre- and postsynaptic spikes. Synaptic plasticity in which presynaptic spikes preceding postsynaptic spikes result in strengthening, while the opposite timing results in weakening, is called the antisymmetric spike-timing-dependent plasticity (STDP) learning rule; synaptic plasticity with the opposite effect under the same conditions is called the antisymmetric anti-STDP learning rule. We proposed and experimentally demonstrated an optical implementation of neural learning algorithms that can achieve both the antisymmetric STDP and anti-STDP learning rules, based on cross-gain modulation (XGM) within a single semiconductor optical amplifier (SOA). The width and height of the potentiation and depression windows can be controlled by adjusting the injection current of the SOA, to mimic the biological antisymmetric STDP and anti-STDP learning rules more realistically. As the injection current increases, the width of the depression and potentiation windows decreases and their height increases, due to the shorter recovery time and higher gain under a stronger injection current. Based on the demonstrated optical STDP circuit, ultrafast learning in optical SNNs can be realized.

  12. Ventral Tegmental Area and Substantia Nigra Neural Correlates of Spatial Learning

    Science.gov (United States)

    Martig, Adria K.; Mizumori, Sheri J. Y.

    2011-01-01

    The ventral tegmental area (VTA) and substantia nigra pars compacta (SNc) may provide modulatory signals that, respectively, influence hippocampal (HPC)- and striatal-dependent memory. Electrophysiological studies investigating neural correlates of learning and memory of dopamine (DA) neurons during classical conditioning tasks have found DA…

  13. The neural coding of feedback learning across child and adolescent development

    NARCIS (Netherlands)

    Peters, S.; Braams, B.R.; Raijmakers, M.E.J.; Koolschijn, P.C.M.P.; Crone, E.A.

    2014-01-01

    The ability to learn from environmental cues is an important contributor to successful performance in a variety of settings, including school. Despite the progress in unraveling the neural correlates of cognitive control in childhood and adolescence, relatively little is known about how these brain

  14. Neural changes associated to procedural learning and automatization process in Developmental Coordination Disorder and/or Developmental Dyslexia.

    Science.gov (United States)

    Biotteau, Maëlle; Péran, Patrice; Vayssière, Nathalie; Tallet, Jessica; Albaret, Jean-Michel; Chaix, Yves

    2017-03-01

Recent theories hypothesize that procedural learning may support the frequent overlap between neurodevelopmental disorders. The neural circuitry supporting procedural learning includes, among others, cortico-cerebellar and cortico-striatal loops. Alteration of these loops may account for the frequent comorbidity between Developmental Coordination Disorder (DCD) and Developmental Dyslexia (DD). The aim of our study was to investigate cerebral changes due to the learning and automatization of a sequence learning task in children with DD, or DCD, or both disorders. fMRI on 48 children (aged 8-12) with DD, DCD or DD + DCD was used to explore their brain activity during procedural tasks, performed either after two weeks of training or in the early stage of learning. Firstly, our results indicate that all children were able to perform the task with the same level of automaticity, but recruited different brain processes to achieve the same performance. Secondly, our fMRI results do not appear to confirm Nicolson and Fawcett's model. The neural correlates recruited for procedural learning by the DD and comorbid groups are very close, while the DCD group presents distinct characteristics. This provides a promising direction for investigating the neural mechanisms associated with procedural learning in neurodevelopmental disorders and for understanding comorbidity. Published by Elsevier Ltd.

  15. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    Science.gov (United States)

    Liles, Charles A.

    2014-01-01

Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Which input parameters have the greatest impact on the model's prediction is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
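The record does not name a specific dimensionality-reduction technique; as one common example, the mapping of a high-dimensional dataset to two plottable dimensions can be sketched with principal component analysis:

```python
import numpy as np

def pca_2d(X):
    """Project a high-dimensional dataset onto its first two principal
    components for plotting. A minimal sketch of one standard
    dimensionality-reduction method (the project itself may use others).
    """
    Xc = X - X.mean(axis=0)                     # center each feature
    # SVD of the centered data gives the principal directions in Vt,
    # ordered by decreasing explained variance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                        # n_samples x 2 coordinates

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                  # 200 samples, 10 features
coords = pca_2d(X)                              # ready for a 2-D scatter plot
```

The first output column captures the most variance by construction, which is what makes the resulting scatter plot informative about dominant input variables.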

  16. Dynamic Learning from Adaptive Neural Control of Uncertain Robots with Guaranteed Full-State Tracking Precision

    Directory of Open Access Journals (Sweden)

    Min Wang

    2017-01-01

Full Text Available A dynamic learning method is developed for an uncertain n-link robot with unknown system dynamics, achieving predefined performance attributes on the link angular position and velocity tracking errors. For a known nonsingular initial robotic condition, performance functions and unconstrained transformation errors are employed to prevent the violation of the full-state tracking error constraints. By combining two independent Lyapunov functions and a radial basis function (RBF) neural network (NN) approximator, a novel and simple adaptive neural control scheme is proposed for the dynamics of the unconstrained transformation errors, which guarantees uniform ultimate boundedness of all the signals in the closed-loop system. In the steady-state control process, RBF NNs are verified to satisfy the partial persistent excitation (PE) condition. Subsequently, an appropriate state transformation is adopted to achieve the accurate convergence of neural weight estimates. The corresponding experienced knowledge on unknown robotic dynamics is stored in NNs with constant neural weight values. Using the stored knowledge, a static neural learning controller is developed to improve the full-state tracking performance. A comparative simulation study on a 2-link robot illustrates the effectiveness of the proposed scheme.

  17. Investigation of Using Analytics in Promoting Mobile Learning Support

    Science.gov (United States)

    Visali, Videhi; Swami, Niraj

    2013-01-01

    Learning analytics can promote pedagogically informed use of learner data, which can steer the progress of technology mediated learning across several learning contexts. This paper presents the application of analytics to a mobile learning solution and demonstrates how a pedagogical sense was inferred from the data. Further, this inference was…

  18. Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas.

    Science.gov (United States)

    Chang, P; Grinband, J; Weinberg, B D; Bardis, M; Khy, M; Cadena, G; Su, M-Y; Cha, S; Filippi, C G; Bota, D; Baldi, P; Poisson, L M; Jain, R; Chow, D

    2018-05-10

The World Health Organization has recently placed new emphasis on the integration of genetic information for gliomas. While tissue sampling remains the criterion standard, noninvasive imaging techniques may provide complementary insight into clinically relevant genetic mutations. Our aim was to train a convolutional neural network to independently predict underlying molecular genetic mutation status in gliomas with high accuracy and identify the most predictive imaging features for each mutation. MR imaging data and molecular information were retrospectively obtained from The Cancer Imaging Archives for 259 patients with either low- or high-grade gliomas. A convolutional neural network was trained to classify isocitrate dehydrogenase 1 ( IDH1 ) mutation status, 1p/19q codeletion, and O6-methylguanine-DNA methyltransferase ( MGMT ) promoter methylation status. Principal component analysis of the final convolutional neural network layer was used to extract the key imaging features critical for successful classification. Classification had high accuracy: IDH1 mutation status, 94%; 1p/19q codeletion, 92%; and MGMT promoter methylation status, 83%. Each genetic category was also associated with distinctive imaging features such as definition of tumor margins, T1 and FLAIR suppression, extent of edema, extent of necrosis, and textural features. Our results indicate that for The Cancer Imaging Archives dataset, machine-learning approaches allow classification of individual genetic mutations of both low- and high-grade gliomas. We show that relevant MR imaging features acquired from an added dimensionality-reduction technique demonstrate that neural networks are capable of learning key imaging components without prior feature selection or human-directed training. © 2018 by American Journal of Neuroradiology.

  19. Biologically plausible learning in neural networks: a lesson from bacterial chemotaxis.

    Science.gov (United States)

    Shimansky, Yury P

    2009-12-01

Learning processes in the brain are usually associated with plastic changes made to optimize the strength of connections between neurons. Although many details related to biophysical mechanisms of synaptic plasticity have been discovered, it is unclear how the concurrent performance of adaptive modifications in a huge number of spatial locations is organized to minimize a given objective function. Since direct experimental observation of even a relatively small subset of such changes is not feasible, computational modeling is an indispensable investigation tool for solving this problem. However, the conventional method of error back-propagation (EBP) employed for optimizing synaptic weights in artificial neural networks is not biologically plausible. This study, based on computational experiments, demonstrated that such optimization can be performed rather efficiently using the same general method that bacteria employ for moving closer to an attractant or away from a repellent. With regard to neural network optimization, this method consists of regulating the probability of an abrupt change in the direction of synaptic weight modification according to the temporal gradient of the objective function. Neural networks utilizing this method (regulation of modification probability, RMP) can be viewed as analogous to swimming in the multidimensional space of their parameters in the flow of biochemical agents carrying information about the optimality criterion. The efficiency of RMP is comparable to that of EBP, while RMP has several important advantages. Since the biological plausibility of RMP is beyond a reasonable doubt, the RMP concept provides a constructive framework for the experimental analysis of learning in natural neural networks.
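The described run-and-tumble scheme can be caricatured in a few lines: keep moving in the current random direction in weight space while the objective improves, and raise the probability of an abrupt direction change ("tumble") when it worsens. The probability values and step size below are illustrative assumptions, not Shimansky's formula.

```python
import numpy as np

def rmp_optimize(loss, w0, steps=2000, step_size=0.05, seed=0):
    """Chemotaxis-style minimisation sketch: the temporal gradient of the
    objective regulates the probability of changing the direction of
    weight modification (the RMP idea described above, simplified)."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    direction = rng.normal(size=w.shape)
    direction /= np.linalg.norm(direction)
    prev = loss(w)
    for _ in range(steps):
        w = w + step_size * direction            # always keep moving
        cur = loss(w)
        # Improvement -> rarely tumble; worsening -> usually tumble.
        p_tumble = 0.1 if cur < prev else 0.9
        if rng.random() < p_tumble:
            direction = rng.normal(size=w.shape)
            direction /= np.linalg.norm(direction)
        prev = cur
    return w

# Example: minimise a quadratic 'training loss' over two weights
loss = lambda w: float((w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2)
w_final = rmp_optimize(loss, [0.0, 0.0])
```

No gradient of the loss with respect to the weights is ever computed, which is the biological-plausibility argument: only the scalar objective and its change over time are needed.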

  20. Automated sleep stage detection with a classical and a neural learning algorithm--methodological aspects.

    Science.gov (United States)

    Schwaibold, M; Schöchlin, J; Bolz, A

    2002-01-01

    For classification tasks in biosignal processing, several strategies and algorithms can be used. Knowledge-based systems allow prior knowledge about the decision process to be integrated, both by the developer and by self-learning capabilities. For the classification stages in a sleep stage detection framework, three inference strategies were compared regarding their specific strengths: a classical signal processing approach, artificial neural networks and neuro-fuzzy systems. Methodological aspects were assessed to attain optimum performance and maximum transparency for the user. Due to their effective and robust learning behavior, artificial neural networks could be recommended for pattern recognition, while neuro-fuzzy systems performed best for the processing of contextual information.

  1. The interchangeability of learning rate and gain in backpropagation neural networks

    NARCIS (Netherlands)

    Thimm, G.; Moerland, P.; Fiesler, E.

    1996-01-01

The backpropagation algorithm is widely used for training multilayer neural networks. In this publication the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights.
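The stated equivalence can be checked numerically for a single sigmoid unit: a network with activation gain g, weight w and learning rate η behaves identically to a gain-1 network with weight g·w and learning rate g²·η. This is a minimal one-unit sketch; the paper proves the general multilayer case.

```python
import numpy as np

def sigmoid(z, gain=1.0):
    return 1.0 / (1.0 + np.exp(-gain * z))

def train_step(w, x, t, eta, gain):
    """One gradient step for a single unit y = sigmoid(gain * w * x)
    under squared error E = 0.5 * (y - t)**2."""
    y = sigmoid(w * x, gain)
    grad = (y - t) * y * (1.0 - y) * gain * x
    return w - eta * grad

# Network A: gain g, learning rate eta.
# Network B: gain 1, weight scaled by g, learning rate scaled by g**2.
g, eta, x, t = 2.5, 0.1, 0.8, 1.0
wA, wB = 0.3, 2.5 * 0.3
for _ in range(50):
    wA = train_step(wA, x, t, eta, g)
    wB = train_step(wB, x, t, eta * g**2, 1.0)

# The invariant wB = g * wA holds at every step, so both networks
# produce identical outputs throughout training.
assert np.isclose(wB, g * wA)
assert np.isclose(sigmoid(wA * x, g), sigmoid(wB * x, 1.0))
```

The gradient picks up one factor of g, and the weight rescaling contributes another, which is why the learning rate must scale by g² for the trajectories to coincide.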

  2. Optimal Search Strategy of Robotic Assembly Based on Neural Vibration Learning

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2011-01-01

    Full Text Available This paper presents implementation of optimal search strategy (OSS in verification of assembly process based on neural vibration learning. The application problem is the complex robot assembly of miniature parts in the example of mating the gears of one multistage planetary speed reducer. Assembly of tube over the planetary gears was noticed as the most difficult problem of overall assembly. The favourable influence of vibration and rotation movement on compensation of tolerance was also observed. With the proposed neural-network-based learning algorithm, it is possible to find extended scope of vibration state parameter. Using optimal search strategy based on minimal distance path between vibration parameter stage sets (amplitude and frequencies of robots gripe vibration and recovery parameter algorithm, we can improve the robot assembly behaviour, that is, allow the fastest possible way of mating. We have verified by using simulation programs that search strategy is suitable for the situation of unexpected events due to uncertainties.

  3. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    Science.gov (United States)

    Giebel, S; Rainer, M

    2010-01-01

Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component into their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back propagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.

  4. A role for adult TLX-positive neural stem cells in learning and behaviour.

    Science.gov (United States)

    Zhang, Chun-Li; Zou, Yuhua; He, Weimin; Gage, Fred H; Evans, Ronald M

    2008-02-21

    Neurogenesis persists in the adult brain and can be regulated by a plethora of external stimuli, such as learning, memory, exercise, environment and stress. Although newly generated neurons are able to migrate and preferentially incorporate into the neural network, how these cells are molecularly regulated and whether they are required for any normal brain function are unresolved questions. The adult neural stem cell pool is composed of orphan nuclear receptor TLX-positive cells. Here, using genetic approaches in mice, we demonstrate that TLX (also called NR2E1) regulates adult neural stem cell proliferation in a cell-autonomous manner by controlling a defined genetic network implicated in cell proliferation and growth. Consequently, specific removal of TLX from the adult mouse brain through inducible recombination results in a significant reduction of stem cell proliferation and a marked decrement in spatial learning. In contrast, the resulting suppression of adult neurogenesis does not affect contextual fear conditioning, locomotion or diurnal rhythmic activities, indicating a more selective contribution of newly generated neurons to specific cognitive functions.

  5. Stochastic sensitivity analysis and Langevin simulation for neural network learning

    International Nuclear Information System (INIS)

    Koda, Masato

    1997-01-01

A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional-derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method.
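The key idea, extracting learning information from the correlation between injected noise and the performance functional rather than from adjoint (back-propagation) equations, can be sketched generically. The code below is a plain weight-perturbation caricature under assumed constants, not Koda's stochastic sensitivity formulation.

```python
import numpy as np

def noise_correlation_step(loss, w, eta=0.5, sigma=0.05, n_probes=200,
                           rng=None):
    """Estimate the gradient of the performance functional from the
    correlation between Gaussian noise injected into the weights and the
    resulting change in the loss, then take a gradient step. No adjoint
    equations are solved."""
    if rng is None:
        rng = np.random.default_rng(0)
    base = loss(w)
    grad_est = np.zeros_like(w)
    for _ in range(n_probes):
        xi = rng.normal(scale=sigma, size=w.shape)   # injected noise
        # Correlate the noise with the performance change it causes
        grad_est += (loss(w + xi) - base) * xi
    grad_est /= n_probes * sigma**2
    return w - eta * grad_est

# Example: descend a quadratic loss using only noisy loss evaluations
loss = lambda w: float(np.sum((w - 1.0) ** 2))
rng = np.random.default_rng(42)
w = np.zeros(3)
for _ in range(20):
    w = noise_correlation_step(loss, w, rng=rng)
```

For small sigma, the averaged correlation (L(w+ξ) − L(w))·ξ/σ² is an unbiased estimate of the loss gradient, which is what makes this noise-only scheme converge.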

  6. Adaptive neural network/expert system that learns fault diagnosis for different structures

    Science.gov (United States)

    Simon, Solomon H.

    1992-08-01

Corporations need better real-time monitoring and control systems to improve productivity by monitoring quality and increasing production flexibility. The innovative technology to achieve this goal is evolving in the form of artificial intelligence and neural networks applied to sensor processing, fusion, and interpretation. By using these advanced AI techniques, we can leverage existing systems and add value to conventional techniques. Neural networks and knowledge-based expert systems can be combined into intelligent sensor systems which provide real-time monitoring, control, evaluation, and fault diagnosis for production systems. Neural network-based intelligent sensor systems are more reliable because they can provide continuous, non-destructive monitoring and inspection. Use of neural networks can result in sensor fusion and the ability to model highly non-linear systems. Improved models can provide a foundation for more accurate performance parameters and predictions. We discuss a research software/hardware prototype which integrates neural networks, expert systems, and sensor technologies and which can adapt across a variety of structures to perform fault diagnosis. The flexibility and adaptability of the prototype in learning two structures is presented. Potential applications are discussed.

  7. Incremental learning of perceptual and conceptual representations and the puzzle of neural repetition suppression.

    Science.gov (United States)

    Gotts, Stephen J

    2016-08-01

    Incremental learning models of long-term perceptual and conceptual knowledge hold that neural representations are gradually acquired over many individual experiences via Hebbian-like activity-dependent synaptic plasticity across cortical connections of the brain. In such models, variation in task relevance of information, anatomic constraints, and the statistics of sensory inputs and motor outputs lead to qualitative alterations in the nature of representations that are acquired. Here, the proposal that behavioral repetition priming and neural repetition suppression effects are empirical markers of incremental learning in the cortex is discussed, and research results that both support and challenge this position are reviewed. Discussion is focused on a recent fMRI-adaptation study from our laboratory that shows decoupling of experience-dependent changes in neural tuning, priming, and repetition suppression, with representational changes that appear to work counter to the explicit task demands. Finally, critical experiments that may help to clarify and resolve current challenges are outlined.

  8. Chaos Synchronization Using Adaptive Dynamic Neural Network Controller with Variable Learning Rates

    Directory of Open Access Journals (Sweden)

    Chih-Hong Kao

    2011-01-01

Full Text Available This paper addresses the synchronization of chaotic gyros with unknown parameters and external disturbance via an adaptive dynamic neural network control (ADNNC) system. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic RBF (DRBF) network to approximate an ideal controller online. The DRBF network can create new hidden neurons online if the input data falls outside the coverage of the existing hidden layer, and prune insignificant hidden neurons online if they become inappropriate. The smooth compensator is designed to compensate for the approximation error between the neural controller and the ideal controller. Moreover, the variable learning rates of the parameter adaptation laws are derived based on a discrete-type Lyapunov function to speed up the convergence rate of the tracking error. Finally, simulation results verified that two identical nonlinear chaotic gyros can be synchronized using the proposed ADNNC scheme.
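The grow-and-prune behaviour of a dynamic RBF network can be illustrated with a toy regression sketch. The growth threshold, pruning criterion, kernel width, and learning rate below are assumptions chosen for illustration, not the paper's exact conditions.

```python
import numpy as np

class DynamicRBF:
    """Toy dynamic RBF network: grow a hidden neuron when an input falls
    outside the coverage of existing centres; prune neurons whose output
    weights become insignificant."""

    def __init__(self, width=0.5, grow_dist=1.0, prune_weight=1e-3):
        self.centres, self.weights = [], []
        self.width, self.grow_dist, self.prune_weight = width, grow_dist, prune_weight

    def _phi(self, x):
        return np.array([np.exp(-np.linalg.norm(x - c) ** 2 / self.width ** 2)
                         for c in self.centres])

    def predict(self, x):
        if not self.centres:
            return 0.0
        return float(np.dot(self.weights, self._phi(x)))

    def update(self, x, y, eta=0.5):
        # Grow: input lies outside the coverage of all hidden neurons
        if (not self.centres or
                min(np.linalg.norm(x - c) for c in self.centres) > self.grow_dist):
            self.centres.append(np.array(x, dtype=float))
            self.weights.append(0.0)
        # Gradient step on the output weights for this sample
        phi = self._phi(x)
        err = y - float(np.dot(self.weights, phi))
        self.weights = list(np.array(self.weights) + eta * err * phi)
        # Prune: drop insignificant hidden neurons (keep at least one)
        keep = [i for i, w in enumerate(self.weights)
                if abs(w) > self.prune_weight or len(self.weights) == 1]
        self.centres = [self.centres[i] for i in keep]
        self.weights = [self.weights[i] for i in keep]

net = DynamicRBF()
for x, y in [(np.array([0.0]), 1.0), (np.array([2.0]), -1.0)]:
    net.update(x, y)
# Two well-separated inputs each recruit their own hidden neuron.
```

The controller in the paper pairs such a network with a compensator and Lyapunov-derived learning rates; this sketch only shows the structural growth and pruning.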

9. Sentiment analysis: a comparison of deep learning neural network algorithm with SVM and naïve Bayes for Indonesian text

    Science.gov (United States)

    Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia

    2018-03-01

Deep learning is a new era of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of the Artificial Neural Network (ANN) that uses more than one hidden layer. A Deep Learning Neural Network has a great ability to recognize patterns in various data types such as pictures, audio, text, and many more. In this paper, the authors try to measure that ability by applying the algorithm to text classification. The classification task herein considers the sentiment content of a text, which is also called sentiment analysis. By using several combinations of text preprocessing and feature extraction techniques, we aim to compare the modelling results of the Deep Learning Neural Network with two other commonly used algorithms, Naïve Bayes and the Support Vector Machine (SVM). This algorithm comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the Deep Learning Neural Network clearly outperforms Naïve Bayes and the SVM and offers a better F1 score, while the feature extraction technique that most improves the modelling results is the bigram.

  10. Have we met before? Neural correlates of emotional learning in women with social phobia.

    Science.gov (United States)

    Laeger, Inga; Keuper, Kati; Heitmann, Carina; Kugel, Harald; Dobel, Christian; Eden, Annuschka; Arolt, Volker; Zwitserlood, Pienie; Dannlowski, Udo; Zwanzger, Peter

    2014-05-01

    Altered memory processes are thought to be a key mechanism in the etiology of anxiety disorders, but little is known about the neural correlates of fear learning and memory biases in patients with social phobia. The present study therefore examined whether patients with social phobia exhibit different patterns of neural activation when confronted with recently acquired emotional stimuli. Patients with social phobia and a group of healthy controls learned to associate pseudonames with pictures of persons displaying either a fearful or a neutral expression. The next day, participants read the pseudonames in the magnetic resonance imaging scanner. Afterwards, 2 memory tests were carried out. We enrolled 21 patients and 21 controls in our study. There were no group differences for learning performance, and results of the memory tests were mixed. On a neural level, patients showed weaker amygdala activation than controls for the contrast of names previously associated with fearful versus neutral faces. Social phobia severity was negatively related to amygdala activation. Moreover, a detailed psychophysiological interaction analysis revealed an inverse correlation between disorder severity and frontolimbic connectivity for the emotional > neutral pseudonames contrast. Our sample included only women. Our results support the theory of a disturbed cortico limbic interplay, even for recently learned emotional stimuli. We discuss the findings with regard to the vigilance-avoidance theory and contrast them to results indicating an oversensitive limbic system in patients with social phobia.

  11. Supervised learning of probability distributions by neural networks

    Science.gov (United States)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
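The effect of following log-likelihood gradients instead of error gradients is easy to exhibit for a single sigmoid output unit: the likelihood (cross-entropy) gradient drops the y(1−y) factor, so a saturated unit still receives a usable learning signal. This is a minimal one-unit sketch consistent with, but much simpler than, the paper's analysis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grads(w, x, t):
    """Gradients w.r.t. w for one sigmoid unit y = sigmoid(w*x), comparing
    the squared-error objective with the log-likelihood objective in
    which the output is treated as a probability."""
    y = sigmoid(w * x)
    g_mse = (y - t) * y * (1.0 - y) * x   # d/dw of 0.5*(y-t)**2
    # d/dw of -[t*log(y) + (1-t)*log(1-y)]: the y*(1-y) factor cancels
    g_ll = (y - t) * x
    return g_mse, g_ll

# A badly initialised, saturated unit: the squared-error gradient nearly
# vanishes, while the likelihood gradient stays informative.
g_mse, g_ll = grads(w=-6.0, x=1.0, t=1.0)
```

The ratio of the two gradients is 1/(y(1−y)), which grows without bound as the unit saturates; this is one concrete sense in which the likelihood formulation "permits more accurate predictions".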

  12. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks

    Science.gov (United States)

    Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.

    2015-01-01

The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate its efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing, where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies.

  13. Promotion and the Scholarship of Teaching and Learning

    Science.gov (United States)

    Vardi, Iris; Quin, Robyn

    2011-01-01

    The move toward recognizing teaching academics has resulted in the Scholarship of Teaching and Learning (SoTL) gaining a greater prominence within the academy, particularly through the academic promotions system. With several Australian universities now providing opportunities for teaching staff who do not engage in research to be promoted, it is…

  14. Neural substrates underlying stimulation-enhanced motor skill learning after stroke.

    Science.gov (United States)

    Lefebvre, Stéphanie; Dricot, Laurence; Laloux, Patrice; Gradkowski, Wojciech; Desfontaines, Philippe; Evrard, Frédéric; Peeters, André; Jamart, Jacques; Vandermeeren, Yves

    2015-01-01

    Motor skill learning is one of the key components of motor function recovery after stroke, especially recovery driven by neurorehabilitation. Transcranial direct current stimulation can enhance neurorehabilitation and motor skill learning in stroke patients. However, the neural mechanisms underlying the retention of stimulation-enhanced motor skill learning involving a paretic upper limb have not been resolved. These neural substrates were explored by means of functional magnetic resonance imaging. Nineteen chronic hemiparetic stroke patients participated in a double-blind, cross-over randomized, sham-controlled experiment with two series. Each series consisted of two sessions: (i) an intervention session during which dual transcranial direct current stimulation or sham was applied during motor skill learning with the paretic upper limb; and (ii) an imaging session 1 week later, during which the patients performed the learned motor skill. The motor skill learning task, called the 'circuit game', involves a speed/accuracy trade-off and consists of moving a pointer controlled by a computer mouse along a complex circuit as quickly and accurately as possible. Relative to the sham series, dual transcranial direct current stimulation applied bilaterally over the primary motor cortex during motor skill learning with the paretic upper limb resulted in (i) enhanced online motor skill learning; (ii) enhanced 1-week retention; and (iii) superior transfer of performance improvement to an untrained task. The 1-week retention's enhancement driven by the intervention was associated with a trend towards normalization of the brain activation pattern during performance of the learned motor skill relative to the sham series. A similar trend towards normalization relative to sham was observed during performance of a simple, untrained task without a speed/accuracy constraint, despite a lack of behavioural difference between the dual transcranial direct current stimulation and sham

  15. Modeling the behavioral substrates of associative learning and memory - Adaptive neural models

    Science.gov (United States)

    Lee, Chuen-Chien

    1991-01-01

    Three adaptive single-neuron models based on neural analogies of behavior modification episodes are proposed, which attempt to bridge the gap between psychology and neurophysiology. The proposed models capture the predictive nature of Pavlovian conditioning, which is essential to the theory of adaptive/learning systems. The models learn to anticipate the occurrence of a conditioned response before the presence of a reinforcing stimulus when training is complete. Furthermore, each model can find the most nonredundant and earliest predictor of reinforcement. The behavior of the models accounts for several aspects of basic animal learning phenomena in Pavlovian conditioning beyond previous related models. Computer simulations show how well the models fit empirical data from various animal learning paradigms.

  16. Learning Orthographic Structure With Sequential Generative Neural Networks.

    Science.gov (United States)

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.

  17. Self-teaching neural network learns difficult reactor control problem

    International Nuclear Information System (INIS)

    Jouse, W.C.

    1989-01-01

    A self-teaching neural network used as an adaptive controller quickly learns to control an unstable reactor configuration. The network models the behavior of a human operator. It is trained by allowing it to operate the reactivity control impulsively. It is punished whenever either the power or fuel temperature stray outside technical limits. Using a simple paradigm, the network constructs an internal representation of the punishment and of the reactor system. The reactor is constrained to small power orbits

  18. Learning Errors by Radial Basis Function Neural Networks and Regularization Networks

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman; Vidnerová, Petra

    2009-01-01

    Roč. 1, č. 2 (2009), s. 49-57 ISSN 2005-4262 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : neural network * RBF networks * regularization * learning Subject RIV: IN - Informatics, Computer Science http://www.sersc.org/journals/IJGDC/vol2_no1/5.pdf

  19. Monetary incentives at retrieval promote recognition of involuntarily learned emotional information.

    Science.gov (United States)

    Yan, Chunping; Li, Yunyun; Zhang, Qin; Cui, Lixia

    2018-03-07

    Previous studies have suggested that the effects of reward on memory processes are affected by certain factors, but it remains unclear whether the effects of reward at retrieval on recognition processes are influenced by emotion. The event-related potential (ERP) technique was used to investigate the combined effect of reward and emotion on memory retrieval and its neural mechanism. The behavioral results indicated that reward at retrieval improved recognition performance under positive and negative emotional conditions. The ERP results indicated that there were significant interactions between reward and emotion in the average amplitude during recognition, and significant reward effects from the frontal to parietal brain areas appeared at 130-800 ms for positive pictures and at 190-800 ms for negative pictures, but there were no significant reward effects for neutral pictures; the reward effect for positive items appeared relatively earlier, starting at 130 ms, whereas that for negative pictures began at 190 ms. These results indicate that monetary incentives at retrieval promote recognition of involuntarily learned emotional information.

  20. Accelerating learning of neural networks with conjugate gradients for nuclear power plant applications

    International Nuclear Information System (INIS)

    Reifman, J.; Vitela, J.E.

    1994-01-01

    The method of conjugate gradients is used to expedite the learning process of feedforward multilayer artificial neural networks and to systematically update both the learning parameter and the momentum parameter at each training cycle. The mechanism for the occurrence of premature saturation of the network nodes observed with the back propagation algorithm is described, suggestions are made to eliminate this undesirable phenomenon, and the reason by which this phenomenon is precluded in the method of conjugate gradients is presented. The proposed method is compared with the standard back propagation algorithm in the training of neural networks to classify transient events in nuclear power plants simulated by the Midland Nuclear Power Plant Unit 2 simulator. The comparison results indicate that the rate of convergence of the proposed method is much greater than that of standard back propagation, that it reduces both the number of training cycles and the CPU time, and that it is less sensitive to the choice of initial weights. The advantages of the method are more noticeable and important for problems where the network architecture consists of a large number of nodes, the training database is large, and a tight convergence criterion is desired.
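
    As a hedged illustration of the core idea in the record above (recomputing both the step size and the momentum-like parameter at every training cycle), the sketch below applies Polak-Ribière conjugate gradients with a backtracking line search to a toy least-squares problem. The data, safeguards, and problem choice are illustrative assumptions of mine, not the paper's reactor-transient classifier.

```python
import numpy as np

def cg_minimize(f, grad, w0, n_iter=100):
    """Polak-Ribiere conjugate gradients with a backtracking line search:
    both the step size (alpha) and the momentum-like parameter (beta) are
    recomputed at every cycle instead of being fixed in advance."""
    w = w0.copy()
    g = grad(w)
    d = -g
    for _ in range(n_iter):
        if g @ d >= 0:             # safeguard: restart with steepest descent
            d = -g
        alpha, fw = 1.0, f(w)      # backtracking (Armijo) line search
        while f(w + alpha * d) > fw + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        w = w + alpha * d
        g_new = grad(w)
        # Polak-Ribiere+ update of the "momentum" parameter
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))
        d = -g_new + beta * d
        g = g_new
    return w

# toy problem: least-squares regression (a quadratic loss, where conjugate
# gradients shine); illustrative data, not the paper's transient events
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true
f = lambda w: 0.5 * np.mean((X @ w - y) ** 2)
grad = lambda w: X.T @ (X @ w - y) / len(y)
w_hat = cg_minimize(f, grad, np.zeros(10))
```

    On this quadratic loss the recovered weights converge to the generating weights; the automatic line search plays the role of the self-tuned learning parameter described in the abstract.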

  1. Neural signals of vicarious extinction learning.

    Science.gov (United States)

    Golkar, Armita; Haaker, Jan; Selbing, Ida; Olsson, Andreas

    2016-10-01

    Social transmission of both threat and safety is ubiquitous, but little is known about the neural circuitry underlying vicarious safety learning. This is surprising given that these processes are critical to flexibly adapt to a changeable environment. To address how the expression of previously learned fears can be modified by the transmission of social information, three conditioned stimuli were used: two (CS+s) were paired with shock and the third was not. During extinction, we held constant the amount of direct, non-reinforced exposure to the CSs (i.e. direct extinction), and critically varied whether another individual, acting as a demonstrator, experienced safety (CS+vic safety) or aversive reinforcement (CS+vic reinf). During extinction, ventromedial prefrontal cortex (vmPFC) responses to the CS+vic reinf increased but decreased to the CS+vic safety. This pattern of vmPFC activity was reversed during a subsequent fear reinstatement test, suggesting a temporal shift in the involvement of the vmPFC. Moreover, only the CS+vic reinf association recovered. Our data suggest that vicarious extinction prevents the return of conditioned fear responses, and that this efficacy is reflected by diminished vmPFC involvement during extinction learning. The present findings may have important implications for understanding how social information influences the persistence of fear memories in individuals suffering from emotional disorders. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  2. Learning to read words in a new language shapes the neural organization of the prior languages.

    Science.gov (United States)

    Mei, Leilei; Xue, Gui; Lu, Zhong-Lin; Chen, Chuansheng; Zhang, Mingxia; He, Qinghua; Wei, Miao; Dong, Qi

    2014-12-01

    Learning a new language entails interactions with one's prior language(s). Much research has shown how native language affects the cognitive and neural mechanisms of a new language, but little is known about whether and how learning a new language shapes the neural mechanisms of prior language(s). In two experiments in the current study, we used an artificial language training paradigm in combination with an fMRI to examine (1) the effects of different linguistic components (phonology and semantics) of a new language on the neural process of prior languages (i.e., native and second languages), and (2) whether such effects were modulated by the proficiency level in the new language. Results of Experiment 1 showed that when the training in a new language involved semantics (as opposed to only visual forms and phonology), neural activity during word reading in the native language (Chinese) was reduced in several reading-related regions, including the left pars opercularis, pars triangularis, bilateral inferior temporal gyrus, fusiform gyrus, and inferior occipital gyrus. Results of Experiment 2 replicated the results of Experiment 1 and further found that semantic training also affected neural activity during word reading in the subjects' second language (English). Furthermore, we found that the effects of the new language were modulated by the subjects' proficiency level in the new language. These results provide critical imaging evidence for the influence of learning to read words in a new language on word reading in native and second languages. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. The Culture of Learning Continuum: Promoting Internal Values in Higher Education

    Science.gov (United States)

    Sagy, Ornit; Kali, Yael; Tsaushu, Masha; Tal, Tali

    2018-01-01

    This study endeavors to identify ways to promote a productive learning culture in higher education. Specifically, we sought to encourage development of internal values in students' culture of learning and examine how this can promote their understanding of scientific content. Set in a high-enrollment undergraduate biology course, we designed a…

  4. Identifying beneficial task relations for multi-task learning in deep neural networks

    DEFF Research Database (Denmark)

    Bingel, Joachim; Søgaard, Anders

    2017-01-01

    Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP...

  5. Employing Wikibook Project in a Linguistics Course to Promote Peer Teaching and Learning

    Science.gov (United States)

    Wang, Lixun

    2016-01-01

    Peer teaching and learning are learner-centred approaches with great potential for promoting effective learning, and the fast development of Web 2.0 technology has opened new doors for promoting peer teaching and learning. In this study, we aim to establish peer teaching and learning among students by employing a Wikibook project in the course…

  6. Using c-Jun to identify fear extinction learning-specific patterns of neural activity that are affected by single prolonged stress.

    Science.gov (United States)

    Knox, Dayan; Stanfield, Briana R; Staib, Jennifer M; David, Nina P; DePietro, Thomas; Chamness, Marisa; Schneider, Elizabeth K; Keller, Samantha M; Lawless, Caroline

    2018-04-02

    The neural circuits via which stress leads to disruptions in fear extinction are often explored in animal stress models. Using the single prolonged stress (SPS) model of post-traumatic stress disorder and the immediate early gene (IEG) c-Fos as a measure of neural activity, we previously identified patterns of neural activity through which SPS disrupts extinction retention. However, none of these stress effects were specific to fear or extinction learning and memory. c-Jun is another IEG that is sometimes regulated in a different manner to c-Fos and could be used to identify emotional learning/memory-specific patterns of neural activity that are sensitive to SPS. Animals were either fear conditioned (CS-fear) or presented with CSs only (CS-only), then subjected to extinction training and testing. c-Jun was then assayed within neural substrates critical for extinction memory. Inhibited c-Jun levels in the hippocampus (Hipp) and enhanced functional connectivity between the ventromedial prefrontal cortex (vmPFC) and basolateral amygdala (BLA) during extinction training were disrupted by SPS in the CS-fear group only. As a result, these effects were specific to emotional learning/memory. SPS also disrupted inhibited Hipp c-Jun levels, enhanced BLA c-Jun levels, and altered functional connectivity among the vmPFC, BLA, and Hipp during extinction testing in the CS-fear and CS-only groups. As a result, these effects were not specific to emotional learning/memory. Our findings suggest that SPS disrupts neural activity specific to extinction memory, but may also disrupt the retention of fear extinction by mechanisms that do not involve emotional learning/memory. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Relabeling exchange method (REM) for learning in neural networks

    Science.gov (United States)

    Wu, Wen; Mammone, Richard J.

    1994-02-01

    The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation of the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.

  8. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    Science.gov (United States)

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales, for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
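
    The Jacobian-based analysis summarized above can be illustrated numerically. The following sketch is my own construction (not the authors' code): it estimates the largest Lyapunov exponent of a discrete-time random network x_{t+1} = tanh(W x_t) by propagating a tangent vector through the per-step Jacobians and renormalizing. A synaptic gain above the critical value yields a positive exponent (chaos); a small gain yields a negative one (steady state).

```python
import numpy as np

def largest_lyapunov(W, x0, n_steps=3000):
    """Average log growth rate of a tangent vector pushed through the
    Jacobians J_t = diag(1 - tanh(W x_t)^2) W of x_{t+1} = tanh(W x_t)."""
    x = x0.copy()
    v = np.ones(len(x0)) / np.sqrt(len(x0))   # tangent vector
    log_growth = 0.0
    for _ in range(n_steps):
        pre = W @ x
        J = (1.0 - np.tanh(pre) ** 2)[:, None] * W   # Jacobian at current state
        v = J @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm                                     # renormalize each step
        x = np.tanh(pre)
    return log_growth / n_steps

rng = np.random.default_rng(0)
n = 200
W_chaos = rng.normal(scale=2.0 / np.sqrt(n), size=(n, n))   # gain g = 2 > 1: chaotic
W_stable = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))  # gain g = 0.5 < 1: stable
lam_chaos = largest_lyapunov(W_chaos, rng.normal(size=n))
lam_stable = largest_lyapunov(W_stable, rng.normal(size=n))
```

    In the abstract's terms, a learning rule that contracts the synaptic gain moves lam from positive toward zero and below, tracing the chaos-to-steady-state route the paper analyzes.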

  9. From phonemes to images : levels of representation in a recurrent neural model of visually-grounded language learning

    NARCIS (Netherlands)

    Gelderloos, L.J.; Chrupala, Grzegorz

    2016-01-01

    We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover

  10. Neural Pattern Similarity in the Left IFG and Fusiform Is Associated with Novel Word Learning

    Directory of Open Access Journals (Sweden)

    Jing Qu

    2017-08-01

    Full Text Available Previous studies have revealed that greater neural pattern similarity across repetitions is associated with better subsequent memory. In this study, we used an artificial language training paradigm and representational similarity analysis to examine whether neural pattern similarity across repetitions before training was associated with post-training behavioral performance. Twenty-four native Chinese speakers were trained to learn a logographic artificial language for 12 days and behavioral performance was recorded using the word naming and picture naming tasks. Participants were scanned while performing a passive viewing task before training, after 4-day training and after 12-day training. Results showed that pattern similarity in the left pars opercularis (PO) and fusiform gyrus (FG) before training was negatively associated with reaction time (RT) in both word naming and picture naming tasks after training. These results suggest that neural pattern similarity is an effective neurofunctional predictor of novel word learning in addition to word memory.

  11. Neural Pattern Similarity in the Left IFG and Fusiform Is Associated with Novel Word Learning

    Science.gov (United States)

    Qu, Jing; Qian, Liu; Chen, Chuansheng; Xue, Gui; Li, Huiling; Xie, Peng; Mei, Leilei

    2017-01-01

    Previous studies have revealed that greater neural pattern similarity across repetitions is associated with better subsequent memory. In this study, we used an artificial language training paradigm and representational similarity analysis to examine whether neural pattern similarity across repetitions before training was associated with post-training behavioral performance. Twenty-four native Chinese speakers were trained to learn a logographic artificial language for 12 days and behavioral performance was recorded using the word naming and picture naming tasks. Participants were scanned while performing a passive viewing task before training, after 4-day training and after 12-day training. Results showed that pattern similarity in the left pars opercularis (PO) and fusiform gyrus (FG) before training was negatively associated with reaction time (RT) in both word naming and picture naming tasks after training. These results suggest that neural pattern similarity is an effective neurofunctional predictor of novel word learning in addition to word memory. PMID:28878640

  12. Critical-Inquiry-Based-Learning: Model of Learning to Promote Critical Thinking Ability of Pre-service Teachers

    Science.gov (United States)

    Prayogi, S.; Yuanita, L.; Wasis

    2018-01-01

    This study aimed to develop the Critical-Inquiry-Based-Learning (CIBL) learning model to promote the critical thinking (CT) ability of preservice teachers. The CIBL learning model was developed to meet the criteria of validity, practicality, and effectiveness. Validation of the model involved 4 expert validators through a focus group discussion (FGD). The CIBL learning model was declared valid for promoting CT ability, with a validity level (Va) of 4.20 and reliability (r) of 90.1% (very reliable). The practicality of the model was evaluated during an implementation involving 17 preservice teachers. The CIBL learning model was declared practical, as measured by learning feasibility (LF) with very good criteria (LF score = 4.75). The effectiveness of the model was evaluated from the improvement in CT ability after the implementation of the model. CT ability was evaluated using a scoring technique adapted from the Ennis-Weir Critical Thinking Essay Test. The average CT score on the pretest was -1.53 (uncritical criteria), whereas on the posttest it was 8.76 (critical criteria), with an N-gain score of 0.76 (high criteria). Based on the results of this study, it can be concluded that the developed CIBL learning model is feasible for promoting the CT ability of preservice teachers.

  13. Genetic deletion of Rnd3 in neural stem cells promotes proliferation via upregulation of Notch signaling.

    Science.gov (United States)

    Dong, Huimin; Lin, Xi; Li, Yuntao; Hu, Ronghua; Xu, Yang; Guo, Xiaojie; La, Qiong; Wang, Shun; Fang, Congcong; Guo, Junli; Li, Qi; Mao, Shanping; Liu, Baohui

    2017-10-31

    Rnd3, a Rho GTPase, is involved in the inhibition of actin cytoskeleton dynamics through the Rho kinase-dependent signaling pathway. We previously demonstrated that mice with genetic deletion of Rnd3 developed a markedly larger brain compared with wild-type mice. Here, we demonstrate that Rnd3 knockout mice developed an enlarged subventricular zone, and we identify a novel role for Rnd3 as an inhibitor of Notch signaling in neural stem cells. Rnd3 deficiency, both in vivo and in vitro, resulted in increased levels of Notch intracellular domain protein. This led to enhanced Notch signaling and promotion of aberrant neural stem cell growth, thereby resulting in a larger subventricular zone and a markedly larger brain. Inhibition of Notch activity abrogated this aberrant neural stem cell growth.

  14. Identification and prediction of dynamic systems using an interactively recurrent self-evolving fuzzy neural network.

    Science.gov (United States)

    Lin, Yang-Yin; Chang, Jyh-Yeong; Lin, Chin-Teng

    2013-02-01

    This paper presents a novel recurrent fuzzy neural network, called an interactively recurrent self-evolving fuzzy neural network (IRSFNN), for prediction and identification of dynamic systems. The recurrent structure in an IRSFNN is formed by external loops and internal feedback, feeding the rule firing strength of each rule back to itself and to the other rules. The consequent part of the IRSFNN is of a Takagi-Sugeno-Kang (TSK) or functional-link-based type. The proposed IRSFNN employs a functional link neural network (FLNN) in the consequent part of the fuzzy rules to promote the mapping ability. Unlike a TSK-type fuzzy neural network, the FLNN in the consequent part is a nonlinear function of the input variables. An IRSFNN's learning starts with an empty rule base, and all of the rules are generated and learned online through simultaneous structure and parameter learning. An online clustering algorithm is effective in generating fuzzy rules. The consequent parameters are updated by a variable-dimensional Kalman filter algorithm. The premise and recurrent parameters are learned through a gradient descent algorithm. We test the IRSFNN on the prediction and identification of dynamic plants and compare it to other well-known recurrent FNNs. The proposed model obtains enhanced performance results.

  15. Application of artificial neural network with extreme learning machine for economic growth estimation

    Science.gov (United States)

    Milačić, Ljubiša; Jović, Srđan; Vujović, Tanja; Miljković, Jovica

    2017-01-01

    The purpose of this research is to develop and apply the artificial neural network (ANN) with extreme learning machine (ELM) to forecast gross domestic product (GDP) growth rate. The economic growth forecasting was analyzed based on agriculture, manufacturing, industry and services value added in GDP. The results were compared with ANN with back propagation (BP) learning approach since BP could be considered as conventional learning methodology. The reliability of the computational models was assessed based on simulation results and using several statistical indicators. Based on results, it was shown that ANN with ELM learning methodology can be applied effectively in applications of GDP forecasting.
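
    For readers unfamiliar with ELM, the defining trick is that the input-to-hidden weights are random and fixed, so only the output weights need to be solved, in closed form. The sketch below is a minimal illustration on hypothetical toy data (fitting a sine curve), not the GDP series used in the record.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme Learning Machine: input-to-hidden weights are random and
    stay fixed; the output weights are solved in closed form via the
    Moore-Penrose pseudoinverse (no iterative back propagation)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, fixed hidden weights
    b = rng.normal(size=n_hidden)                # random, fixed hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# hypothetical toy regression: learn y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
err = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

    Because training reduces to one pseudoinverse, ELM fits in milliseconds, which is the speed advantage over iterative BP that the abstract alludes to.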

  16. Differences between Neural Activity in Prefrontal Cortex and Striatum during Learning of Novel Abstract Categories

    OpenAIRE

    Antzoulatos, Evan G.; Miller, Earl K.

    2011-01-01

    Learning to classify diverse experiences into meaningful groups, like categories, is fundamental to normal cognition. To understand its neural basis, we simultaneously recorded from multiple electrodes in the lateral prefrontal cortex and dorsal striatum, two interconnected brain structures critical for learning. Each day, monkeys learned to associate novel, abstract dot-based categories with a right vs. left saccade. Early on, when they could acquire specific stimulus-response associations, ...

  17. Three-dimensional neural net for learning visuomotor coordination of a robot arm.

    Science.gov (United States)

    Martinetz, T M; Ritter, H J; Schulten, K J

    1990-01-01

    An extension of T. Kohonen's (1982) self-organizing mapping algorithm together with an error-correction scheme based on the Widrow-Hoff learning rule is applied to develop a learning algorithm for the visuomotor coordination of a simulated robot arm. Learning occurs by a sequence of trial movements without the need for an external teacher. Using input signals from a pair of cameras, the closed-loop robot arm system is able to reduce its positioning error to about 0.3% of the linear dimensions of its work space. This is achieved by choosing the connectivity of a three-dimensional lattice consisting of the units of the neural net.
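
    The combination described above, a self-organizing lattice for the input space plus Widrow-Hoff error correction for the outputs stored at each node, can be sketched in miniature. This is a simplified 2D stand-in of my own (a 10x10 lattice learning a hypothetical scalar "motor command" f(x) = x0 + x1), not the paper's 3D visuomotor setup.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = 10                                   # 10 x 10 lattice (2D stand-in for the 3D net)
pos = np.array([(i, j) for i in range(grid) for j in range(grid)], float)
w_in = rng.random((grid * grid, 2))         # input weights: self-organizing (Kohonen) rule
w_out = np.zeros(grid * grid)               # output weights: Widrow-Hoff error correction

f = lambda x: x[0] + x[1]                   # hypothetical "motor command" to be learned

T = 20000
for t in range(T):
    x = rng.random(2)                       # random training stimulus in the unit square
    winner = np.argmin(np.sum((w_in - x) ** 2, axis=1))
    frac = t / T                            # annealed learning rate and neighborhood width
    eps = 0.5 * (0.02 / 0.5) ** frac
    sigma = 3.0 * (0.3 / 3.0) ** frac
    h = np.exp(-np.sum((pos - pos[winner]) ** 2, axis=1) / (2 * sigma ** 2))
    w_in += (eps * h)[:, None] * (x - w_in)          # Kohonen update of the lattice
    w_out += eps * h * (f(x) - w_out)                # Widrow-Hoff update of the outputs

# evaluate: the winner's stored output should approximate the target map
test_x = rng.random((500, 2))
target = np.array([f(x) for x in test_x])
pred = np.array([w_out[np.argmin(np.sum((w_in - x) ** 2, axis=1))] for x in test_x])
err = np.mean(np.abs(pred - target))
```

    As in the paper, no external teacher supplies the lattice layout; only the output values are corrected against the observed error.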

  18. Dynamics of neural cryptography.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
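
    The bidirectional synchronization of tree parity machines described above can be sketched directly. The following is a minimal illustration of the public key-exchange dynamics (K = 3 hidden units, weight bound L = 3) with a Hebbian update applied only when the two outputs agree; parameter choices and the tie-breaking convention are my own assumptions.

```python
import numpy as np

K, N, L = 3, 100, 3   # hidden units, inputs per unit, weight bound

def tpm_output(w, x):
    """sigma: sign of each hidden unit's local field; tau: their product."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1                 # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    """Only units that agree with the total output learn; weights stay in [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

rng = np.random.default_rng(1)
wA = rng.integers(-L, L + 1, size=(K, N))  # partner A's secret weights
wB = rng.integers(-L, L + 1, size=(K, N))  # partner B's secret weights

steps = 0
while steps < 100_000 and not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))   # public random input
    sA, tA = tpm_output(wA, x)
    sB, tB = tpm_output(wB, x)
    if tA == tB:                           # learn only when the public outputs agree
        hebbian_update(wA, x, sA, tA)
        hebbian_update(wB, x, sB, tB)
    steps += 1
```

    The synchronized weight matrices then serve as the shared key; a unidirectional attacker, who cannot influence the exchanged bits, synchronizes far more slowly, which is the security argument analyzed in the abstract.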

  19. Dynamics of neural cryptography

    International Nuclear Information System (INIS)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-01-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible

  1. A neural fuzzy controller learning by fuzzy error propagation

    Science.gov (United States)

    Nauck, Detlef; Kruse, Rudolf

    1992-01-01

    In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This extends our earlier work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its share of the error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task knowing only the global state and the fuzzy error.
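
    The rule-wise error distribution can be sketched as follows. This is a minimal, hypothetical illustration of the idea (a signed error shared among rules in proportion to firing strength), not the authors' controller: the triangular membership shapes, the two-rule base, and the restriction to tuning only rule conclusions are simplifications made for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical two-rule base: IF state-is-negative THEN u = conclusions["neg"];
# IF state-is-positive THEN u = conclusions["pos"].
antecedents = {"neg": (-2.0, -1.0, 0.5), "pos": (-0.5, 1.0, 2.0)}
conclusions = {"neg": -0.5, "pos": 0.5}
lr = 0.1

def control(x):
    """Weighted-average defuzzification over rule conclusions."""
    mu = {r: tri(x, *antecedents[r]) for r in antecedents}
    s = sum(mu.values()) or 1.0
    u = sum(mu[r] * conclusions[r] for r in mu) / s
    return u, mu

def adapt(x, u_desired):
    """Propagate the (signed) error back; each rule takes blame in
    proportion to its firing strength and tunes its conclusion."""
    u, mu = control(x)
    fuzzy_error = u_desired - u
    s = sum(mu.values()) or 1.0
    for r in mu:
        conclusions[r] += lr * fuzzy_error * mu[r] / s

for _ in range(200):
    adapt(0.8, 1.0)       # state 0.8 should drive the output toward 1.0

u, _ = control(0.8)       # converges close to the desired control action
```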

  2. Failing to learn from negative prediction errors: Obesity is associated with alterations in a fundamental neural learning mechanism.

    Science.gov (United States)

    Mathar, David; Neumann, Jane; Villringer, Arno; Horstmann, Annette

    2017-10-01

    Prediction errors (PEs) encode the difference between expected and actual action outcomes in the brain via dopaminergic modulation. Integration of these learning signals ensures efficient behavioral adaptation. Obesity has recently been linked to altered dopaminergic fronto-striatal circuits, thus implying impairments in cognitive domains that rely on their integrity. 28 obese and 30 lean human participants performed an implicit stimulus-response learning paradigm inside an fMRI scanner. Computational modeling and psycho-physiological interaction (PPI) analyses were used to assess PE-related learning and associated functional connectivity. We show that human obesity is associated with insufficient incorporation of negative PEs into behavioral adaptation, even in a non-food context, suggesting differences in a fundamental neural learning mechanism. Obese subjects were less efficient in using negative PEs to improve implicit learning performance, despite proper coding of PEs in the striatum. We further observed lower functional coupling between the ventral striatum and supplementary motor area in obese subjects subsequent to negative PEs. Importantly, the strength of functional coupling predicted task performance and negative PE utilization. These findings show that obesity is linked to insufficient behavioral adaptation specifically in response to negative PEs, and to associated alterations in function and connectivity within the fronto-striatal system. Recognition of neural differences as a central characteristic of obesity hopefully paves the way to rethinking established intervention strategies: differential behavioral sensitivity to negative and positive PEs should be considered when designing intervention programs. Measures relying on penalization of unwanted behavior may prove less effective in obese subjects than alternative approaches.
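
    The reported asymmetry can be illustrated with a simple Rescorla-Wagner-style learner that uses separate learning rates for positive and negative PEs. This sketch is not the authors' computational model; the rule, reward probability, and learning rates are arbitrary illustrative choices.

```python
import random

def run_learner(alpha_pos, alpha_neg, p_reward=0.2, trials=2000, seed=1):
    """Track the value of an action whose true reward probability is low;
    a learner that under-weights negative PEs overestimates that value."""
    rng = random.Random(seed)
    v = 0.5
    for _ in range(trials):
        r = 1.0 if rng.random() < p_reward else 0.0
        pe = r - v                                   # prediction error
        alpha = alpha_pos if pe > 0 else alpha_neg   # asymmetric weighting
        v += alpha * pe
    return v

balanced = run_learner(alpha_pos=0.1, alpha_neg=0.1)
blunted = run_learner(alpha_pos=0.1, alpha_neg=0.02)  # weak negative-PE use
```

    With blunted negative-PE learning the value estimate settles well above the true reward rate, mirroring the failure to adjust behavior after negative outcomes.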

  3. Spiking neural networks for handwritten digit recognition-Supervised learning and network optimization.

    Science.gov (United States)

    Kulkarni, Shruti R; Rajendran, Bipin

    2018-07-01

    We demonstrate supervised learning in Spiking Neural Networks (SNNs) for the problem of handwritten digit recognition using the spike-triggered Normalized Approximate Descent (NormAD) algorithm. Our network, which employs neurons operating at sparse biological spike rates below 300 Hz, achieves a classification accuracy of 98.17% on the MNIST test database with four times fewer parameters compared to the state-of-the-art. We present several insights from extensive numerical experiments regarding optimization of learning parameters and network configuration to improve its accuracy. We also describe a number of strategies to optimize the SNN for implementation in memory- and energy-constrained hardware, including approximations in computing the neuronal dynamics and reduced precision in storing the synaptic weights. Experiments reveal that even with 3-bit synaptic weights, the classification accuracy of the designed SNN does not degrade beyond 1% as compared to the floating-point baseline. Further, the proposed SNN, which is trained based on precise spike timing information, outperforms an equivalent non-spiking artificial neural network (ANN) trained using back-propagation, especially at low bit precision. Thus, our study shows the potential for realizing efficient neuromorphic systems that use spike-based information encoding and learning for real-world applications.
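
    The reduced-precision strategy (3-bit synaptic weights) can be sketched with a toy leaky integrate-and-fire neuron. This is an illustrative approximation, not the paper's NormAD-trained network: the weights, spike trains, leak factor, and threshold are made up for the example.

```python
def quantize(w, bits=3, w_max=1.0):
    """Uniform quantization of a weight onto a signed fixed-point grid."""
    levels = 2 ** (bits - 1) - 1          # 3 bits -> integer levels in [-3, 3]
    step = w_max / levels
    q = max(-levels, min(levels, round(w / step)))
    return q * step

def lif_spike_count(weights, spikes, v_th=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron driven by binary input spike trains."""
    v, count = 0.0, 0
    for t in range(len(spikes[0])):
        v = leak * v + sum(w * s[t] for w, s in zip(weights, spikes))
        if v >= v_th:                     # spike and reset
            count += 1
            v = 0.0
    return count

weights = [0.31, -0.12, 0.55, 0.27]       # toy full-precision weights
qweights = [quantize(w) for w in weights]
spikes = [[1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
          [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
          [1, 0, 1, 0, 0, 1, 1, 0, 0, 1],
          [0, 0, 1, 1, 0, 0, 1, 0, 1, 0]]
full = lif_spike_count(weights, spikes)
low = lif_spike_count(qweights, spikes)
```

    Each quantized weight deviates from its full-precision value by at most half a quantization step, which is why, at network scale, low-precision storage can cost so little accuracy.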

  4. Neural dynamics of learning sound-action associations.

    Directory of Open Access Journals (Sweden)

    Adam McNamara

    Full Text Available A motor component is prerequisite to any communicative act, as one must inherently move to communicate. To learn to make a communicative act, the brain must be able to dynamically associate arbitrary percepts with the neural substrate underlying the prerequisite motor activity. We aimed to investigate whether brain regions involved in complex gestures (ventral pre-motor cortex, Brodmann Area 44) mediate associations between novel abstract auditory stimuli and novel gestural movements. In a functional magnetic resonance imaging (fMRI) study we asked participants to learn associations between previously unrelated novel sounds and meaningless gestures inside the scanner. We used functional connectivity analysis to eliminate the often-present confound of 'strategic covert naming' when dealing with BA44 and to rule out effects of non-specific reductions in signal. Brodmann Area 44, a region incorporating Broca's region, showed strong, bilateral, negative correlation of the BOLD (blood oxygen level dependent) response with learning of sound-action associations during data acquisition. The left inferior parietal lobule (l-IPL), bilateral loci in and around visual area V5, the right orbital frontal gyrus, right hippocampus, left parahippocampus, right head of caudate, right insula, and left lingual gyrus also showed decreases in BOLD response with learning. Concurrent with these decreases in BOLD response, a psychophysiological interaction (PPI) analysis revealed increasing connectivity between areas of the imaged network, as well as with the right middle frontal gyrus, as learning performance rose. The increasing connectivity therefore occurs within an increasingly energy-efficient network as learning proceeds. The strongest learning-related connectivity between regions was found when analysing the BA44 and l-IPL seeds. The results clearly show that BA44 and l-IPL are dynamically involved in linking gesture and sound and therefore provide evidence that one of…

  5. Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks

    OpenAIRE

    Shen, Li; Lin, Zhouchen; Huang, Qingming

    2015-01-01

    Learning deeper convolutional neural networks has become a trend in recent years. However, much empirical evidence suggests that performance improvement cannot be gained by simply stacking more layers. In this paper, we consider the issue from an information-theoretical perspective and propose a novel method, Relay Backpropagation, which encourages the propagation of effective information through the network during the training stage. By virtue of the method, we achieved the first place in ILSVRC 2015...

  6. Neural network representation and learning of mappings and their derivatives

    Science.gov (United States)

    White, Halbert; Hornik, Kurt; Stinchcombe, Maxwell; Gallant, A. Ronald

    1991-01-01

    Discussed here are recent theorems proving that artificial neural networks are capable of approximating an arbitrary mapping and its derivatives as accurately as desired. This fact forms the basis for further results establishing the learnability of the desired approximations, using results from non-parametric statistics. These results have potential applications in robotics, chaotic dynamics, control, and sensitivity analysis. An example involving learning the transfer function and its derivatives for a chaotic map is discussed.

  7. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate biological neural activity. Interestingly, many "engineering" prospects in ANN research have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, prototype learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as the "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two challenging real-life tasks, namely static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbors.

  8. Possible promotion of neuronal differentiation in fetal rat brain neural progenitor cells after sustained exposure to static magnetism.

    Science.gov (United States)

    Nakamichi, Noritaka; Ishioka, Yukichi; Hirai, Takao; Ozawa, Shusuke; Tachibana, Masaki; Nakamura, Nobuhiro; Takarada, Takeshi; Yoneda, Yukio

    2009-08-15

    We have previously shown significant potentiation of Ca(2+) influx mediated by N-methyl-D-aspartate receptors, along with decreased microtubule-associated protein-2 (MAP2) expression, in hippocampal neurons cultured under static magnetism without cell death. In this study, we investigated the effects of static magnetism on the functionality of neural progenitor cells endowed to proliferate for self-replication and differentiate into neuronal, astroglial, and oligodendroglial lineages. Neural progenitor cells were isolated from embryonic rat neocortex and hippocampus, followed by culture under static magnetism at 100 mT and subsequent determination of the number of cells immunoreactive for a marker protein of particular progeny lineages. Static magnetism not only significantly decreased proliferation of neural progenitor cells without affecting cell viability, but also promoted differentiation into cells immunoreactive for MAP2, with a concomitant decrease in those immunoreactive for an astroglial marker, irrespective of the presence of differentiation inducers. In neural progenitors cultured under static magnetism, a significant increase was seen in mRNA expression of several activator-type proneural genes, such as Mash1, Math1, and Math3, together with decreased mRNA expression of the repressor-type Hes5. These results suggest that sustained static magnetism could suppress proliferation for self-renewal and facilitate differentiation into neurons through promoted expression of activator-type proneural genes by progenitor cells in fetal rat brain.

  9. Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them

    Science.gov (United States)

    Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.

    2011-01-01

    Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…

  10. Uncovering the neural mechanisms underlying learning from tests.

    Directory of Open Access Journals (Sweden)

    Xiaonan L Liu

    Full Text Available People learn better when re-study opportunities are replaced with tests. While researchers have begun to speculate on why testing is superior to study, few studies have directly examined the neural underpinnings of this effect. In this fMRI study, participants engaged in a study phase to learn arbitrary word pairs, followed by a cued recall test (recall of the second half of the pair when cued with the first word of the pair), re-study of each pair, and finally another cycle of cued recall tests. Brain activation patterns during the first test (recall of the studied pairs) predict performance on the second test. Importantly, while subsequent memory analyses of encoding trials also predict later accuracy, the brain regions involved in predicting later memory success are more extensive for activity during retrieval (testing) than during encoding (study). Those additional regions that predict subsequent memory based on their activation at test but not at encoding may be key to understanding the basis of the testing effect.

  11. Using repetitive transcranial magnetic stimulation to study the underlying neural mechanisms of human motor learning and memory.

    Science.gov (United States)

    Censor, Nitzan; Cohen, Leonardo G

    2011-01-01

    In the last two decades, there has been a rapid development in the research of the physiological brain mechanisms underlying human motor learning and memory. While conventional memory research performed on animal models uses intracellular recordings, microinfusion of protein inhibitors to specific brain areas and direct induction of focal brain lesions, human research has so far utilized predominantly behavioural approaches and indirect measurements of neural activity. Repetitive transcranial magnetic stimulation (rTMS), a safe non-invasive brain stimulation technique, enables the study of the functional role of specific cortical areas by evaluating the behavioural consequences of selective modulation of activity (excitation or inhibition) on memory generation and consolidation, contributing to the understanding of the neural substrates of motor learning. Depending on the parameters of stimulation, rTMS can also facilitate learning processes, presumably through purposeful modulation of excitability in specific brain regions. rTMS has also been used to gain valuable knowledge regarding the timeline of motor memory formation, from initial encoding to stabilization and long-term retention. In this review, we summarize insights gained using rTMS on the physiological and neural mechanisms of human motor learning and memory. We conclude by suggesting possible future research directions, some with direct clinical implications.

  12. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified by neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the difference between reciprocal connections as a new variable, we can express the learning dynamics as if Ising-model spins were interacting with each other, as in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find tendencies of autonomous neural network regulation.
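
    The loss of balance between reciprocal connections can be illustrated with a two-neuron toy model. This is a deliberately simplified sketch of a temporally asymmetric Hebbian rule with positive feedback, not the paper's model: the firing statistics, update sizes, and clipping are illustrative choices.

```python
import random

def reciprocal_pair(steps=5000, dw=0.01, seed=3):
    """Two neurons with reciprocal connections driven only by spontaneous
    firings. A causal pre-before-post pair potentiates that direction and
    depresses the reverse one, so the stronger direction wins."""
    rng = random.Random(seed)
    w_ab = w_ba = 0.5                       # weights clipped to [0, 1]
    for _ in range(steps):
        if rng.random() < 0.5:              # neuron a fires spontaneously
            if rng.random() < w_ab:         # ...and drives b: a-before-b pair
                w_ab = min(1.0, w_ab + dw)  # potentiate causal direction
                w_ba = max(0.0, w_ba - dw)  # depress the reverse direction
        else:                               # neuron b fires spontaneously
            if rng.random() < w_ba:
                w_ba = min(1.0, w_ba + dw)
                w_ab = max(0.0, w_ab - dw)
    return w_ab, w_ba

w_ab, w_ba = reciprocal_pair()
asymmetry = abs(w_ab - w_ba)    # the "difference" variable of the abstract
```

    Starting from a balanced pair, the difference variable drifts away from zero and the connection ends up effectively unidirectional, which is the instability the abstract describes.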

  13. Statistical Discriminability Estimation for Pattern Classification Based on Neural Incremental Attribute Learning

    DEFF Research Database (Denmark)

    Wang, Ting; Guan, Sheng-Uei; Puthusserypady, Sadasivan

    2014-01-01

    Feature ordering is a significant data preprocessing method in Incremental Attribute Learning (IAL), a novel machine learning approach which gradually trains features according to a given order. Previous research has shown that, similar to feature selection, feature ordering is also important based...... estimation. Moreover, a criterion that summarizes all the produced values of AD is employed with a GA (Genetic Algorithm)-based approach to obtain the optimum feature ordering for classification problems based on neural networks by means of IAL. Compared with the feature ordering obtained by other approaches...

  14. Neural Control and Adaptive Neural Forward Models for Insect-like, Energy-Efficient, and Adaptable Locomotion of Walking Machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast...... on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models...... allow the machine to autonomously adapt its locomotion to deal with a change of terrain, losing of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show...

  15. Promoting sustainable living in the borderless world through blended learning platforms

    Directory of Open Access Journals (Sweden)

    Khar Thoe Ng

    2013-11-01

    Full Text Available Student-centred learning approaches like collaborative learning are needed to facilitate meaningful learning among self-motivated lifelong learners within educational institutions through inter-organizational Open and Distance Learning (ODL) approaches. The purpose of this study is to develop blended learning platforms to promote sustainable living, building on an e-hub with sub-portals in SEARCH to facilitate activities such as "Education for Sustainable Development" (ESD), webinars, authentic learning, and the role of m-/e-learning. Survey questionnaires and a mixed-methods research approach with mixed modes of data analysis were used, including survey findings on in-service teachers' understanding of and attitudes towards ESD and three essential skills for sustainable living. Case studies were reported from the telecollaborative project "Disaster Risk Reduction Education" (DR RED) in Malaysia, Germany, and the Philippines. These activities were organized internationally to facilitate communication through e-platforms among participants across national borders, using digital tools to build relationships, promote students' Higher Order Thinking (HOT) skills, and foster their innate ability to learn independently.

  16. Construction of Neural Networks for Realization of Localized Deep Learning

    Directory of Open Access Journals (Sweden)

    Charles K. Chui

    2018-05-01

    Full Text Available The subject of deep learning has recently attracted users of machine learning from various disciplines, including: medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines. However, theoretical development of deep learning is still at its infancy. The objective of this paper is to introduce a deep neural network (also called deep-net) approach to localized manifold learning, with each hidden layer endowed with a specific learning task. For the purpose of illustration, we only focus on deep-nets with three hidden layers, with the first layer for dimensionality reduction, the second layer for bias reduction, and the third layer for variance reduction. A feedback component is also designed to deal with outliers. The main theoretical result in this paper is the order O(m^{-2s/(2s+d)}) of approximation of the regression function with regularity s, in terms of the number m of sample points, where the (unknown) manifold dimension d replaces the dimension D of the sampling (Euclidean) space for shallow nets.

  17. Transfer Learning with Convolutional Neural Networks for SAR Ship Recognition

    Science.gov (United States)

    Zhang, Di; Liu, Jia; Heng, Wang; Ren, Kaijun; Song, Junqiang

    2018-03-01

    Ship recognition is the backbone of marine surveillance systems. Recent deep learning methods, e.g. Convolutional Neural Networks (CNNs), have shown high performance on optical images. Learning CNNs, however, requires a large number of annotated samples to estimate their numerous model parameters, which prevents their application to Synthetic Aperture Radar (SAR) images due to the limited annotated training samples. Transfer learning has been a promising technique for applications with limited data. To this end, a novel SAR ship recognition method based on CNNs with transfer learning has been developed. In this work, we first start with a CNN model that has been trained in advance on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database. Next, based on the knowledge gained from this image recognition task, we fine-tune the CNN on a new task to recognize three types of ships in the OpenSARShip database. The experimental results show that our proposed approach clearly increases the recognition rate compared with merely applying CNNs. In addition, compared to existing methods, the proposed method proves to be very competitive and can learn discriminative features directly from training data instead of requiring manual pre-specification or pre-selection.
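
    The fine-tuning idea (reuse pretrained features, retrain only the classifier on the small new dataset) can be sketched framework-free. Here a frozen random projection stands in for the pretrained MSTAR convolutional layers, and a synthetic two-class set replaces OpenSARShip, so this is a schematic illustration only, not the paper's pipeline.

```python
import math
import random

rng = random.Random(0)

# Stand-in for pretrained convolutional layers: a frozen random ReLU projection.
D_IN, D_FEAT = 8, 16
frozen = [[rng.gauss(0, 1) for _ in range(D_IN)] for _ in range(D_FEAT)]

def features(x):
    """Frozen feature extractor (never updated during fine-tuning)."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in frozen]

def sample(label):
    """Toy two-class data standing in for the small annotated target set."""
    centre = 1.0 if label else -1.0
    return [centre + rng.gauss(0, 0.3) for _ in range(D_IN)], label

data = [sample(i % 2) for i in range(60)]

# Fine-tune only the classification head (logistic regression by SGD).
head = [0.0] * D_FEAT
bias = 0.0
lr = 0.05
for _ in range(200):
    for x, y in data:
        f = features(x)
        z = sum(w * fi for w, fi in zip(head, f)) + bias
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y                       # gradient of logistic loss w.r.t. z
        head = [w - lr * g * fi for w, fi in zip(head, f)]
        bias -= lr * g

def predict(x):
    z = sum(w * fi for w, fi in zip(head, features(x))) + bias
    return 1 if z > 0 else 0

correct = sum(predict(x) == y for x, y in data)
```

    Because only the small head is trained, far fewer labeled samples are needed than for training the full network, which is the point of transfer learning in the limited-data SAR setting.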

  18. Differential neural substrates of working memory and cognitive skill learning in healthy young volunteers

    International Nuclear Information System (INIS)

    Cho, Sang Soo; Lee, Eun Ju; Yoon, Eun Jin; Kim, Yu Kyeong; Lee, Won Woo; Kim, Sang Eun

    2005-01-01

    It is known that different neural circuits are involved in working memory and cognitive skill learning, which represent explicit and implicit memory functions, respectively. In the present study, we investigated the metabolic correlates of working memory and cognitive skill learning with correlation analysis of FDG PET images. Fourteen right-handed healthy subjects (age, 24 ± 2 yr; 5 males and 9 females) underwent brain FDG PET and neuropsychological testing. A two-back task and a weather prediction task were used for the evaluation of working memory and cognitive skill learning, respectively. Correlation between regional glucose metabolism and cognitive task performance was examined using SPM99. A significant positive correlation between 2-back task performance and regional glucose metabolism was found in the prefrontal regions and superior temporal gyri bilaterally. In the first term of the weather prediction task, task performance correlated positively with glucose metabolism in the bilateral prefrontal areas, left middle temporal and posterior cingulate gyri, and left thalamus. In the second and third terms of the task, the correlation was found in the prefrontal areas, superior temporal and anterior cingulate gyri bilaterally, right insula, left parahippocampal gyrus, and right caudate nucleus. We identified the neural substrates that are related to performance of working memory and cognitive skill learning. These results indicate that brain regions associated with the explicit memory system are recruited in early periods of cognitive skill learning, but additional brain regions including the caudate nucleus are involved in late periods of cognitive skill learning.

  19. Differential neural substrates of working memory and cognitive skill learning in healthy young volunteers

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sang Soo; Lee, Eun Ju; Yoon, Eun Jin; Kim, Yu Kyeong; Lee, Won Woo; Kim, Sang Eun [Seoul National Univ. College of Medicine, Seoul (Korea, Republic of)

    2005-07-01

    It is known that different neural circuits are involved in working memory and cognitive skill learning, which represent explicit and implicit memory functions, respectively. In the present study, we investigated the metabolic correlates of working memory and cognitive skill learning with correlation analysis of FDG PET images. Fourteen right-handed healthy subjects (age, 24 ± 2 yr; 5 males and 9 females) underwent brain FDG PET and neuropsychological testing. A two-back task and a weather prediction task were used for the evaluation of working memory and cognitive skill learning, respectively. Correlation between regional glucose metabolism and cognitive task performance was examined using SPM99. A significant positive correlation between 2-back task performance and regional glucose metabolism was found in the prefrontal regions and superior temporal gyri bilaterally. In the first term of the weather prediction task, task performance correlated positively with glucose metabolism in the bilateral prefrontal areas, left middle temporal and posterior cingulate gyri, and left thalamus. In the second and third terms of the task, the correlation was found in the prefrontal areas, superior temporal and anterior cingulate gyri bilaterally, right insula, left parahippocampal gyrus, and right caudate nucleus. We identified the neural substrates that are related to performance of working memory and cognitive skill learning. These results indicate that brain regions associated with the explicit memory system are recruited in early periods of cognitive skill learning, but additional brain regions including the caudate nucleus are involved in late periods of cognitive skill learning.

  20. Promotion of self-regulated learning in classrooms : investigating frequency, quality, and consequences for student performance

    NARCIS (Netherlands)

    Kistner, Saskia; Rakoczy, Katrin; Otto, Barbara; Dignath -van Ewijk, Charlotte; Buettner, Gerhard; Klieme, Eckhard

    An implication of the current research on self-regulation is to implement the promotion of self-regulated learning in schools. Teachers can promote self-regulated learning either directly by teaching learning strategies or indirectly by arranging a learning environment that enables students to…

  1. DeepNet: An Ultrafast Neural Learning Code for Seismic Imaging

    International Nuclear Information System (INIS)

    Barhen, J.; Protopopescu, V.; Reister, D.

    1999-01-01

    A feed-forward multilayer neural net is trained to learn the correspondence between seismic data and well logs. The introduction of a virtual input layer, connected to the nominal input layer through a special nonlinear transfer function, enables ultrafast (single-iteration), near-optimal training of the net using numerical algebraic techniques. A unique computer code, named DeepNet, has been developed that has achieved, in actual field demonstrations, results unattainable to date with industry-standard tools.

  2. Deciphering the Cognitive and Neural Mechanisms Underlying ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Deciphering the Cognitive and Neural Mechanisms Underlying Auditory Learning. This project seeks to understand the brain mechanisms necessary for people to learn to perceive sounds. Neural circuits and learning. The research team will test people with and without musical training to evaluate their capacity to learn ...

  3. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
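
    A minimal kernel TD(0) value estimator (a simplification of KTD(λ) with λ = 0) on a toy 5-state chain illustrates the kernel expansion and the TD update; the Gaussian kernel used here is strictly positive definite, as the convergence observation above requires. The environment, step size, and kernel width are illustrative choices, not the paper's experimental setup.

```python
import math

def gauss_kernel(x, y, sigma=0.5):
    """Gaussian kernel: strictly positive definite."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

class KernelTD0:
    """Value function represented as a growing kernel expansion over
    visited states, updated by the temporal difference error."""
    def __init__(self, eta=0.1, gamma=0.9):
        self.centers, self.coeffs = [], []
        self.eta, self.gamma = eta, gamma

    def value(self, x):
        return sum(a * gauss_kernel(x, c)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, x, r, x_next, terminal):
        bootstrap = 0.0 if terminal else self.gamma * self.value(x_next)
        delta = r + bootstrap - self.value(x)   # temporal difference error
        self.centers.append(x)                  # new kernel unit at this state
        self.coeffs.append(self.eta * delta)

agent = KernelTD0()
# Toy 5-state chain MRP: 0 -> 1 -> 2 -> 3 -> terminal, reward 1 on the
# final transition; true values are gamma**3, gamma**2, gamma, 1.
for _ in range(300):
    for s in range(4):
        agent.update(float(s), 1.0 if s == 3 else 0.0,
                     float(s + 1), terminal=(s == 3))

values = [agent.value(float(s)) for s in range(4)]
```

    The learned values increase toward the rewarded end of the chain and approach the discounted true values, showing the policy-evaluation behavior described in the abstract on a problem small enough to check by hand.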

  4. Neural principles of memory and a neural theory of analogical insight

    Science.gov (United States)

    Lawson, David I.; Lawson, Anton E.

    1993-12-01

    Grossberg's principles of neural modeling are reviewed and extended to provide a neural-level theory to explain how analogies greatly increase the rate of learning and can, in fact, make learning and retention possible. In terms of memory, the key point is that the mind is able to recognize and recall when it is able to match sensory input from new objects, events, or situations with past memory records of similar objects, events, or situations. When a match occurs, an adaptive resonance is set up in which the synaptic strengths of neurons are increased; thus a long-term record of the new input is formed in memory. Systems of neurons called outstars and instars are presumably the underlying units that enable this to occur. Analogies can greatly facilitate learning and retention because they activate the outstars (i.e., the cells that are sampling the to-be-learned pattern) and cause the neural activity to grow exponentially by forming feedback loops. This increased activity ensures the boost in synaptic strengths of neurons, thus causing storage and retention in long-term memory (i.e., learning).
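
    The outstar idea (a sampling cell's synaptic weights converge to the sampled activity pattern) can be sketched as a gated decay toward the pattern. This is a highly simplified rendering of Grossberg's outstar learning law, with an arbitrary pattern and learning rate chosen for the example.

```python
def outstar_learn(pattern, steps=100, lr=0.2):
    """Outstar learning sketch: while the source cell is active, each
    synapse moves toward the activity of the border cell it samples,
    storing the pattern as a long-term memory trace."""
    w = [0.0] * len(pattern)
    for _ in range(steps):
        source_active = 1.0                      # sampling (gating) signal
        w = [wi + lr * source_active * (p - wi)  # gated decay toward pattern
             for wi, p in zip(w, pattern)]
    return w

stored = outstar_learn([0.8, 0.2, 0.5])   # weights converge to the pattern
```

    Repeated sampling drives the weight vector exponentially toward the sampled pattern, which is the sense in which the outstar forms a long-term record of its input.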

  5. Electroacupuncture Promotes Proliferation of Amplifying Neural Progenitors and Preserves Quiescent Neural Progenitors from Apoptosis to Alleviate Depressive-Like and Anxiety-Like Behaviours

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2014-01-01

    Full Text Available The present study was designed to investigate the effects of electroacupuncture (EA) on depressive-like and anxiety-like behaviours and on neural progenitors in the hippocampal dentate gyrus (DG) in a chronic unpredictable stress (CUS) rat model of depression. After being exposed to a CUS procedure for 2 weeks, rats were subjected to EA treatment, which was performed on acupoints Du-20 (Bai-Hui) and GB-34 (Yang-Ling-Quan), once every other day for 15 consecutive days (8 treatments in total, each lasting 30 min). The behavioural tests (i.e., forced swimming test, elevated plus-maze test, and open-field entries test) revealed that EA alleviated the depressive-like and anxiety-like behaviours of the stressed rats. Immunohistochemical results showed that proliferative (BrdU-positive) cells in the EA group significantly outnumbered those in the Model group. Further, the results showed that EA significantly promoted the proliferation of amplifying neural progenitors (ANPs) and simultaneously inhibited the apoptosis of quiescent neural progenitors (QNPs). In summary, the mechanism underlying the antidepressant-like effects of EA is associated with enhanced ANP proliferation and the preservation of QNPs from apoptosis.

  6. Convolutional Neural Network Based on Extreme Learning Machine for Maritime Ships Recognition in Infrared Images.

    Science.gov (United States)

    Khellal, Atmane; Ma, Hongbin; Fei, Qing

    2018-05-09

    The success of Deep Learning models, notably convolutional neural networks (CNNs), makes them the favored solution for object recognition systems in both visible and infrared domains. However, the lack of training data in the case of maritime ship recognition leads to poor performance due to overfitting. In addition, the back-propagation algorithm used to train CNNs is very slow and requires tuning many hyperparameters. To overcome these weaknesses, we introduce a new approach fully based on the Extreme Learning Machine (ELM) to learn useful CNN features and perform fast and accurate classification, which is suitable for infrared-based recognition systems. The proposed approach combines an ELM-based learning algorithm to train the CNN for discriminative feature extraction and an ELM-based ensemble for classification. The experimental results on the VAIS dataset, the largest dataset of maritime ships, confirm that the proposed approach outperforms state-of-the-art models in terms of generalization performance and training speed. For instance, the proposed model is up to 950 times faster than traditional back-propagation-based training of convolutional neural networks, primarily for low-level feature extraction.
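The computational trick at the heart of the ELM, training only the output layer in closed form over fixed random hidden features, can be sketched as follows. This is a generic ELM classifier on toy data, not the paper's CNN-feature pipeline, and all sizes and parameter values are illustrative:

```python
import numpy as np

def elm_fit(X, y, n_hidden=64, seed=0):
    """Train an Extreme Learning Machine: random hidden layer, closed-form output.

    Only the output weights are learned, by least squares, which is what makes
    ELM training fast compared with back-propagation.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, fixed input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    T = np.eye(y.max() + 1)[y]                    # one-hot targets
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # output weights, pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy two-class problem: points inside vs. outside a circle.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)
W, b, beta = elm_fit(X[:300], y[:300])
acc = np.mean(elm_predict(X[300:], W, b, beta) == y[300:])
```

Because the only trainable step is one least-squares solve, training cost is essentially a single matrix factorization, which is the source of the speedups the paper reports.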

  7. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. There are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition should be read as meaning computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  8. Multivariate Cross-Classification: Applying machine learning techniques to characterize abstraction in neural representations

    Directory of Open Access Journals (Sweden)

    Jonas eKaplan

    2015-03-01

    Full Text Available Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC) and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application.
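The logic of cross-classification is easy to sketch: train on patterns from one cognitive context, test on patterns from another, and treat above-chance transfer as evidence of context-invariant information. Below is a minimal synthetic illustration with simulated "voxel" patterns and a nearest-centroid classifier; nothing here comes from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 100

# A shared category signal that abstracts across contexts.
cat_signal = rng.normal(size=(2, n_voxels))       # one pattern per category

def simulate(context_shift):
    """Trials = shared category pattern + context-specific shift + noise."""
    X, y = [], []
    for cat in (0, 1):
        trials = (cat_signal[cat] + context_shift
                  + rng.normal(scale=1.0, size=(n_trials, n_voxels)))
        X.append(trials)
        y += [cat] * n_trials
    return np.vstack(X), np.array(y)

X_a, y_a = simulate(rng.normal(scale=0.5, size=n_voxels))  # e.g. perception
X_b, y_b = simulate(rng.normal(scale=0.5, size=n_voxels))  # e.g. memory

# Train a nearest-centroid classifier on context A only...
centroids = np.array([X_a[y_a == c].mean(axis=0) for c in (0, 1)])
# ...then test it on context B: above-chance accuracy indicates that the
# category code generalizes (abstracts) across the two contexts.
pred = np.argmin(((X_b[:, None, :] - centroids) ** 2).sum(-1), axis=1)
cross_acc = np.mean(pred == y_b)
```

In real MVCC studies the classifier is typically cross-validated and compared against permutation-based chance levels; this sketch only shows the train-on-A, test-on-B structure.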

  9. Identification of chaotic systems by neural network with hybrid learning algorithm

    International Nuclear Information System (INIS)

    Pan, S.-T.; Lai, C.-C.

    2008-01-01

    Based on the genetic algorithm (GA) and the steepest descent method (SDM), this paper proposes a hybrid algorithm for training neural networks to identify chaotic systems. The systems in question are the logistic map and the Duffing equation. Different identification schemes are used to identify the logistic map and the Duffing equation, respectively. Simulation results show that our hybrid algorithm is more efficient than other methods.
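The hybrid scheme, a GA for a coarse global search over the weights followed by steepest descent for local refinement, can be sketched on the logistic-map identification task. The network size, GA settings, and learning rate below are illustrative guesses, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data from the chaotic logistic map x_{t+1} = 4 x_t (1 - x_t).
xs = [0.3]
for _ in range(200):
    xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
x_in, x_out = np.array(xs[:-1]), np.array(xs[1:])

H = 6  # hidden units of a 1-H-1 tanh network; all weights in one vector

def unpack(w):
    return w[:H], w[H:2 * H], w[2 * H:3 * H], w[3 * H]

def predict(w, x):
    w1, b1, w2, b2 = unpack(w)
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def mse(w):
    return np.mean((predict(w, x_in) - x_out) ** 2)

# Stage 1: genetic algorithm (truncation selection + Gaussian mutation).
pop = rng.normal(size=(40, 3 * H + 1))
for gen in range(60):
    fitness = np.array([mse(w) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]
    children = parents[rng.integers(0, 10, 30)] \
        + rng.normal(scale=0.2, size=(30, 3 * H + 1))
    pop = np.vstack([parents, children])       # elitism keeps the best 10
best = pop[np.argmin([mse(w) for w in pop])]

# Stage 2: steepest descent fine-tuning starting from the GA solution.
w, final_mse = best.copy(), mse(best)
for _ in range(3000):
    w1, b1, w2, b2 = unpack(w)
    h = np.tanh(np.outer(x_in, w1) + b1)
    err = h @ w2 + b2 - x_out
    grad = np.concatenate([
        ((err[:, None] * w2) * (1 - h ** 2) * x_in[:, None]).mean(0),
        ((err[:, None] * w2) * (1 - h ** 2)).mean(0),
        (err[:, None] * h).mean(0),
        [err.mean()],
    ])
    w -= 0.1 * grad
    final_mse = min(final_mse, mse(w))
```

The GA supplies a reasonable basin of attraction and the gradient stage polishes it, which is the division of labor the paper's hybrid exploits.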

  10. A Meta-Analysis Suggests Different Neural Correlates for Implicit and Explicit Learning.

    Science.gov (United States)

    Loonis, Roman F; Brincat, Scott L; Antzoulatos, Evan G; Miller, Earl K

    2017-10-11

    A meta-analysis of non-human primates performing three different tasks (Object-Match, Category-Match, and Category-Saccade associations) revealed signatures of explicit and implicit learning. Performance improved equally following correct and error trials in the Match (explicit) tasks, but it improved more after correct trials in the Saccade (implicit) task, a signature of explicit versus implicit learning. Likewise, error-related negativity, a marker for error processing, was greater in the Match (explicit) tasks. All tasks showed an increase in alpha/beta (10-30 Hz) synchrony after correct choices. However, only the implicit task showed an increase in theta (3-7 Hz) synchrony after correct choices that decreased with learning. In contrast, in the explicit tasks, alpha/beta synchrony increased with learning and decreased thereafter. Our results suggest that explicit versus implicit learning engages different neural mechanisms that rely on different patterns of oscillatory synchrony. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Promoting Collaboration in a Project-Based E-Learning Context

    Science.gov (United States)

    Papanikolaou, Kyparisia; Boubouka, Maria

    2011-01-01

    In this paper we investigate the value of collaboration scripts for promoting metacognitive knowledge in a project-based e-learning context. In an empirical study, 82 students worked individually and in groups on a project using the e-learning environment MyProject, in which the life cycle of a project is inherent. Students followed a particular…

  12. Age-related difference in the effective neural connectivity associated with probabilistic category learning

    International Nuclear Information System (INIS)

    Yoon, Eun Jin; Cho, Sang Soo; Kim, Hee Jung; Bang, Seong Ae; Park, Hyun Soo; Kim, Yu Kyeong; Kim, Sang Eun

    2007-01-01

    Although it is well known that explicit memory is affected by deleterious age-related changes in the brain, the effect of aging on implicit memory, such as probabilistic category learning (PCL), is not clear. To identify the effect of aging on the neural interactions underlying successful PCL, we investigated the neural substrates of PCL and the age-related changes in the neural network linking these brain regions. 23 young (age, 25±2 y; 11 males) and 14 elderly (67±3 y; 7 males) healthy subjects underwent FDG PET during a resting state and a 150-trial weather prediction (WP) task. Correlations between the WP hit rates and regional glucose metabolism were assessed using SPM2 (P<0.005). The effective connectivity models of the two groups differed significantly (χ²diff(37) = 142.47, P<0.005). Systematic comparisons of each path revealed that the frontal cross-callosal and the frontal-to-parahippocampal connections were most responsible for the model differences (P<0.05). For successful PCL, the elderly recruit the basal ganglia implicit memory system, but their MTL recruitment differs from that of the young. The inadequate MTL correlation pattern in the elderly may be caused by changes in the neural pathways related to explicit memory. These neural changes can explain the decreased PCL performance in elderly subjects

  13. Neural circuitry of abdominal pain-related fear learning and reinstatement in irritable bowel syndrome.

    Science.gov (United States)

    Icenhour, A; Langhorst, J; Benson, S; Schlamann, M; Hampel, S; Engler, H; Forsting, M; Elsenbruch, S

    2015-01-01

    Altered pain anticipation likely contributes to disturbed central pain processing in chronic pain conditions like irritable bowel syndrome (IBS), but the learning processes shaping the expectation of pain remain poorly understood. We assessed the neural circuitry mediating the formation, extinction, and reactivation of abdominal pain-related memories in IBS patients compared to healthy controls (HC) in a differential fear conditioning paradigm. During fear acquisition, predictive visual cues (CS(+)) were paired with rectal distensions (US), while control cues (CS(-)) were presented unpaired. During extinction, only CSs were presented. Subsequently, memory reactivation was assessed with a reinstatement procedure involving unexpected USs. Using functional magnetic resonance imaging, group differences in neural activation to CS(+) vs CS(-) were analyzed, along with skin conductance responses (SCR), CS valence, CS-US contingency, state anxiety, salivary cortisol, and alpha-amylase activity. The contribution of anxiety symptoms was addressed in covariance analyses. Fear acquisition was altered in IBS, as indicated by more accurate contingency awareness, greater CS-related valence change, and enhanced CS(+)-induced differential activation of prefrontal cortex and amygdala. IBS patients further revealed enhanced differential cingulate activation during extinction and greater differential hippocampal activation during reinstatement. Anxiety affected neural responses during memory formation and reinstatement. Abdominal pain-related fear learning and memory processes are altered in IBS, mediated by amygdala, cingulate cortex, prefrontal areas, and hippocampus. Enhanced reinstatement may contribute to hypervigilance and central pain amplification, especially in anxious patients. Preventing a 'relapse' of learned fear utilizing extinction-based interventions may be a promising treatment goal in IBS. © 2014 John Wiley & Sons Ltd.

  14. Neural Correlates of Success and Failure Signals During Neurofeedback Learning.

    Science.gov (United States)

    Radua, Joaquim; Stoica, Teodora; Scheinost, Dustin; Pittenger, Christopher; Hampson, Michelle

    2018-05-15

    Feedback-driven learning, observed across phylogeny and of clear adaptive value, is frequently operationalized in simple operant conditioning paradigms, but it can be much more complex, driven by abstract representations of success and failure. This study investigates the neural processes involved in processing success and failure during feedback learning, which are not well understood. Data analyzed were acquired during a multisession neurofeedback experiment in which ten participants were presented with, and instructed to modulate, the activity of their orbitofrontal cortex with the aim of decreasing their anxiety. We assessed the regional blood-oxygenation-level-dependent response to the individualized neurofeedback signals of success and failure across twelve functional runs acquired in two different magnetic resonance sessions in each of ten individuals. Neurofeedback signals of failure correlated early during learning with deactivation in the precuneus/posterior cingulate and neurofeedback signals of success correlated later during learning with deactivation in the medial prefrontal/anterior cingulate cortex. The intensity of the latter deactivations predicted the efficacy of the neurofeedback intervention in the reduction of anxiety. These findings indicate a role for regulation of the default mode network during feedback learning, and suggest a higher sensitivity to signals of failure during the early feedback learning and to signals of success subsequently. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Noise-driven manifestation of learning in mature neural networks

    International Nuclear Information System (INIS)

    Monterola, Christopher; Saloma, Caesar

    2002-01-01

    We show that the generalization capability of a mature thresholding neural network to process above-threshold disturbances in a noise-free environment is extended to subthreshold disturbances by ambient noise, without retraining. The ability to benefit from noise is intrinsic and does not have to be learned separately. The nonlinear dependence of sensitivity on noise strength is significantly narrower than in individual threshold systems. Noise has a minimal effect on network performance for above-threshold signals. We resolve two seemingly contradictory responses of trained networks to noise: their ability to benefit from its presence, and their robustness against noisy strong disturbances
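The effect being described, subthreshold signals becoming detectable only in the presence of ambient noise, is easiest to see in a single threshold unit. The following is a deliberately simplified illustration of that principle, not the paper's trained-network experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
threshold = 1.0
signal = 0.8 * np.ones(5000)    # subthreshold: alone, it never crosses

def detection_rate(noise_std):
    # The unit "fires" whenever signal plus ambient noise crosses threshold.
    noisy = signal + rng.normal(scale=noise_std, size=signal.size)
    return float(np.mean(noisy > threshold))

quiet = detection_rate(0.0)      # without noise the signal is invisible
assisted = detection_rate(0.3)   # moderate noise lifts it over threshold
```

Here `quiet` is exactly 0 while `assisted` is clearly positive; the paper's point is that a trained network inherits this benefit collectively, with a narrower dependence on noise strength than any single threshold element.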

  16. Promoting Critical Thinking through Service Learning: A Home-Visiting Case Study

    Science.gov (United States)

    Campbell, Cynthia G.; Oswald, Brianna R.

    2018-01-01

    As stated in APA Learning Outcomes 2 and 3, two central goals of higher education instruction are promoting students' critical thinking skills and connecting student learning to real-life applications. To meet these goals, a community-based service-learning experience was designed using task value, interpersonal accountability, cognitive…

  17. Biomimetic Hybrid Feedback Feedforward Neural-Network Learning Control.

    Science.gov (United States)

    Pan, Yongping; Yu, Haoyong

    2017-06-01

    This brief presents a biomimetic hybrid feedback feedforward neural-network learning control (NNLC) strategy inspired by the human motor learning control mechanism for a class of uncertain nonlinear systems. The control structure includes a proportional-derivative controller acting as a feedback servo machine and a radial-basis-function (RBF) NN acting as a feedforward predictive machine. Under the sufficient constraints on control parameters, the closed-loop system achieves semiglobal practical exponential stability, such that an accurate NN approximation is guaranteed in a local region along recurrent reference trajectories. Compared with the existing NNLC methods, the novelties of the proposed method include: 1) the implementation of an adaptive NN control to guarantee plant states being recurrent is not needed, since recurrent reference signals rather than plant states are utilized as NN inputs, which greatly simplifies the analysis and synthesis of the NNLC and 2) the domain of NN approximation can be determined a priori by the given reference signals, which leads to an easy construction of the RBF-NNs. Simulation results have verified the effectiveness of this approach.
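The control structure, PD feedback plus an adaptive RBF feedforward driven by the reference signals rather than the plant states, can be sketched for a double-integrator plant tracking a sinusoid. The gains, adaptation law, and RBF grid below are illustrative assumptions, not the paper's design:

```python
import numpy as np

dt, T = 0.01, 60.0
steps = int(T / dt)
kp, kd, gamma = 16.0, 8.0, 10.0

# RBF features over the reference state (r, rdot): a 5x5 grid of Gaussians.
cx, cy = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
centers = np.stack([cx.ravel(), cy.ravel()], axis=1)

def phi(z):
    return np.exp(-np.sum((z - centers) ** 2, axis=1) / 0.5)

x1 = x2 = 0.0                 # double-integrator plant: xddot = u
w = np.zeros(len(centers))    # feedforward NN weights, learned online
errs = []
for k in range(steps):
    t = k * dt
    r, rdot = np.sin(t), np.cos(t)
    e, edot = r - x1, rdot - x2
    s = edot + (kp / kd) * e               # filtered tracking error
    f = phi(np.array([r, rdot]))           # NN inputs: reference, not plant
    u = kp * e + kd * edot + w @ f         # PD feedback + NN feedforward
    w += gamma * f * s * dt                # adaptation driven by the error
    x2 += u * dt                           # Euler integration of the plant
    x1 += x2 * dt
    errs.append(abs(e))

early = float(np.mean(errs[: steps // 6]))   # first 10 s of tracking
late = float(np.mean(errs[-steps // 6:]))    # last 10 s, after learning
```

Because the recurrent reference trajectory determines the NN inputs, the region the RBF must approximate is known a priori, which is the paper's second stated advantage; as the feedforward term is learned, the residual error carried by the PD loop shrinks, so `late` ends up below `early`.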

  18. Morphine Reward Promotes Cue-Sensitive Learning: Implication of Dorsal Striatal CREB Activity

    Directory of Open Access Journals (Sweden)

    Mathieu Baudonnat

    2017-05-01

    Full Text Available Different parallel neural circuits interact and may even compete to process and store information: whereas stimulus–response (S–R) learning critically depends on the dorsal striatum (DS), spatial memory relies on the hippocampus (HPC). Strikingly, despite its potential importance for our understanding of addictive behaviors, the impact of drug rewards on memory-system dynamics has not been extensively studied. Here, we assessed the long-term effects of drug vs. food reinforcement on the subsequent use of S–R vs. spatial learning strategies and their neural substrates. Mice were trained in a Y-maze cue-guided task, during which either food or morphine injections into the ventral tegmental area (VTA) were used as rewards. Although drug- and food-reinforced mice learned the Y-maze task equally well, drug-reinforced mice exhibited a preferential use of an S–R learning strategy when tested in a water-maze competition task designed to dissociate cue-based and spatial learning. This cognitive bias was associated with a persistent increase in phosphorylated cAMP response element-binding protein (pCREB) within the DS, and a decrease of pCREB expression in the HPC. Pharmacological inhibition of the striatal PKA pathway in drug-rewarded mice limited the morphine-induced increase in pCREB levels in the DS and restored a balanced use of spatial vs. cue-based learning. Our findings suggest that drug (opiate) reward biases the engagement of separate memory systems toward a predominant use of the cue-dependent system via an increase in learning-related striatal pCREB activity. A persistent functional imbalance between striatal and hippocampal activity could contribute to the persistence of addictive behaviors, or counteract the efficiency of pharmacological or psychotherapeutic treatments.

  19. Hyperexpressed netrin-1 promoted neural stem cell migration in mice after focal cerebral ischemia

    OpenAIRE

    Haiyan Lu; Xiaoyan Song; Feng Wang; Guodong Wang; Yuncheng Wu; Qiaoshu Wang; Yongting Wang; Guoyuan Yang; Zhijun Zhang

    2016-01-01

    Endogenous netrin-1 (NT-1) protein is significantly increased after cerebral ischemia and may participate in repair after transient cerebral ischemic injury. In this work, we explored whether NT-1 can be stably overexpressed by an adeno-associated virus (AAV) and whether the exogenous NT-1 can promote neural stem cell migration from the subventricular zone (SVZ) region after cerebral ischemia. Adult CD-1 mice were injected stereotactically with an AAV carrying the NT-1 gene (AAV-NT-1). Mice underwent ...

  20. Dissecting neural pathways for forgetting in Drosophila olfactory aversive memory.

    Science.gov (United States)

    Shuai, Yichun; Hirokawa, Areekul; Ai, Yulian; Zhang, Min; Li, Wanhe; Zhong, Yi

    2015-12-01

    Recent studies have identified molecular pathways driving forgetting and supported the notion that forgetting is a biologically active process. The circuit mechanisms of forgetting, however, remain largely unknown. Here we report two sets of Drosophila neurons that account for the rapid forgetting of early olfactory aversive memory. We show that inactivating these neurons inhibits memory decay without altering learning, whereas activating them promotes forgetting. These neurons, including a cluster of dopaminergic neurons (PAM-β'1) and a pair of glutamatergic neurons (MBON-γ4>γ1γ2), terminate in distinct subdomains in the mushroom body and represent parallel neural pathways for regulating forgetting. Interestingly, although activity of these neurons is required for memory decay over time, they are not required for acute forgetting during reversal learning. Our results thus not only establish the presence of multiple neural pathways for forgetting in Drosophila but also suggest the existence of diverse circuit mechanisms of forgetting in different contexts.

  1. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction.

    Science.gov (United States)

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-04-10

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated on two real-world transportation networks, the second ring road and the north-east transportation network in Beijing, by comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long short-term memory network. The results show that the proposed method outperforms the other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
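The first step of the method, turning spatiotemporal speeds into an image via a two-dimensional time-space matrix, is simple to sketch. The speed data, congestion pattern, and matrix sizes below are synthetic, invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_times, n_segments = 96, 30   # e.g. 15-min intervals over a day x segments

# Hypothetical speed readings (km/h) per time step and road segment,
# with a slow-down band mimicking a congestion wave moving along the road.
speeds = 80.0 + rng.normal(scale=5.0, size=(n_times, n_segments))
for t in range(30, 50):
    seg = min(n_segments - 1, t - 30)
    speeds[t, max(0, seg - 3): seg + 3] -= 50.0

# Time-space matrix as a grayscale image: rows = time, columns = space,
# pixel intensity = speed normalized to [0, 1]. This image is the CNN input.
image = (speeds - speeds.min()) / (speeds.max() - speeds.min())
```

Once traffic is in this form, congestion waves appear as oriented dark bands, exactly the kind of local spatial structure that convolutional filters are built to extract.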

  2. A new backpropagation learning algorithm for layered neural networks with nondifferentiable units.

    Science.gov (United States)

    Oohori, Takahumi; Naganuma, Hidenori; Watanabe, Kazuhisa

    2007-05-01

    We propose a digital version of the backpropagation algorithm (DBP) for three-layered neural networks with nondifferentiable binary units. This approach feeds teacher signals to both the middle and output layers, whereas with a simple perceptron, they are given only to the output layer. The additional teacher signals enable the DBP to update the coupling weights not only between the middle and output layers but also between the input and middle layers. A neural network based on DBP learning is fast and easy to implement in hardware. Simulation results for several linearly nonseparable problems such as XOR demonstrate that the DBP performs favorably when compared to the conventional approaches. Furthermore, in large-scale networks, simulation results indicate that the DBP provides high performance.

  3. Managing tertiary institutions for the promotion of lifelong learning

    African Journals Online (AJOL)

    Global Journal

    KEYWORDS: Managing, tertiary institutions, promotion, lifelong learning. INTRODUCTION ... science, medicine and technology towards the ... different environments, whether formal, informal ... schools considering that each day gives birth to.

  4. Introduction to neural networks

    International Nuclear Information System (INIS)

    Pavlopoulos, P.

    1996-01-01

    This lecture is a presentation of today's research in neural computation. Neural computation is inspired by knowledge from neuroscience. It draws its methods in large degree from statistical physics, and its potential applications lie mainly in computer science and engineering. Neural network models are algorithms for cognitive tasks, such as learning and optimization, which are based on concepts derived from research into the nature of the brain. The lecture first gives a historical presentation of the development of neural networks and of the interest in applying them to complex tasks. Then, an exhaustive overview of data management and network computation methods is given: supervised learning and the associative memory problem, the capacity of networks, Perceptron networks, functional link networks, Madaline (Multiple Adalines) networks, back-propagation networks, reduced Coulomb energy (RCE) networks, unsupervised learning, and competitive learning and vector quantization. An example of application in high-energy physics is given with the trigger systems and track recognition system (track parametrization, event selection, and particle identification) developed for the CPLEAR experiment detectors at LEAR, CERN. (J.S.). 56 refs., 20 figs., 1 tab., 1 appendix

  5. Promoting autonomous learning in English through the implementation of Content and Language Integrated Learning (CLIL) in science and maths subjects

    Directory of Open Access Journals (Sweden)

    Andriani Putu Fika

    2018-01-01

    Full Text Available Autonomous learning is a concept in which the learner has the ability to take charge of their own learning. It has become a notable aspect that should be perceived by students. The aim of this research is to find out the strategies used by grade-two teachers in Bali Kiddy Primary School to promote autonomous learning in English through the implementation of Content and Language Integrated Learning (CLIL) in science and maths subjects. This study was designed as a descriptive qualitative study. The data were collected through observation, interview, and document study. The results show several strategies for promoting autonomous learning in English through the implementation of CLIL in science and maths subjects. Those strategies are table of content training, questioning and presenting, journal writing, choosing activities, and using online activities. Those strategies can be adopted, or even adapted, as ways to promote autonomous learning in the English subject.

  6. Application of different entropy formalisms in a neural network for novel word learning

    Science.gov (United States)

    Khordad, R.; Rastegar Sedehi, H. R.

    2015-12-01

    In this paper, novel word learning in adults is studied. For this goal, four entropy formalisms are employed to include some degree of non-locality in a neural network. The entropy formalisms are the Tsallis, Landsberg-Vedral, Kaniadakis, and Abe entropies. First, we analytically obtain non-extensive cost functions for all the entropies. Then, we use a generalization of the gradient descent dynamics as a learning rule in a simple perceptron. The Langevin equations are numerically solved and the error function (learning curve) is obtained versus time for different values of the parameters. The influence of the index q and the number of neurons N on learning is investigated for all the entropies. It is found that learning is a decreasing function of time for all the entropies. The rate of learning for the Landsberg-Vedral entropy is slower than for the other entropies, and its variation with time is not appreciable when the number of neurons increases. This suggests that the entropy formalism can be used as a means of studying learning.

  7. PROMOTING AUTONOMOUS LEARNING IN READING CLASS

    Directory of Open Access Journals (Sweden)

    Agus Sholeh

    2015-11-01

    Full Text Available To achieve good acquisition of and awareness in reading, learners need a long and continuous process, and they are therefore required to have autonomy in learning to read. This study aims to promote learner autonomy in reading class by combining learner-centered reading teaching and extensive reading teaching. Learner-centered reading teaching was carried out through group discussion, presentation, and language awareness activities. Meanwhile, extensive reading teaching was done to review the learners' materials in presentation and reinforce their acquisition. Those two different approaches were applied due to differences in learners' characteristics and needs. The result showed some success in the practice of autonomy, indicated by changes in learners' attitudes. However, many learners showed that they focused more on obtaining scores than on developing their language acquisition. By implementing the approach, the teacher can assist learners to become aware of their ability to learn independently and equip them with the skills needed for lifelong learning.

  8. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Alireza Alemi

    2015-08-01

    Full Text Available Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the
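The rule as described, plasticity only when the local field lies between the outer thresholds, with its sign set by the intermediate one, can be sketched as follows. For simplicity the non-plastic inhibition is folded into signed weights, and the network size, thresholds, and learning rate are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10                    # binary neurons, patterns to memorize
A = 8.0                           # strong afferent current imposing a pattern
th_low, th_mid, th_high = -10.0, 0.0, 10.0
eta = 0.02

W = np.zeros((N, N))
patterns = rng.integers(0, 2, size=(P, N)).astype(float)

for epoch in range(30):
    for xi in patterns:
        ext = np.where(xi == 1, A, -A)   # bimodal afferent input distribution
        h = W @ xi + ext                 # local field of each neuron
        plastic = (h > th_low) & (h < th_high)   # outside: no plasticity
        direction = np.where(h > th_mid, eta, -eta)
        # only synapses from active presynaptic inputs are modified
        W += np.outer(plastic * direction, xi)
        np.fill_diagonal(W, 0.0)

# Retrieval: corrupt a stored pattern, then let the network settle.
xi = patterns[0]
cue = xi.copy()
flip = rng.choice(N, size=20, replace=False)
cue[flip] = 1.0 - cue[flip]
x = cue
for _ in range(10):
    x = (W @ x > 0).astype(float)
overlap = float(np.mean(x == xi))
```

The outer thresholds make the rule self-stabilizing: once a clamped neuron's field is strong enough (or weak enough), its synapses stop changing, which is how the rule builds margins without an explicit supervisory error signal.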

  9. A Neural Network Model to Learn Multiple Tasks under Dynamic Environments

    Science.gov (United States)

    Tsumori, Kenji; Ozawa, Seiichi

    When environments change dynamically for agents, the knowledge acquired in one environment might be useless in the future. In such dynamic environments, agents should be able not only to acquire new knowledge but also to modify old knowledge in learning. However, modifying all previously acquired knowledge is not efficient, because knowledge once acquired may be useful again when a similar environment reappears, and some knowledge can be shared among different environments. To learn efficiently in such environments, we propose a neural network model that consists of the following modules: a resource allocating network, long-term and short-term memory, and an environment change detector. We evaluate the model under a class of dynamic environments where multiple function approximation tasks are sequentially given. The experimental results demonstrate that the proposed model possesses stable incremental learning, accurate environmental change detection, proper association and recall of old knowledge, and efficient knowledge transfer.

  10. A Tsallis’ statistics based neural network model for novel word learning

    Science.gov (United States)

    Hadzibeganovic, Tarik; Cannas, Sergio A.

    2009-03-01

    We invoke the Tsallis entropy formalism, a nonextensive entropy measure, to include some degree of non-locality in a neural network that is used for simulation of novel word learning in adults. A generalization of the gradient descent dynamics, realized via nonextensive cost functions, is used as a learning rule in a simple perceptron. The model is first investigated for general properties, and then tested against the empirical data, gathered from simple memorization experiments involving two populations of linguistically different subjects. Numerical solutions of the model equations corresponded to the measured performance states of human learners. In particular, we found that the memorization tasks were executed with rather small but population-specific amounts of nonextensivity, quantified by the entropic index q. Our findings raise the possibility of using entropic nonextensivity as a means of characterizing the degree of complexity of learning in both natural and artificial systems.
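
The abstract does not give the exact cost function, so the following sketch only illustrates the general idea of a q-deformed (nonextensive) learning rule: a simple perceptron trained by gradient descent on a q-generalized cross-entropy in which the Tsallis q-logarithm replaces the ordinary logarithm. The data, targets, and the value of q are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, lr, q = 20, 0.1, 1.05      # q slightly above 1: a small degree of nonextensivity

# hypothetical binary word-feature inputs and memorization targets
X = rng.choice([0.0, 1.0], size=(50, n_in))
y = (X.sum(axis=1) > n_in / 2).astype(float)

w = rng.normal(0.0, 0.1, n_in)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # perceptron (logistic) output
    # q-generalized cross-entropy  C_q = -sum[y ln_q(p) + (1-y) ln_q(1-p)],
    # where ln_q(x) = (x**(1-q) - 1) / (1-q), so d ln_q(x)/dx = x**(-q)
    grad_p = -(y * p**(-q) - (1.0 - y) * (1.0 - p)**(-q))
    delta = grad_p * p * (1.0 - p)                   # chain rule through the sigmoid
    w -= lr * X.T @ delta / len(y)
    b -= lr * delta.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = float(((p > 0.5) == (y > 0.5)).mean())
```

For q -> 1 the q-logarithm reduces to the ordinary logarithm and the rule reduces to standard gradient descent on cross-entropy, which is the sense in which q quantifies the departure from the extensive case.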

  11. Superior Generalization Capability of Hardware-Learning Algorithm Developed for Self-Learning Neuron-MOS Neural Networks

    Science.gov (United States)

    Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro

    1995-02-01

    We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.

  12. Promoter2.0: for the recognition of PolII promoter sequences

    DEFF Research Database (Denmark)

    Knudsen, Steen

    1999-01-01

    Motivation: A new approach to the prediction of eukaryotic PolII promoters from DNA sequence takes advantage of a combination of elements similar to neural networks and genetic algorithms to recognize a set of discrete subpatterns with variable separation as one pattern: a promoter. The neural...... of optimization, the algorithm was able to discriminate between vertebrate promoter and non-promoter sequences in a test set with a correlation coefficient of 0.63. In addition, all five known transcription start sites on the plus strand of the complete adenovirus genome were within 161 bp of 35 predicted......

  13. Sensorimotor Learning: Neurocognitive Mechanisms and Individual Differences.

    Science.gov (United States)

    Seidler, R D; Carson, R G

    2017-07-13

    Here we provide an overview of findings and viewpoints on the mechanisms of sensorimotor learning presented at the 2016 Biomechanics and Neural Control of Movement (BANCOM) conference in Deer Creek, OH. This field has shown substantial growth in the past couple of decades. For example, it is now well accepted that neural systems outside of primary motor pathways play a role in learning. Frontoparietal and anterior cingulate networks contribute to sensorimotor adaptation, reflecting strategic aspects of exploration and learning. Longer term training results in functional and morphological changes in primary motor and somatosensory cortices. Interestingly, re-engagement of strategic processes once a skill has become well learned may disrupt performance. Efforts to predict individual differences in learning rate have enhanced our understanding of the neural, behavioral, and genetic factors underlying skilled human performance. Access to genomic analyses has dramatically increased over the past several years. This has enhanced our understanding of cellular processes underlying the expression of human behavior, including involvement of various neurotransmitters, receptors, and enzymes. Surprisingly, our field has been slow to adopt such approaches in studying neural control, although this work does require much larger sample sizes than are typically used to investigate skill learning. We advocate that individual differences approaches can lead to new insights into human sensorimotor performance. Moreover, a greater understanding of the factors underlying the wide range of performance capabilities seen across individuals can promote personalized medicine and refinement of rehabilitation strategies, which stand to be more effective than "one size fits all" treatments.

  14. Plasticity-related genes in brain development and amygdala-dependent learning.

    Science.gov (United States)

    Ehrlich, D E; Josselyn, S A

    2016-01-01

    Learning about motivationally important stimuli involves plasticity in the amygdala, a temporal lobe structure. Amygdala-dependent learning involves a growing number of plasticity-related signaling pathways also implicated in brain development, suggesting that learning-related signaling in juveniles may simultaneously influence development. Here, we review the pleiotropic functions in nervous system development and amygdala-dependent learning of a signaling pathway that includes brain-derived neurotrophic factor (BDNF), extracellular signaling-related kinases (ERKs) and cyclic AMP-response element binding protein (CREB). Using these canonical, plasticity-related genes as an example, we discuss the intersection of learning-related and developmental plasticity in the immature amygdala, when aversive and appetitive learning may influence the developmental trajectory of amygdala function. We propose that learning-dependent activation of BDNF, ERK and CREB signaling in the immature amygdala exaggerates and accelerates neural development, promoting amygdala excitability and environmental sensitivity later in life. © 2015 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

  15. Statistical learning problem of artificial neural network to control roofing process

    Directory of Open Access Journals (Sweden)

    Lapidus Azariy

    2017-01-01

    Full Text Available Software developed on the basis of artificial neural networks (ANNs) is now being actively implemented in construction companies to support decision-making in the organization and management of construction processes. ANN learning is the main stage of its development. A key question for supervised learning is how many training examples are needed to approximate the true relationship between network inputs and output with the desired accuracy. The design of an ANN architecture is also related to the learning problem known as the "curse of dimensionality". This problem is important for the study of construction process management because of the difficulty of obtaining training data from construction sites. In previous studies the authors designed a 4-layer feedforward ANN with a unit model of 12-5-4-1 to estimate and predict the roofing process. This paper presents the statistical learning aspects of the created ANN with a simple error-minimization algorithm. The sample size required for efficient training and the confidence interval of the network outputs are defined. In conclusion, the authors predict that successful ANN learning in a large construction business company can be achieved within a short space of time.
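
A minimal sketch of the 12-5-4-1 feedforward network with simple-error (MSE) minimization by backpropagation. The training data here is synthetic; the real inputs would be roofing-process factors measured on construction sites.

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = [12, 5, 4, 1]                     # the 12-5-4-1 unit model described above

# synthetic stand-in data; real inputs would be normalized roofing-process factors
X = rng.random((30, 12))
y = X.mean(axis=1, keepdims=True)

Ws = [rng.normal(0.0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x):
    acts = [x]
    for W, b in zip(Ws, bs):
        acts.append(sig(acts[-1] @ W + b))
    return acts

lr = 0.5
for _ in range(2000):                     # simple error (MSE) minimization by backprop
    acts = forward(X)
    delta = (acts[-1] - y) * acts[-1] * (1.0 - acts[-1])
    grads = []
    for i in range(len(Ws) - 1, -1, -1):  # accumulate gradients layer by layer
        grads.append((acts[i].T @ delta / len(X), delta.mean(axis=0)))
        if i > 0:
            delta = (delta @ Ws[i].T) * acts[i] * (1.0 - acts[i])
    for i, (gW, gb) in zip(range(len(Ws) - 1, -1, -1), grads):
        Ws[i] -= lr * gW
        bs[i] -= lr * gb

mse = float(np.mean((forward(X)[-1] - y) ** 2))
```

The sample-size question raised in the abstract shows up directly here: with only 30 examples and 12 inputs, the confidence interval on the network's outputs is wide, which is exactly why the paper analyzes how many training examples such a network needs.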

  16. Roles of neural stem cells in the repair of peripheral nerve injury.

    Science.gov (United States)

    Wang, Chong; Lu, Chang-Feng; Peng, Jiang; Hu, Cheng-Dong; Wang, Yu

    2017-12-01

    Currently, researchers are using neural stem cell transplantation to promote regeneration after peripheral nerve injury, as neural stem cells play an important role in peripheral nerve injury repair. This article reviews recent research progress of the role of neural stem cells in the repair of peripheral nerve injury. Neural stem cells can not only differentiate into neurons, astrocytes and oligodendrocytes, but can also differentiate into Schwann-like cells, which promote neurite outgrowth around the injury. Transplanted neural stem cells can differentiate into motor neurons that innervate muscles and promote the recovery of neurological function. To promote the repair of peripheral nerve injury, neural stem cells secrete various neurotrophic factors, including brain-derived neurotrophic factor, fibroblast growth factor, nerve growth factor, insulin-like growth factor and hepatocyte growth factor. In addition, neural stem cells also promote regeneration of the axonal myelin sheath, angiogenesis, and immune regulation. It can be concluded that neural stem cells promote the repair of peripheral nerve injury through a variety of ways.

  17. Habituation in non-neural organisms: evidence from slime moulds

    OpenAIRE

    Boisseau, Romain P.; Vogel, David; Dussutour, Audrey

    2016-01-01

    Learning, defined as a change in behaviour evoked by experience, has hitherto been investigated almost exclusively in multicellular neural organisms. Evidence for learning in non-neural multicellular organisms is scant, and only a few unequivocal reports of learning have been described in single-celled organisms. Here we demonstrate habituation, an unmistakable form of learning, in the non-neural organism Physarum polycephalum. In our experiment, using chemotaxis as the behavioural output and...

  18. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Directory of Open Access Journals (Sweden)

    Yoonsik Shim

    2016-10-01

    Full Text Available We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP. The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  19. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Science.gov (United States)

    Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil

    2016-10-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  20. Proceedings of the workshop cum symposium on applications of neural networks in nuclear science and industry

    International Nuclear Information System (INIS)

    1993-01-01

    The Workshop cum Symposium on Applications of Neural Networks in Nuclear Science and Industry was held at Bombay during November 24-26, 1993. The past decade has seen many important advances in the design and technology of artificial neural networks in research and industry. Neural networks form an interdisciplinary field covering a broad spectrum of applications in surveillance, diagnosis of nuclear power plants, nuclear spectroscopy, speech and written text recognition, robotic control, signal processing etc. The objective of the symposium was to promote awareness of advances in neural network research and applications. It was also aimed at reviewing the present status and giving direction for future technological developments. Contributed papers have been organized into the following groups: a) neural network architectures, learning algorithms and modelling, b) computer vision and image processing, c) signal processing, d) neural networks and fuzzy systems, e) nuclear applications and f) neural networks and allied applications. Papers relevant to INIS are indexed separately. (M.K.V.)

  1. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and its structure and learning algorithm are designed. The multilayer feedforward neural network, diagonal recurrent neural network, and chaotic diagonal recurrent neural network are used to approach the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than the other two neural networks. (interdisciplinary physics and related areas of science and technology)

  2. An intelligent sales forecasting system through integration of artificial neural networks and fuzzy neural networks with fuzzy weight elimination.

    Science.gov (United States)

    Kuo, R J; Wu, P; Wang, C P

    2002-09-01

    Sales forecasting plays a very prominent role in business strategy. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average (ARMA) models. However, sales forecasting is very complicated owing to the influence of internal and external environments. Recently, artificial neural networks (ANNs) have also been applied in sales forecasting owing to their promising performance in the areas of control and pattern recognition. However, further improvement is still necessary, since unique circumstances, e.g. promotions, cause sudden changes in the sales pattern. Thus, this study utilizes a proposed fuzzy neural network (FNN), which is able to eliminate unimportant weights, to learn fuzzy IF-THEN rules obtained from marketing experts with respect to promotion. The result from the FNN is further integrated with the time series data through an ANN. Both the simulated and real-world results show that the FNN with weight elimination can achieve lower training error than the regular FNN. Moreover, the real-world results also indicate that the proposed estimation system outperforms the conventional statistical method and a single ANN in accuracy.

  3. Where's the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network.

    Directory of Open Access Journals (Sweden)

    Christoph Hartmann

    2015-12-01

    Full Text Available Even in the absence of sensory stimulation the brain is spontaneously active. This background "noise" seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN, which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural
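
A heavily simplified, schematic sketch of a SORN-style network: binary excitatory units whose recurrent weights follow a binary STDP rule with synaptic normalization, plus intrinsic plasticity that drives each unit toward a target firing rate. The full SORN also includes an explicit inhibitory population and structured sensory input; the sizes, rates, and learning constants below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N_E, target_rate = 100, 0.1          # excitatory units; homeostatic target firing rate
eta_stdp, eta_ip = 0.001, 0.01       # STDP and intrinsic-plasticity learning rates

# sparse random E->E weights, no self-connections, incoming weights normalized
W = rng.random((N_E, N_E)) * (rng.random((N_E, N_E)) < 0.1)
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True) + 1e-12
T = rng.random(N_E) * 0.5            # per-unit thresholds

x = (rng.random(N_E) < target_rate).astype(float)
rates = []
for _ in range(2000):
    drive = W @ x - x.mean()         # crude global inhibition stands in for the I population
    x_new = (drive > T).astype(float)
    # binary STDP on existing synapses: pre-then-post potentiates, post-then-pre depresses
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new)) * (W > 0)
    W = np.clip(W, 0.0, None)
    W /= W.sum(axis=1, keepdims=True) + 1e-12    # synaptic normalization (homeostatic)
    T += eta_ip * (x_new - target_rate)          # intrinsic plasticity (rate homeostasis)
    x = x_new
    rates.append(x.mean())

mean_rate = float(np.mean(rates[-500:]))
```

Although every update here is deterministic, the activity looks irregular and variable, which is the point the paper makes: "noise"-like variability can emerge from the interaction of STDP and homeostasis in a deterministic network.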

  4. Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks

    KAUST Repository

    Umarov, Ramzan; Solovyev, Victor

    2017-01-01

    Accurate computational identification of promoters remains a challenge as these key DNA regulatory regions have variable structures composed of functional motifs that provide gene-specific initiation of transcription. In this paper we utilize

  5. An H(∞) control approach to robust learning of feedforward neural networks.

    Science.gov (United States)

    Jing, Xingjian

    2011-09-01

    A novel H(∞) robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially for the case that the output data is fast changing with respect to the input or the output data is corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing the linear matrix inequality (LMI) optimization techniques to achieve a predefined H(∞) "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H(∞)-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Toward the Development of an Artificial Brain on a Micropatterned and Material-Regulated Biochip by Guiding and Promoting the Differentiation and Neurite Outgrowth of Neural Stem/Progenitor Cells.

    Science.gov (United States)

    Liu, Yung-Chiang; Lee, I-Chi; Lei, Kin Fong

    2018-02-14

    An in vitro model mimicking the in vivo environment of the brain must be developed to study neural communication and regeneration and to obtain an understanding of cellular and molecular responses. In this work, a multilayered neural network was successfully constructed on a biochip by guiding and promoting neural stem/progenitor cell differentiation and network formation. The biochip consisted of 3 × 3 arrays of cultured wells connected with channels. Neurospheroids were cultured on polyelectrolyte multilayer (PEM) films in the culture wells. Neurite outgrowth and neural differentiation were guided and promoted by the micropatterns and the PEM films. After 5 days in culture, a 3 × 3 neural network was constructed on the biochip. The function and the connections of the network were evaluated by immunocytochemistry and impedance measurements. Neurons were generated and produced functional and recyclable synaptic vesicles. Moreover, the electrical connections of the neural network were confirmed by measuring the impedance across the neurospheroids. The current work facilitates the development of an artificial brain on a chip for investigations of electrical stimulations and recordings of multilayered neural communication and regeneration.

  7. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  8. The Role of Higher Education in Promoting Lifelong Learning. UIL Publication Series on Lifelong Learning Policies and Strategies: No. 3

    Science.gov (United States)

    Yang, Jin, Ed.; Schneller, Chripa, Ed.; Roche, Stephen, Ed.

    2015-01-01

    There is no doubt that universities have a vital role to play in promoting lifelong learning. This publication presents possible ways of expanding and transforming higher education to facilitate lifelong learning in different socio-economic contexts. Nine articles address the various dimensions of the role of higher education in promoting lifelong…

  9. Creating the learning situation to promote student deep learning: Data analysis and application case

    Science.gov (United States)

    Guo, Yuanyuan; Wu, Shaoyan

    2017-05-01

    How to lead students to deeper learning and cultivate innovative engineering talent needs to be studied in higher engineering education. In this study, through survey data analysis and theoretical research, we discuss the correlation among teaching methods, learning motivation, and learning methods. We find that students adopt different motivation orientations according to their perception of teaching methods in the process of engineering education, and this affects their choice of learning methods. As a result, creating learning situations is critical to leading students to deeper learning. Finally, we analyze the process of creating learning situations in the teaching of "bidding and contract management workshops", in which teachers use student-centered teaching to lead students to deeper study. By studying the factors that influence the deep learning process and building teaching situations that promote deep learning, this paper provides a meaningful reference for enhancing students' learning quality, teachers' teaching quality, and the quality of innovative talent.

  10. A Closer Look at Deep Learning Neural Networks with Low-level Spectral Periodicity Features

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Kereliuk, Corey; Pikrakis, Aggelos

    2014-01-01

    Systems built using deep learning neural networks trained on low-level spectral periodicity features (DeSPerF) reproduced the most “ground truth” of the systems submitted to the MIREX 2013 task, “Audio Latin Genre Classification.” To answer why this was the case, we take a closer look...

  11. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  12. Using teacher action research to promote constructivist learning ...

    African Journals Online (AJOL)

    Erna Kinsey

    2. To describe the learning environment of typical classrooms in South African ... a more teacher-centred approach to more constructivist teaching approaches and ... control over their lives within a framework promoted through action research ... cycles of questioning, planning, implementing, collecting data and reflecting ...

  13. Spatially Compact Neural Clusters in the Dorsal Striatum Encode Locomotion Relevant Information.

    Science.gov (United States)

    Barbera, Giovanni; Liang, Bo; Zhang, Lifeng; Gerfen, Charles R; Culurciello, Eugenio; Chen, Rong; Li, Yun; Lin, Da-Ting

    2016-10-05

    An influential striatal model postulates that neural activities in the striatal direct and indirect pathways promote and inhibit movement, respectively. Normal behavior requires coordinated activity in the direct pathway to facilitate intended locomotion and indirect pathway to inhibit unwanted locomotion. In this striatal model, neuronal population activity is assumed to encode locomotion relevant information. Here, we propose a novel encoding mechanism for the dorsal striatum. We identified spatially compact neural clusters in both the direct and indirect pathways. Detailed characterization revealed similar cluster organization between the direct and indirect pathways, and cluster activities from both pathways were correlated with mouse locomotion velocities. Using machine-learning algorithms, cluster activities could be used to decode locomotion relevant behavioral states and locomotion velocity. We propose that neural clusters in the dorsal striatum encode locomotion relevant information and that coordinated activities of direct and indirect pathway neural clusters are required for normal striatal controlled behavior. VIDEO ABSTRACT. Published by Elsevier Inc.
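
As an illustration of the decoding step, a linear least-squares decoder can recover a (here synthetic) locomotion velocity trace from cluster activities. The cluster count, tuning model, and noise level are invented for this sketch; the study itself applied machine-learning decoders to in vivo imaging data.

```python
import numpy as np

rng = np.random.default_rng(4)
n_clusters, n_bins = 12, 400                  # hypothetical cluster count and time bins

# synthetic locomotion velocity and velocity-tuned cluster activity
velocity = np.abs(np.cumsum(rng.normal(0.0, 0.5, n_bins)))
tuning = rng.random(n_clusters)               # assumed linear velocity tuning per cluster
activity = np.outer(velocity, tuning) + rng.normal(0.0, 0.3, (n_bins, n_clusters))

# fit a linear decoder (least squares with an intercept column) on the first 300 bins
A = np.hstack([activity, np.ones((n_bins, 1))])
train, test = slice(0, 300), slice(300, None)
w, *_ = np.linalg.lstsq(A[train], velocity[train], rcond=None)

# evaluate on held-out bins
pred = A[test] @ w
r = float(np.corrcoef(pred, velocity[test])[0, 1])
```

If cluster activities carry locomotion-relevant information, as the abstract argues, even this simple decoder yields a high correlation between predicted and actual velocity on held-out time bins.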

  14. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection.

    Science.gov (United States)

    Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho

    2017-03-01

    Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a big dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
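
The core ELM step, solving the output weights in a single pseudoinverse computation instead of iterative backpropagation, can be shown on toy data standing in for CNN feature maps. The sizes and the random-label task are illustrative only: fitting random labels simply demonstrates the one-shot solve and the capacity of a wide random hidden layer, not the lane-detection pipeline itself.

```python
import numpy as np

rng = np.random.default_rng(5)

# toy stand-in for features extracted by convolutional layers
X = rng.normal(size=(200, 64))        # 200 samples, 64 features
y = rng.integers(0, 3, 200)           # 3 classes
Y = np.eye(3)[y]                      # one-hot targets

# ELM: random, never-trained input-to-hidden weights ...
n_hidden = 500
W_in = rng.normal(size=(64, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)             # hidden-layer activations

# ... and output weights solved in one step via the Moore-Penrose pseudoinverse
beta = np.linalg.pinv(H) @ Y

train_acc = float((np.argmax(H @ beta, axis=1) == y).mean())
```

This single-iteration solve is what gives the ELM its speed advantage over gradient-based training, which is what the proposed stacked-ELM CNN exploits.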

  15. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    Science.gov (United States)

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-05-01

    Accurate cell grading of cancerous tissue pathological image is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in preprocessing stage, each grayscale image patch with the fixed size is obtained using center-proliferation segmentation (CPS) method and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which considers multi-scale contextual information of deep layer maps sufficiently. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sample method, is utilized to train MFC-CNN-ELM architecture. The experiment comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. The structure of observed learning outcome (SOLO) taxonomy: a model to promote dental students' learning.

    Science.gov (United States)

    Lucander, H; Bondemark, L; Brown, G; Knutsson, K

    2010-08-01

Selective memorising of isolated facts or reproducing what is thought to be required - the surface approach to learning - is not the desired outcome for a dental student or a dentist in practice. The preferred outcome is a deep approach, defined by an intention to seek understanding, develop expertise and relate information and knowledge into a coherent whole. The aim of this study was to investigate whether the structure of observed learning outcome (SOLO) taxonomy could be used as a model to assist and promote dental students in developing a deep approach to learning, assessed as learning outcomes in a summative assessment. Thirty-two students, participating in course eight in 2007 at the Faculty of Odontology at Malmö University, were introduced to the SOLO taxonomy and constituted the test group. The control group consisted of 35 students participating in course eight in 2006. The effect of the introduction was measured by evaluating responses to a question in the summative assessment using the SOLO taxonomy. The evaluators were two teachers who performed the assessment of learning outcomes independently and separately on the coded material. The SOLO taxonomy as a model for learning was found to improve the quality of learning: compared to the control group, significantly more strings and structured relations between these strings were present in the test group after the SOLO taxonomy had been introduced (P < …). The SOLO taxonomy is recommended as a model for promoting and developing a deeper approach to learning in dentistry.

  17. A neural network model for credit risk evaluation.

    Science.gov (United States)

    Khashman, Adnan

    2009-08-01

Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real-world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in automatic processing of credit applications.
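As a rough illustration of the kind of model the abstract describes, a small feedforward network trained by backpropagation to make an accept/reject decision, here is a sketch on an invented two-feature dataset. The real system uses the Australian credit data and seven learning schemes; the data, layer sizes and learning rate below are illustrative only.

```python
import numpy as np

# One-hidden-layer network trained by batch backpropagation for a binary
# accept/reject decision. Everything here is an invented toy, not the
# paper's setup.
rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "applications": 2 features; approve when their sum is positive.
X = rng.uniform(-1, 1, size=(300, 2))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (mean cross-entropy loss with a sigmoid output,
    # so the gradient at the output logit is simply (out - y) / N).
    d_out = (out - y) / len(X)
    dW2 = h.T @ d_out; db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h**2)      # tanh derivative
    dW1 = X.T @ d_h; db1 = d_h.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

accuracy = np.mean((out > 0.5) == (y > 0.5))
```

Thresholding the sigmoid output at 0.5 turns the network's score into the approve/reject decision the abstract refers to.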

  18. Learning Control of Fixed-Wing Unmanned Aerial Vehicles Using Fuzzy Neural Networks

    Directory of Open Access Journals (Sweden)

    Erdal Kayacan

    2017-01-01

Full Text Available A learning control strategy is preferred for the control and guidance of a fixed-wing unmanned aerial vehicle to deal with the lack of an accurate model and with flight uncertainties. For learning the plant model as well as changing working conditions online, a fuzzy neural network (FNN) is used in parallel with a conventional P (proportional) controller. Among the learning algorithms in the literature, a derivative-free one, the sliding mode control (SMC) theory-based learning algorithm, is preferred as it has been proved to be computationally efficient in real-time applications. Its proven robustness and finite-time converging nature make the learning algorithm appropriate for controlling an unmanned aerial vehicle, as the computational power is always limited in unmanned aerial vehicles (UAVs). The parameter update rules and stability conditions of the learning are derived, and the proof of the stability of the learning algorithm is shown by using a candidate Lyapunov function. Intensive simulations, which include the tracking of a three-dimensional trajectory by the UAV subject to time-varying wind conditions, are performed to illustrate the applicability of the proposed controller. The simulation results show the efficiency of the proposed control algorithm, especially in real-time control systems, because of its computational efficiency.

  19. Shaping Early Reorganization of Neural Networks Promotes Motor Function after Stroke

    Science.gov (United States)

    Volz, L. J.; Rehme, A. K.; Michely, J.; Nettekoven, C.; Eickhoff, S. B.; Fink, G. R.; Grefkes, C.

    2016-01-01

    Neural plasticity is a major factor driving cortical reorganization after stroke. We here tested whether repetitively enhancing motor cortex plasticity by means of intermittent theta-burst stimulation (iTBS) prior to physiotherapy might promote recovery of function early after stroke. Functional magnetic resonance imaging (fMRI) was used to elucidate underlying neural mechanisms. Twenty-six hospitalized, first-ever stroke patients (time since stroke: 1–16 days) with hand motor deficits were enrolled in a sham-controlled design and pseudo-randomized into 2 groups. iTBS was administered prior to physiotherapy on 5 consecutive days either over ipsilesional primary motor cortex (M1-stimulation group) or parieto-occipital vertex (control-stimulation group). Hand motor function, cortical excitability, and resting-state fMRI were assessed 1 day prior to the first stimulation and 1 day after the last stimulation. Recovery of grip strength was significantly stronger in the M1-stimulation compared to the control-stimulation group. Higher levels of motor network connectivity were associated with better motor outcome. Consistently, control-stimulated patients featured a decrease in intra- and interhemispheric connectivity of the motor network, which was absent in the M1-stimulation group. Hence, adding iTBS to prime physiotherapy in recovering stroke patients seems to interfere with motor network degradation, possibly reflecting alleviation of post-stroke diaschisis. PMID:26980614

  20. Smartphones Promote Autonomous Learning in ESL Classrooms

    Directory of Open Access Journals (Sweden)

    Viji Ramamuruthy

    2015-10-01

Full Text Available The rapid development of high technology has produced new gadgets for all walks of life, regardless of age. In this rapidly advancing technological era, many individuals possess hi-tech gadgets such as laptops, tablets, iPads, android phones and smartphones. Adult learners in higher learning institutions are especially fond of using smartphones. Students become passive in the classroom as they are glued to their smartphones. This situation triggers the question of whether learning really takes place while students are so engaged with their smartphones in the ESL classroom. In this context, the following questions are framed to investigate the issue: What types of learning skills are gained by using smartphones in ESL classrooms? Does smartphone use promote the autonomous learning process? To what extent do learners rely on the lecturers in addition to the usage of smartphones? What learning satisfactions are gained by ESL learners using smartphones? A total of 70 smartphone users in the age range of 18 to 26 years participated in this quantitative study. Questionnaires eliciting the demographic details of the respondents, learning skills, learning satisfaction, students' perceptions of the teacher's role in the ESL classroom, and autonomous learning were distributed to all the randomly chosen participants. The data were then analyzed using SPSS version 16. The findings revealed that smartphone use boosted learners' critical thinking, creative thinking, communication and collaboration skills. In fact, learners gained great satisfaction in the learning process through smartphones. Although learners have moved toward autonomous learning, they still rely on teachers to achieve their learning goals.

  1. Design Of the Approximation Function of a Pedometer based on Artificial Neural Network for the Healthy Life Style Promotion in Diabetic Patients

    OpenAIRE

    Vega Corona, Antonio; Zárate Banda, Magdalena; Barron Adame, Jose Miguel; Martínez Celorio, René Alfredo; Andina de la Fuente, Diego

    2008-01-01

    The present study describes the design of an Artificial Neural Network to synthesize the Approximation Function of a Pedometer for the Healthy Life Style Promotion. Experimentally, the approximation function is synthesized using three basic digital pedometers of low cost, these pedometers were calibrated with an advanced pedometer that calculates calories consumed and computes distance travelled with personal stride input. The synthesized approximation function by means of the designed neural...

  2. Neural coding of basic reward terms of animal learning theory, game theory, microeconomics and behavioural ecology.

    Science.gov (United States)

    Schultz, Wolfram

    2004-04-01

    Neurons in a small number of brain structures detect rewards and reward-predicting stimuli and are active during the expectation of predictable food and liquid rewards. These neurons code the reward information according to basic terms of various behavioural theories that seek to explain reward-directed learning, approach behaviour and decision-making. The involved brain structures include groups of dopamine neurons, the striatum including the nucleus accumbens, the orbitofrontal cortex and the amygdala. The reward information is fed to brain structures involved in decision-making and organisation of behaviour, such as the dorsolateral prefrontal cortex and possibly the parietal cortex. The neural coding of basic reward terms derived from formal theories puts the neurophysiological investigation of reward mechanisms on firm conceptual grounds and provides neural correlates for the function of rewards in learning, approach behaviour and decision-making.

  3. A new avenue to the synthesis of GAG-mimicking polymers highly promoting neural differentiation of embryonic stem cells.

    Science.gov (United States)

    Wang, Mengmeng; Lyu, Zhonglin; Chen, Gaojian; Wang, Hongwei; Yuan, Yuqi; Ding, Kaiguo; Yu, Qian; Yuan, Lin; Chen, Hong

    2015-10-28

A new strategy for the fabrication of glycosaminoglycan (GAG) analogs was proposed by copolymerizing the sulfonated unit and the glyco unit, 'split' from the sulfated saccharide building blocks of GAGs. The synthetic polymers can promote cell proliferation and neural differentiation of embryonic stem cells, with effects even better than those of heparin.

  4. Factors Promoting Vocational Students' Learning at Work: Study on Student Experiences

    Science.gov (United States)

    Virtanen, Anne; Tynjälä, Päivi; Eteläpelto, Anneli

    2014-01-01

    In order to promote effective pedagogical practices for students' work-based learning, we need to understand better how students' learning at work can be supported. This paper examines the factors explaining students' workplace learning (WPL) outcomes, addressing three aspects: (1) student-related individual factors, (2) social and…

  5. Learning Microbiology Through Cooperation: Designing Cooperative Learning Activities that Promote Interdependence, Interaction, and Accountability

    Directory of Open Access Journals (Sweden)

    Janine E. Trempy

    2009-12-01

    Full Text Available A microbiology course and its corresponding learning activities have been structured according to the Cooperative Learning Model. This course, The World According to Microbes, integrates science, math, engineering, and technology (SMET majors and non-SMET majors into teams of students charged with problem solving activities that are microbial in origin. In this study we describe development of learning activities that utilize key components of Cooperative Learning—positive interdependence, promotive interaction, individual accountability, teamwork skills, and group processing. Assessments and evaluations over an 8-year period demonstrate high retention of key concepts in microbiology and high student satisfaction with the course.

  6. A Constrained Multi-Objective Learning Algorithm for Feed-Forward Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    M. Njah

    2017-06-01

Full Text Available This paper proposes a new approach to address the optimal design of a Feed-forward Neural Network (FNN) based classifier. The originality of the proposed methodology, called CMOA, lies in the use of a new constraint-handling technique based on a self-adaptive penalty procedure, in order to direct the entire search effort towards finding only Pareto optimal solutions that are acceptable. Neurons and connections of the FNN classifier are dynamically built during the learning process. The approach includes differential evolution to create new individuals and then keeps only the non-dominated ones as the basis for the next generation. The designed FNN classifier is applied to six binary classification benchmark problems, obtained from the UCI repository, and the results indicate the advantages of the proposed approach over other existing multi-objective evolutionary neural network classifiers reported recently in the literature.

  7. Introduction to Concepts in Artificial Neural Networks

    Science.gov (United States)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  8. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems, and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real-world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
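The perceptron learning rule covered in lecture 2 fits in a few lines. The sketch below runs it on an invented, linearly separable toy problem; the data and constants are illustrative, not from the tutorial.

```python
import numpy as np

# Classic perceptron learning rule: for each misclassified sample, move the
# weight vector toward (target +1) or away from (target -1) that sample.
# It converges in finitely many updates when the data are linearly separable.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(300, 2))
X = X[np.abs(X.sum(axis=1)) > 0.2]        # keep a margin around the boundary
t = np.where(X.sum(axis=1) > 0, 1, -1)    # target labels in {-1, +1}

w = np.zeros(2)
b = 0.0
for _ in range(200):                      # epochs; converges much earlier here
    for x, target in zip(X, t):
        if np.sign(w @ x + b) != target:  # misclassified (or on the boundary)
            w += target * x               # perceptron update
            b += target
```

Because the filtered data have a margin around the true boundary, the perceptron convergence theorem guarantees the loop stops making updates after finitely many mistakes.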

  9. Neural architecture design based on extreme learning machine.

    Science.gov (United States)

    Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis

    2013-12-01

Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons and the corresponding interconnection weights. This problem has been widely studied in many research works, but the solutions usually involve excessive computational cost and do not provide a unique answer. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides a high generalization capability and a unique solution for the architecture design. Moreover, the selected final network only retains those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. A Telescopic Binary Learning Machine for Training Neural Networks.

    Science.gov (United States)

    Brunato, Mauro; Battiti, Roberto

    2017-03-01

    This paper proposes a new algorithm based on multiscale stochastic local search with binary representation for training neural networks [binary learning machine (BLM)]. We study the effects of neighborhood evaluation strategies, the effect of the number of bits per weight and that of the maximum weight range used for mapping binary strings to real values. Following this preliminary investigation, we propose a telescopic multiscale version of local search, where the number of bits is increased in an adaptive manner, leading to a faster search and to local minima of better quality. An analysis related to adapting the number of bits in a dynamic way is presented. The control on the number of bits, which happens in a natural manner in the proposed method, is effective to increase the generalization performance. The learning dynamics are discussed and validated on a highly nonlinear artificial problem and on real-world tasks in many application domains; BLM is finally applied to a problem requiring either feedforward or recurrent architectures for feedback control.
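The binary weight representation described above, a fixed number of bits per weight mapped onto a bounded real range, can be sketched with a small decoder. This is a hypothetical illustration of the encoding idea, not the authors' code; the function name and constants are invented.

```python
# Each weight is a fixed-length bit string mapped to an evenly spaced grid of
# real values in [-w_max, +w_max]. Adding one bit doubles the resolution of
# the grid within the same range, which is the refinement the telescopic
# multiscale search exploits.

def bits_to_weight(bits, w_max):
    """Map a bit string (list of 0/1, most significant bit first) to a real
    weight in [-w_max, w_max]."""
    n = len(bits)
    value = sum(b << (n - 1 - i) for i, b in enumerate(bits))  # 0 .. 2^n - 1
    return -w_max + 2.0 * w_max * value / (2**n - 1)

# 4 bits: 16 evenly spaced weights between -1 and 1.
print(bits_to_weight([0, 0, 0, 0], 1.0))   # -1.0
print(bits_to_weight([1, 1, 1, 1], 1.0))   # 1.0
```

Local search then flips bits of these strings and keeps flips that improve the objective; increasing the number of bits per weight adaptively is what makes the search "telescopic".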

  11. Neural oscillatory mechanisms during novel grammar learning underlying language analytical abilities.

    Science.gov (United States)

    Kepinska, Olga; Pereda, Ernesto; Caspers, Johanneke; Schiller, Niels O

    2017-12-01

    The goal of the present study was to investigate the initial phases of novel grammar learning on a neural level, concentrating on mechanisms responsible for individual variability between learners. Two groups of participants, one with high and one with average language analytical abilities, performed an Artificial Grammar Learning (AGL) task consisting of learning and test phases. During the task, EEG signals from 32 cap-mounted electrodes were recorded and epochs corresponding to the learning phases were analysed. We investigated spectral power modulations over time, and functional connectivity patterns by means of a bivariate, frequency-specific index of phase synchronization termed Phase Locking Value (PLV). Behavioural data showed learning effects in both groups, with a steeper learning curve and higher ultimate attainment for the highly skilled learners. Moreover, we established that cortical connectivity patterns and profiles of spectral power modulations over time differentiated L2 learners with various levels of language analytical abilities. Over the course of the task, the learning process seemed to be driven by whole-brain functional connectivity between neuronal assemblies achieved by means of communication in the beta band frequency. On a shorter time-scale, increasing proficiency on the AGL task appeared to be supported by stronger local synchronisation within the right hemisphere regions. Finally, we observed that the highly skilled learners might have exerted less mental effort, or reduced attention for the task at hand once the learning was achieved, as evidenced by the higher alpha band power. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. The neural circuit basis of learning

    Science.gov (United States)

    Patrick, Kaifosh William John

The astounding capacity for learning ranks among the nervous system's most impressive features. This thesis comprises studies employing varied approaches to improve understanding, at the level of neural circuits, of the brain's capacity for learning. The first part of the thesis contains investigations of hippocampal circuitry -- both theoretical work and experimental work in the mouse Mus musculus -- as a model system for declarative memory. To begin, Chapter 2 presents a theory of hippocampal memory storage and retrieval that reflects nonlinear dendritic processing within hippocampal pyramidal neurons. As a prelude to the experimental work that comprises the remainder of this part, Chapter 3 describes an open source software platform that we have developed for analysis of data acquired with in vivo Ca2+ imaging, the main experimental technique used throughout the remainder of this part of the thesis. As a first application of this technique, Chapter 4 characterizes the content of signaling at synapses between GABAergic neurons of the medial septum and interneurons in stratum oriens of hippocampal area CA1. Chapter 5 then combines these techniques with optogenetic, pharmacogenetic, and pharmacological manipulations to uncover inhibitory circuit mechanisms underlying fear learning. The second part of this thesis focuses on the cerebellum-like electrosensory lobe in the weakly electric mormyrid fish Gnathonemus petersii, as a model system for non-declarative memory. In Chapter 6, we study how short-duration EOD motor commands are recoded into a complex temporal basis in the granule cell layer, which can be used to cancel Purkinje-like cell firing to the longer duration and temporally varying EOD-driven sensory responses. In Chapter 7, we consider not only the temporal aspects of the granule cell code, but also the encoding of body position provided from proprioceptive and efference copy sources. Together these studies clarify how the cerebellum-like circuitry of the…

  13. Promoted neuronal differentiation after activation of alpha4/beta2 nicotinic acetylcholine receptors in undifferentiated neural progenitors.

    Directory of Open Access Journals (Sweden)

    Takeshi Takarada

Full Text Available BACKGROUND: Neural progenitor is a generic term used for undifferentiated cell populations of neural stem, neuronal progenitor and glial progenitor cells with abilities for proliferation and differentiation. We have shown functional expression of ionotropic N-methyl-D-aspartate (NMDA) and gamma-aminobutyrate type-A receptors endowed to positively and negatively regulate subsequent neuronal differentiation in undifferentiated neural progenitors, respectively. In this study, we attempted to evaluate the possible functional expression of nicotinic acetylcholine receptors (nAChRs) by undifferentiated neural progenitors prepared from the neocortex of embryonic rodent brains. METHODOLOGY/PRINCIPAL FINDINGS: Reverse transcription polymerase chain reaction analysis revealed mRNA expression of particular nAChR subunits in undifferentiated rat and mouse progenitors prepared before and after culture with epidermal growth factor under floating conditions. Sustained exposure to nicotine significantly inhibited the formation of neurospheres composed of clustered proliferating cells and 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide reduction activity at a concentration range of 1 µM to 1 mM without affecting cell survival. In these rodent progenitors previously exposed to nicotine, marked promotion was invariably seen for subsequent differentiation into cells immunoreactive for a neuronal marker protein following culture of dispersed cells under adherent conditions. Both effects of nicotine were significantly prevented by the heteromeric α4β2 nAChR subtype antagonists dihydro-β-erythroidine and 4-(5-ethoxy-3-pyridinyl)-N-methyl-(3E)-3-buten-1-amine, but not by the homomeric α7 nAChR subtype antagonist methyllycaconitine, in murine progenitors. Sustained exposure to nicotine preferentially increased the expression of Math1 among the different basic helix-loop-helix proneural genes examined. In undifferentiated progenitors from embryonic mice…

  14. Promoting Constructive Activities that Support Vicarious Learning during Computer-Based Instruction

    Science.gov (United States)

    Gholson, Barry; Craig, Scotty D.

    2006-01-01

    This article explores several ways computer-based instruction can be designed to support constructive activities and promote deep-level comprehension during vicarious learning. Vicarious learning, discussed in the first section, refers to knowledge acquisition under conditions in which the learner is not the addressee and does not physically…

  15. Robust sequential learning of feedforward neural networks in the presence of heavy-tailed noise.

    Science.gov (United States)

    Vuković, Najdan; Miljković, Zoran

    2015-03-01

Feedforward neural networks (FFNN) are among the most used neural networks for modeling various nonlinear problems in engineering. In sequential and especially real-time processing, all neural network models fail when faced with outliers. Outliers are found across a wide range of engineering problems. Recent research results in the field have shown that, to avoid overfitting or divergence of the model, a new approach is needed, especially if the FFNN is to run sequentially or in real time. To accommodate the limitations of FFNNs when training data contain a certain number of outliers, this paper presents a new learning algorithm based on an improvement of the conventional extended Kalman filter (EKF). The extended Kalman filter robust to outliers (EKF-OR) is a probabilistic generative model in which the measurement noise covariance is not constant; the sequence of measurement noise covariances is modeled as a stochastic process over the set of symmetric positive-definite matrices, with the prior modeled as an inverse Wishart distribution. In each iteration, EKF-OR simultaneously estimates the noise covariance and the current best estimate of the FFNN parameters. The Bayesian framework enables one to mathematically derive expressions, while the analytical intractability of the Bayes update step is solved by using a structured variational approximation. All mathematical expressions in the paper are derived from first principles. An extensive experimental study shows that an FFNN trained with the developed learning algorithm achieves low prediction error and good generalization quality regardless of the presence of outliers in the training data. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Social Interaction Affects Neural Outcomes of Sign Language Learning As a Foreign Language in Adults.

    Science.gov (United States)

    Yusa, Noriaki; Kim, Jungho; Koizumi, Masatoshi; Sugiura, Motoaki; Kawashima, Ryuta

    2017-01-01

    Children naturally acquire a language in social contexts where they interact with their caregivers. Indeed, research shows that social interaction facilitates lexical and phonological development at the early stages of child language acquisition. It is not clear, however, whether the relationship between social interaction and learning applies to adult second language acquisition of syntactic rules. Does learning second language syntactic rules through social interactions with a native speaker or without such interactions impact behavior and the brain? The current study aims to answer this question. Adult Japanese participants learned a new foreign language, Japanese sign language (JSL), either through a native deaf signer or via DVDs. Neural correlates of acquiring new linguistic knowledge were investigated using functional magnetic resonance imaging (fMRI). The participants in each group were indistinguishable in terms of their behavioral data after the instruction. The fMRI data, however, revealed significant differences in the neural activities between two groups. Significant activations in the left inferior frontal gyrus (IFG) were found for the participants who learned JSL through interactions with the native signer. In contrast, no cortical activation change in the left IFG was found for the group who experienced the same visual input for the same duration via the DVD presentation. Given that the left IFG is involved in the syntactic processing of language, spoken or signed, learning through social interactions resulted in an fMRI signature typical of native speakers: activation of the left IFG. Thus, broadly speaking, availability of communicative interaction is necessary for second language acquisition and this results in observed changes in the brain.

  17. Hyperresponsiveness of the Neural Fear Network During Fear Conditioning and Extinction Learning in Male Cocaine Users

    NARCIS (Netherlands)

    Kaag, A.M.; Levar, N.; Woutersen, K.; Homberg, J.R.; Brink, W. van den; Reneman, L.; Wingen, G. van

    2016-01-01

    OBJECTIVE: The authors investigated whether cocaine use disorder is associated with abnormalities in the neural underpinnings of aversive conditioning and extinction learning, as these processes may play an important role in the development and persistence of drug abuse. METHOD: Forty male regular

  18. Neural correlates of reward-based spatial learning in persons with cocaine dependence.

    Science.gov (United States)

    Tau, Gregory Z; Marsh, Rachel; Wang, Zhishun; Torres-Sanchez, Tania; Graniello, Barbara; Hao, Xuejun; Xu, Dongrong; Packard, Mark G; Duan, Yunsuo; Kangarlu, Alayar; Martinez, Diana; Peterson, Bradley S

    2014-02-01

    Dysfunctional learning systems are thought to be central to the pathogenesis of and impair recovery from addictions. The functioning of the brain circuits for episodic memory or learning that support goal-directed behavior has not been studied previously in persons with cocaine dependence (CD). Thirteen abstinent CD and 13 healthy participants underwent MRI scanning while performing a task that requires the use of spatial cues to navigate a virtual-reality environment and find monetary rewards, allowing the functional assessment of the brain systems for spatial learning, a form of episodic memory. Whereas both groups performed similarly on the reward-based spatial learning task, we identified disturbances in brain regions involved in learning and reward in CD participants. In particular, CD was associated with impaired functioning of medial temporal lobe (MTL), a brain region that is crucial for spatial learning (and episodic memory) with concomitant recruitment of striatum (which normally participates in stimulus-response, or habit, learning), and prefrontal cortex. CD was also associated with enhanced sensitivity of the ventral striatum to unexpected rewards but not to expected rewards earned during spatial learning. We provide evidence that spatial learning in CD is characterized by disturbances in functioning of an MTL-based system for episodic memory and a striatum-based system for stimulus-response learning and reward. We have found additional abnormalities in distributed cortical regions. Consistent with findings from animal studies, we provide the first evidence in humans describing the disruptive effects of cocaine on the coordinated functioning of multiple neural systems for learning and memory.

  19. Cocaine self-administration abolishes associative neural encoding in the nucleus accumbens necessary for higher-order learning.

    Science.gov (United States)

    Saddoris, Michael P; Carelli, Regina M

    2014-01-15

Cocaine use is often associated with diminished cognitive function, persisting even after abstinence from the drug. Likely targets for these changes are the core and shell of the nucleus accumbens (NAc), which are critical for mediating the rewarding aspects of drugs of abuse as well as supporting associative learning. To understand this deficit, we recorded neural activity in the NAc of rats with a history of cocaine self-administration or control subjects while they learned Pavlovian first- and second-order associations. Rats were trained for 2 weeks to self-administer intravenous cocaine or water. Later, rats learned a first-order Pavlovian discrimination where a conditioned stimulus (CS+) predicted food, and a control (CS-) did not. Rats then learned a second-order association where, absent any food reinforcement, a novel cue (SOC+) predicted the CS+ and another (SOC-) predicted the CS-. Electrophysiological recordings were taken during performance of these tasks in the NAc core and shell. Both control subjects and cocaine-experienced rats learned the first-order association, but only control subjects learned the second-order association. Neural recordings indicated that core and shell neurons encoded task-relevant information that correlated with behavioral performance, whereas this type of encoding was abolished in cocaine-experienced rats. The NAc core and shell perform complementary roles in supporting normal associative learning, functions that are impaired after cocaine experience. This impoverished encoding of motivational behavior, even after abstinence from the drug, might provide a key mechanism to understand why addiction remains a chronically relapsing disorder despite repeated attempts at sobriety. Copyright © 2014 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  20. Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances

    International Nuclear Information System (INIS)

    Fiete, Ila R.; Seung, H. Sebastian

    2006-01-01

We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of 'empiric' synapses driven by random spike trains from an external source.
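The core idea in this record — estimating a gradient by correlating random perturbations with the resulting fluctuation of the objective — can be sketched in a few lines. This is a weight-perturbation toy stand-in, not the authors' conductance-based implementation; the quadratic `loss` and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # toy objective standing in for the network's performance measure
    return np.sum((w - 1.0) ** 2)

def perturbation_gradient(w, sigma=1e-3, n_samples=2000):
    """Estimate the gradient of loss(w) by injecting small random
    perturbations and correlating them with the measured fluctuations
    of the objective."""
    base = loss(w)
    g = np.zeros_like(w)
    for _ in range(n_samples):
        xi = rng.normal(0.0, sigma, size=w.shape)
        g += (loss(w + xi) - base) * xi     # fluctuation times perturbation
    return g / (n_samples * sigma ** 2)

w = np.zeros(3)
g_est = perturbation_gradient(w)
g_true = 2.0 * (w - 1.0)                    # analytic gradient for comparison
```

The estimator needs many trials to average out noise, which is the price such biologically plausible rules pay for avoiding explicit backpropagation.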

  1. Classroom Environments That Promote Learning from the Perspective of School Children

    Directory of Open Access Journals (Sweden)

    Marianella Castro-Pérez

    2015-09-01

Full Text Available The following paper is based on research conducted on school environments that promote learning in children. Its objective was “to determine the physical and socio-emotional factors of school environments that promote learning.” To this end, the investigation followed an exploratory and descriptive approach to the various physical and emotional elements that influence the classroom environment and, therefore, the learning process. In this paper, reference is made only to the data provided by the child population. This group comprised 307 boys and girls from public schools in six provinces of the country, intentionally selected through coordination and negotiation with the authorities of schools that agreed to participate. The data collection instruments used were two questionnaires with closed and open questions, an anecdotal record, and a guide with which the observation technique was performed. The analysis of the information derived from the technique and instruments used was developed by complementing quantitative data with qualitative data; emerging categories were created to interpret the latter. The information provided by the boys and girls will hopefully serve as input to raise awareness among universities, authorities and teachers about the imperative need for school environments that are aesthetic, pleasant, motivating, comfortable and clean, and that promote the emotional stability every human being requires for the learning process to be successful.

  2. Olfactory bulb encoding during learning under anaesthesia

    Directory of Open Access Journals (Sweden)

    Alister U Nicol

    2014-06-01

Full Text Available Neural plasticity changes within the olfactory bulb are important for olfactory learning, although how changes in neural encoding support new associations with specific odours, and whether they can be investigated under anaesthesia, remain unclear. Using the social transmission of food preference olfactory learning paradigm in mice in conjunction with in vivo microdialysis sampling, we have shown firstly that a learned preference for a scented food odour smelled on the breath of a demonstrator animal occurs under isofluorane anaesthesia. Furthermore, subsequent exposure to this cued odour under anaesthesia promotes the same pattern of increased release of glutamate and GABA in the olfactory bulb as previously found in conscious animals following olfactory learning, and evoked GABA release was positively correlated with the amount of scented food eaten. In a second experiment, multiarray (24-electrode) electrophysiological recordings were made from olfactory bulb mitral cells under isofluorane anaesthesia before, during and after a novel scented food odour was paired with carbon disulfide. Results showed significant increases in overall firing frequency to the cued odour during and after learning, and decreases in response to an uncued odour. Analysis of patterns of changes in individual neurons revealed that a substantial proportion (>50%) of them significantly changed their response profiles during and after learning, with most of those previously inhibited becoming excited. A large number of cells exhibiting no response to the odours prior to learning were either excited or inhibited afterwards. With the uncued odour many previously responsive cells became unresponsive or inhibited. Learning-associated changes only occurred in the posterior part of the olfactory bulb. Thus olfactory learning under anaesthesia promotes extensive, but spatially distinct, changes in mitral cell networks to both cued and uncued odours as well as in evoked glutamate and GABA release.

  3. Using Facebook to Promote Learning: A Case Study

    Science.gov (United States)

    Schoper, Sarah E.; Hill, Aaron R.

    2017-01-01

    A growing body of research is examining the use of social media on college campuses. This study explores the use of one social media outlet, specifically Facebook's closed group feature, in two graduate courses. Findings show that using Facebook can promote student learning. Students used the groups for sharing ideas and support, asking questions,…

  4. Neural networks involved in learning lexical-semantic and syntactic information in a second language.

    Science.gov (United States)

    Mueller, Jutta L; Rueschemeyer, Shirley-Ann; Ono, Kentaro; Sugiura, Motoaki; Sadato, Norihiro; Nakamura, Akinori

    2014-01-01

    The present study used functional magnetic resonance imaging (fMRI) to investigate the neural correlates of language acquisition in a realistic learning environment. Japanese native speakers were trained in a miniature version of German prior to fMRI scanning. During scanning they listened to (1) familiar sentences, (2) sentences including a novel sentence structure, and (3) sentences containing a novel word while visual context provided referential information. Learning-related decreases of brain activation over time were found in a mainly left-hemispheric network comprising classical frontal and temporal language areas as well as parietal and subcortical regions and were largely overlapping for novel words and the novel sentence structure in initial stages of learning. Differences occurred at later stages of learning during which content-specific activation patterns in prefrontal, parietal and temporal cortices emerged. The results are taken as evidence for a domain-general network supporting the initial stages of language learning which dynamically adapts as learners become proficient.

  5. A Model to Explain the Emergence of Reward Expectancy neurons using Reinforcement Learning and Neural Network

    OpenAIRE

    Shinya, Ishii; Munetaka, Shidara; Katsunari, Shibata

    2006-01-01

In an experiment of a multi-trial task to obtain a reward, reward expectancy neurons, which responded only in the non-reward trials that are necessary to advance toward the reward, have been observed in the anterior cingulate cortex of monkeys. In this paper, to explain the emergence of the reward expectancy neuron in terms of reinforcement learning theory, a model that consists of a recurrent neural network trained based on reinforcement learning is proposed. The analysis of the hi...

  6. A Neural Circuit for Acoustic Navigation combining Heterosynaptic and Non-synaptic Plasticity that learns Stable Trajectories

    DEFF Research Database (Denmark)

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    controllers be resolved in a manner that generates consistent and stable robot trajectories? We propose a neural circuit that minimises this conflict by learning sensorimotor mappings as neuronal transfer functions between the perceived sound direction and wheel velocities of a simulated non-holonomic mobile...

  7. Machine learning of radial basis function neural network based on Kalman filter: Introduction

    Directory of Open Access Journals (Sweden)

    Vuković Najdan L.

    2014-01-01

    Full Text Available This paper analyzes machine learning of radial basis function neural network based on Kalman filtering. Three algorithms are derived: linearized Kalman filter, linearized information filter and unscented Kalman filter. We emphasize basic properties of these estimation algorithms, demonstrate how their advantages can be used for optimization of network parameters, derive mathematical models and show how they can be applied to model problems in engineering practice.
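Because the output of a radial basis function network is linear in its output weights (for fixed centres), the linearized Kalman filter update mentioned in this record is exact for those weights. A minimal numpy sketch, with an assumed toy target function; the centres, widths, and noise variance are illustrative, not from the paper:

```python
import numpy as np

def rbf_features(x, centers, width=1.0):
    # Gaussian basis-function activations for a scalar input x
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# toy target: fit y = sin(x) on [-3, 3] with fixed centres; only the
# linear output weights are estimated, so the Kalman update is exact
centers = np.linspace(-3.0, 3.0, 9)
w = np.zeros(9)               # output weights = the estimated state
P = np.eye(9) * 10.0          # state covariance (broad prior)
R = 1e-2                      # assumed measurement-noise variance

rng = np.random.default_rng(1)
for _ in range(500):
    x = rng.uniform(-3.0, 3.0)
    y = np.sin(x) + rng.normal(0.0, 0.05)
    h = rbf_features(x, centers)      # measurement vector (Jacobian)
    S = h @ P @ h + R                 # innovation variance
    K = P @ h / S                     # Kalman gain
    w = w + K * (y - h @ w)           # update weights from the innovation
    P = P - np.outer(K, h) @ P        # covariance update

err = abs(np.sin(1.0) - rbf_features(1.0, centers) @ w)
```

For networks where centres and widths are also adapted, the model becomes nonlinear in its parameters, which is where the linearized and unscented variants derived in the paper differ.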

  8. Improve 3D laser scanner measurements accuracy using a FFBP neural network with Widrow-Hoff weight/bias learning function

    Science.gov (United States)

    Rodríguez-Quiñonez, J. C.; Sergiyenko, O.; Hernandez-Balbuena, D.; Rivas-Lopez, M.; Flores-Fuentes, W.; Basaca-Preciado, L. C.

    2014-12-01

Many laser scanners depend on their mechanical construction to guarantee measurement accuracy; however, current computational technologies allow us to improve these measurements by mathematical methods implemented in neural networks. In this article we introduce current laser scanner technologies, describe our 3D laser scanner, and adjust its measurement error using a previously trained feed forward back propagation (FFBP) neural network with a Widrow-Hoff weight/bias learning function. A comparative analysis with other learning functions, such as the Kohonen algorithm and the gradient descent with momentum algorithm, is presented. Finally, computational simulations are conducted to verify the performance and method uncertainty of the proposed system.
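The Widrow-Hoff (delta/LMS) rule named in this record updates each weight in proportion to the prediction error times its input. A minimal sketch on a synthetic error-correction task; the linear distortion model and all constants here are invented for illustration, not taken from the scanner in the paper:

```python
import numpy as np

# toy stand-in: learn to correct a systematic linear distance error
rng = np.random.default_rng(2)
measured = rng.uniform(0.5, 5.0, size=1000)
reference = 0.98 * measured + 0.05          # assumed ground-truth distances

w, b = 1.0, 0.0                             # weight and bias to be learned
eta = 0.01                                  # Widrow-Hoff learning rate
for _ in range(3):                          # a few passes over the data
    for x, d in zip(measured, reference):
        e = d - (w * x + b)                 # prediction error
        w += eta * e * x                    # delta rule: weight update
        b += eta * e                        # delta rule: bias update
```

After training, `w` and `b` recover the assumed distortion model, so corrected readings `w*x + b` track the reference distances.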

  9. Polysaccharides from Ganoderma lucidum Promote Cognitive Function and Neural Progenitor Proliferation in Mouse Model of Alzheimer's Disease.

    Science.gov (United States)

    Huang, Shichao; Mao, Jianxin; Ding, Kan; Zhou, Yue; Zeng, Xianglu; Yang, Wenjuan; Wang, Peipei; Zhao, Cun; Yao, Jian; Xia, Peng; Pei, Gang

    2017-01-10

    Promoting neurogenesis is a promising strategy for the treatment of cognition impairment associated with Alzheimer's disease (AD). Ganoderma lucidum is a revered medicinal mushroom for health-promoting benefits in the Orient. Here, we found that oral administration of the polysaccharides and water extract from G. lucidum promoted neural progenitor cell (NPC) proliferation to enhance neurogenesis and alleviated cognitive deficits in transgenic AD mice. G. lucidum polysaccharides (GLP) also promoted self-renewal of NPC in cell culture. Further mechanistic study revealed that GLP potentiated activation of fibroblast growth factor receptor 1 (FGFR1) and downstream extracellular signal-regulated kinase (ERK) and AKT cascades. Consistently, inhibition of FGFR1 effectively blocked the GLP-promoted NPC proliferation and activation of the downstream cascades. Our findings suggest that GLP could serve as a regenerative therapeutic agent for the treatment of cognitive decline associated with neurodegenerative diseases. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks.

    Science.gov (United States)

    Huynh, Benjamin Q; Li, Hui; Giger, Maryellen L

    2016-07-01

    Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features [area under the ROC curve [Formula: see text
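A detail worth noting in this record is that cross-validation was performed by lesion, so that multiple images of the same lesion never appear in both training and test folds. A sketch of that grouping logic; the lesion-to-image assignment below is random and illustrative, only the counts (219 lesions, 607 images, 5 folds) come from the record:

```python
import numpy as np

rng = np.random.default_rng(10)
# illustrative assignment of the 607 images to the 219 lesions
lesion_of_image = rng.integers(0, 219, size=607)

lesions = rng.permutation(219)
folds = np.array_split(lesions, 5)          # five folds of whole lesions
for test_lesions in folds:
    test_mask = np.isin(lesion_of_image, test_lesions)
    train_ids = set(lesion_of_image[~test_mask])
    test_ids = set(lesion_of_image[test_mask])
    # no lesion contributes images to both sides of the split
    assert not train_ids & test_ids
    # ... train the classifier on ~test_mask images, score AUC on test_mask ...
```

Splitting by image instead of by lesion would leak near-duplicate views of the same lesion across the split and inflate the measured AUC.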

  11. The neural basis of implicit learning and memory: a review of neuropsychological and neuroimaging research.

    Science.gov (United States)

    Reber, Paul J

    2013-08-01

    Memory systems research has typically described the different types of long-term memory in the brain as either declarative versus non-declarative or implicit versus explicit. These descriptions reflect the difference between declarative, conscious, and explicit memory that is dependent on the medial temporal lobe (MTL) memory system, and all other expressions of learning and memory. The other type of memory is generally defined by an absence: either the lack of dependence on the MTL memory system (nondeclarative) or the lack of conscious awareness of the information acquired (implicit). However, definition by absence is inherently underspecified and leaves open questions of how this type of memory operates, its neural basis, and how it differs from explicit, declarative memory. Drawing on a variety of studies of implicit learning that have attempted to identify the neural correlates of implicit learning using functional neuroimaging and neuropsychology, a theory of implicit memory is presented that describes it as a form of general plasticity within processing networks that adaptively improve function via experience. Under this model, implicit memory will not appear as a single, coherent, alternative memory system but will instead be manifested as a principle of improvement from experience based on widespread mechanisms of cortical plasticity. The implications of this characterization for understanding the role of implicit learning in complex cognitive processes and the effects of interactions between types of memory will be discussed for examples within and outside the psychology laboratory. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Simplification of neural network model for predicting local power distributions of BWR fuel bundle using learning algorithm with forgetting

    International Nuclear Information System (INIS)

    Tanabe, Akira; Yamamoto, Toru; Shinfuku, Kimihiro; Nakamae, Takuji; Nishide, Fusayo.

    1995-01-01

Previously, a two-layered neural network model was developed to predict the relation between the fissile enrichment of each fuel rod and the local power distribution in a BWR fuel bundle. This model was obtained intuitively, based on 33 patterns of training signals, after an intensive survey of candidate models. Recently, a learning algorithm with forgetting was reported to simplify neural network models. It is an interesting question what kind of model is obtained when this algorithm is applied to a more complex three-layered model that learns the same training signals. A three-layered model, expanded to have direct connections between the 1st- and 3rd-layer elements, was constructed, and the normal back propagation learning method was first applied to it. The forgetting algorithm was then added to this learning process. The connections involving the 2nd-layer elements disappeared, and the 2nd layer became unnecessary. Learning the same training signals took about an order of magnitude more computing time than simple back propagation, but the two-layered model was obtained autonomously from the expanded three-layered model. (author)
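The "learning with forgetting" idea adds a constant decay toward zero to every weight update, so that connections not supported by the training signals fade away — which is how the 2nd-layer connections in this record were pruned autonomously. A toy single-layer sketch under invented data and constants (the third input is deliberately irrelevant):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1]       # the third input is irrelevant

w = rng.normal(size=3)
eta, eps = 0.01, 1e-3                   # learning rate, forgetting strength
for x_i, y_i in zip(X, y):
    e = y_i - w @ x_i
    w += eta * e * x_i                  # ordinary gradient (LMS) step
    w -= eps * np.sign(w)               # forgetting: constant decay to zero
```

The unsupported weight `w[2]` decays to near zero while the useful weights survive; the price is that surviving weights are biased slightly toward zero (by roughly `eps/eta`), mirroring the trade-off the record notes between simplification and extra training cost.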

  13. A design-based approach with vocational teachers to promote self-regulated learning

    NARCIS (Netherlands)

    Jossberger, Helen; Brand-Gruwel, Saskia; Van de Wiel, Margje; Boshuizen, Els

    2011-01-01

    Jossberger, H., Brand-Gruwel, S., Van de Wiel, M., & Boshuizen, H. P. A. (2011, August). A design-based approach with vocational teachers to promote self-regulated learning. Presentation at the 14th European Conference for Research on Learning and Instruction (EARLI), Exeter, England.

  14. Towards a lifelong learning society through reading promotion: Opportunities and challenges for libraries and community learning centres in Viet Nam

    Science.gov (United States)

    Hossain, Zakir

    2016-04-01

    The government of Viet Nam has made a commitment to build a Lifelong Learning Society by 2020. A range of related initiatives have been launched, including the Southeast Asian Ministers of Education Organization Centre for Lifelong Learning (SEAMEO CELLL) and "Book Day" - a day aimed at encouraging reading and raising awareness of its importance for the development of knowledge and skills. Viet Nam also aims to implement lifelong learning (LLL) activities in libraries, museums, cultural centres and clubs. The government of Viet Nam currently operates more than 11,900 Community Learning Centres (CLCs) and is in the process of both renovating and innovating public libraries and museums throughout the country. In addition to the work undertaken by the Viet Nam government, a number of enterprises have been initiated by non-governmental organisations and non-profit organisations to promote literacy and lifelong learning. This paper investigates some government initiatives focused on libraries and CLCs and their impact on reading promotion. Proposing a way forward, the paper confirms that Viet Nam's libraries and CLCs play an essential role in promoting reading and building a LLL Society.

  15. Individual differences in sensitivity to reward and punishment and neural activity during reward and avoidance learning.

    Science.gov (United States)

    Kim, Sang Hee; Yoon, HeungSik; Kim, Hackjin; Hamann, Stephan

    2015-09-01

    In this functional neuroimaging study, we investigated neural activations during the process of learning to gain monetary rewards and to avoid monetary loss, and how these activations are modulated by individual differences in reward and punishment sensitivity. Healthy young volunteers performed a reinforcement learning task where they chose one of two fractal stimuli associated with monetary gain (reward trials) or avoidance of monetary loss (avoidance trials). Trait sensitivity to reward and punishment was assessed using the behavioral inhibition/activation scales (BIS/BAS). Functional neuroimaging results showed activation of the striatum during the anticipation and reception periods of reward trials. During avoidance trials, activation of the dorsal striatum and prefrontal regions was found. As expected, individual differences in reward sensitivity were positively associated with activation in the left and right ventral striatum during reward reception. Individual differences in sensitivity to punishment were negatively associated with activation in the left dorsal striatum during avoidance anticipation and also with activation in the right lateral orbitofrontal cortex during receiving monetary loss. These results suggest that learning to attain reward and learning to avoid loss are dependent on separable sets of neural regions whose activity is modulated by trait sensitivity to reward or punishment. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
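The task in this record fits a standard prediction-error account: action values are updated by the difference between outcome and expectation, with reward trials paying {+1, 0} and avoidance trials paying {0, −1}. A minimal softmax Q-learning sketch; all parameter values are illustrative, not fitted to the study's data:

```python
import numpy as np

rng = np.random.default_rng(4)

def run_condition(p_good, payoff, alpha=0.2, beta=3.0, n_trials=300):
    """Two-choice task: option 0 yields the better outcome with prob
    p_good; `payoff` maps success/failure to value, e.g. (1, 0) for
    reward trials and (0, -1) for avoidance trials."""
    q = np.zeros(2)
    picks_best = 0
    p_hit = [p_good, 1.0 - p_good]
    for _ in range(n_trials):
        pr = np.exp(beta * q)
        pr /= pr.sum()                      # softmax choice probabilities
        a = rng.choice(2, p=pr)
        r = payoff[0] if rng.random() < p_hit[a] else payoff[1]
        q[a] += alpha * (r - q[a])          # prediction-error update
        picks_best += (a == 0)
    return picks_best / n_trials

reward_acc = run_condition(0.8, (1, 0))     # learn to gain reward
avoid_acc = run_condition(0.8, (0, -1))     # learn to avoid loss
```

The same update rule learns both conditions; the study's point is that the brain regions tracking these formally similar signals, and their modulation by BIS/BAS traits, are separable.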

  16. Recurrent fuzzy neural network by using feedback error learning approaches for LFC in interconnected power system

    International Nuclear Information System (INIS)

    Sabahi, Kamel; Teshnehlab, Mohammad; Shoorhedeli, Mahdi Aliyari

    2009-01-01

    In this study, a new adaptive controller based on modified feedback error learning (FEL) approaches is proposed for load frequency control (LFC) problem. The FEL strategy consists of intelligent and conventional controllers in feedforward and feedback paths, respectively. In this strategy, a conventional feedback controller (CFC), i.e. proportional, integral and derivative (PID) controller, is essential to guarantee global asymptotic stability of the overall system; and an intelligent feedforward controller (INFC) is adopted to learn the inverse of the controlled system. Therefore, when the INFC learns the inverse of controlled system, the tracking of reference signal is done properly. Generally, the CFC is designed at nominal operating conditions of the system and, therefore, fails to provide the best control performance as well as global stability over a wide range of changes in the operating conditions of the system. So, in this study a supervised controller (SC), a lookup table based controller, is addressed for tuning of the CFC. During abrupt changes of the power system parameters, the SC adjusts the PID parameters according to these operating conditions. Moreover, for improving the performance of overall system, a recurrent fuzzy neural network (RFNN) is adopted in INFC instead of the conventional neural network, which was used in past studies. The proposed FEL controller has been compared with the conventional feedback error learning controller (CFEL) and the PID controller through some performance indices
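The FEL structure — the feedback controller's output used as the teaching signal for a feedforward inverse model — can be shown with a scalar plant and a linear-in-parameters feedforward stage standing in for the RFNN. The plant, gain, and learning rate below are invented for illustration:

```python
import numpy as np

# scalar plant assumed for illustration: y[k+1] = 0.9*y[k] + 0.5*u[k]
a_p, b_p = 0.9, 0.5

kp = 0.8                       # proportional feedback gain (the CFC role)
eta = 0.05                     # feedforward learning rate
w = np.zeros(2)                # linear feedforward stand-in for the INFC

n = 5000
r = np.sin(0.5 * np.arange(n + 1))          # reference trajectory
y = 0.0
fb_mag = []
for k in range(n):
    phi = np.array([r[k + 1], r[k]])        # features of the reference
    u_ff = w @ phi                          # feedforward (inverse model)
    u_fb = kp * (r[k] - y)                  # feedback error signal
    w += eta * u_fb * phi                   # FEL: feedback output trains INFC
    y = a_p * y + b_p * (u_ff + u_fb)       # plant step
    fb_mag.append(abs(u_fb))

early = np.mean(fb_mag[:200])
late = np.mean(fb_mag[-200:])
```

As the feedforward stage absorbs the inverse dynamics, the feedback contribution shrinks, which is the sense in which the CFC guarantees stability while the INFC takes over tracking.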

  17. Oscillations, neural computations and learning during wake and sleep.

    Science.gov (United States)

    Penagos, Hector; Varela, Carmen; Wilson, Matthew A

    2017-06-01

    Learning and memory theories consider sleep and the reactivation of waking hippocampal neural patterns to be crucial for the long-term consolidation of memories. Here we propose that precisely coordinated representations across brain regions allow the inference and evaluation of causal relationships to train an internal generative model of the world. This training starts during wakefulness and strongly benefits from sleep because its recurring nested oscillations may reflect compositional operations that facilitate a hierarchical processing of information, potentially including behavioral policy evaluations. This suggests that an important function of sleep activity is to provide conditions conducive to general inference, prediction and insight, which contribute to a more robust internal model that underlies generalization and adaptive behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Learning and retrieval behavior in recurrent neural networks with pre-synaptic dependent homeostatic plasticity

    Science.gov (United States)

    Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.

    2017-08-01

The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. Numerous kinds of such phenomena are currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that large learning periods are important in order to improve the network learning capacity and discuss this ability in the presence of distinct inhibitory currents.
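Homeostatic scaling of the kind named in this record multiplicatively adjusts each neuron's incoming weights, weighted by pre-synaptic activity, until total input drive reaches a set point. A toy fixed-rate sketch of that normalization; the rates, target, and gain are illustrative, and the paper's rule operates on spike timing rather than static rates:

```python
import numpy as np

rng = np.random.default_rng(6)
W = rng.uniform(0.0, 1.0, size=(5, 8))   # W[i, j]: synapse from pre j to post i
rates = rng.uniform(1.0, 10.0, size=8)   # pre-synaptic firing rates
target = 10.0                            # homeostatic set point per neuron
eps = 0.001                              # scaling gain

for _ in range(2000):
    drive = W @ rates                    # each neuron's total input drive
    # multiplicative update, scaled by each synapse's pre-synaptic rate:
    # over-driven neurons weaken their inputs, under-driven ones strengthen
    W *= 1.0 + eps * (target - drive)[:, None] * rates[None, :]

drive = W @ rates
```

Because the update is multiplicative, relative weight differences (the stored spatio-temporal associations) are preserved while the overall drive is normalized.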

  19. Drive Control Scheme of Electric Power Assisted Wheelchair Based on Neural Network Learning of Human Wheelchair Operation Characteristics

    Science.gov (United States)

    Tanohata, Naoki; Seki, Hirokazu

    This paper describes a novel drive control scheme of electric power assisted wheelchairs based on neural network learning of human wheelchair operation characteristics. “Electric power assisted wheelchair” which enhances the drive force of the operator by employing electric motors is expected to be widely used as a mobility support system for elderly and disabled people. However, some handicapped people with paralysis of the muscles of one side of the body cannot maneuver the wheelchair as desired because of the difference in the right and left input force. Therefore, this study proposes a neural network learning system of such human wheelchair operation characteristics and a drive control scheme with variable distribution and assistance ratios. Some driving experiments will be performed to confirm the effectiveness of the proposed control system.

  20. THE DISTANCE EDUCATION TO PROMOTE CONTINUOUS LEARNING OF HEALTH PROFESSIONALS: REVIEW

    Directory of Open Access Journals (Sweden)

    Lívia Lima Ferraz

    2013-03-01

Full Text Available The results of many articles and research studies show that employment plays an important role in continuous learning. The main factors that made this continuous education possible were advances in information technology and the flexibility of distance education. The evolution of on-line continuing education helps health care professionals develop many fundamental learning skills, such as self-assessment and self-criticism. Therefore, this article's objective is to identify how public policies could promote continuous learning of health professionals through distance education (DE) and the contributions of this education format to the transformation of health activities. In conclusion, the results were that distance education (DE) was an important strategy for permanent education, because DE develops good learning skills and breaks territorial barriers. Therefore, distance education has become an effective learning format.

  1. Adolescent-specific patterns of behavior and neural activity during social reinforcement learning.

    Science.gov (United States)

    Jones, Rebecca M; Somerville, Leah H; Li, Jian; Ruberry, Erika J; Powers, Alisa; Mehta, Natasha; Dyke, Jonathan; Casey, B J

    2014-06-01

    Humans are sophisticated social beings. Social cues from others are exceptionally salient, particularly during adolescence. Understanding how adolescents interpret and learn from variable social signals can provide insight into the observed shift in social sensitivity during this period. The present study tested 120 participants between the ages of 8 and 25 years on a social reinforcement learning task where the probability of receiving positive social feedback was parametrically manipulated. Seventy-eight of these participants completed the task during fMRI scanning. Modeling trial-by-trial learning, children and adults showed higher positive learning rates than did adolescents, suggesting that adolescents demonstrated less differentiation in their reaction times for peers who provided more positive feedback. Forming expectations about receiving positive social reinforcement correlated with neural activity within the medial prefrontal cortex and ventral striatum across age. Adolescents, unlike children and adults, showed greater insular activity during positive prediction error learning and increased activity in the supplementary motor cortex and the putamen when receiving positive social feedback regardless of the expected outcome, suggesting that peer approval may motivate adolescents toward action. While different amounts of positive social reinforcement enhanced learning in children and adults, all positive social reinforcement equally motivated adolescents. Together, these findings indicate that sensitivity to peer approval during adolescence goes beyond simple reinforcement theory accounts and suggest possible explanations for how peers may motivate adolescent behavior.
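The trial-by-trial modeling described here is a Rescorla-Wagner style update of the expected probability of positive social feedback. A minimal sketch with separate learning rates for positive and negative prediction errors — the kind of asymmetry that distinguishes the age groups — using illustrative, not fitted, parameter values:

```python
import numpy as np

def expectation_trajectory(feedback, alpha_pos=0.3, alpha_neg=0.3):
    """Trial-by-trial expectation of positive feedback from one peer,
    updated with separate learning rates for positive and negative
    prediction errors (illustrative values, not the study's fits)."""
    v = 0.5                                 # initial expectation
    traj = []
    for f in feedback:
        pe = f - v                          # prediction error
        v += (alpha_pos if pe > 0 else alpha_neg) * pe
        traj.append(v)
    return traj

# a peer who gives positive feedback on 80% of trials
rng = np.random.default_rng(7)
fb = (rng.random(200) < 0.8).astype(float)
traj = expectation_trajectory(fb)
```

In this framing, the finding that children and adults showed higher positive learning rates than adolescents corresponds to a larger `alpha_pos`, i.e. expectations that differentiate more sharply between generous and stingy peers.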

  2. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images.

    Science.gov (United States)

    Cheng, Phillip M; Malhi, Harshawn S

    2017-04-01

The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant (p < …). Transfer learning with convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
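The key move in this record — freezing the pretrained convolutional layers as a fixed feature extractor and retraining only the classifier head — can be illustrated without the actual networks. Below, a fixed random projection stands in for the frozen CNN layers and a logistic classifier is the retrained head; all data, labels, and dimensions are synthetic:

```python
import numpy as np

rng = np.random.default_rng(8)

# frozen "pretrained" extractor: a fixed random projection + ReLU
# stands in for the frozen convolutional layers (purely illustrative)
W_frozen = rng.normal(size=(64, 16))

def features(x):
    return np.maximum(0.0, x @ W_frozen)    # never updated during training

# synthetic inputs with labels that are decodable from the features
X = rng.normal(size=(400, 64))
v_true = rng.normal(size=16)
scores = features(X) @ v_true
y = (scores > np.median(scores)).astype(float)

# retrained head: logistic classifier on the frozen features
w = np.zeros(16)
b = 0.0
eta = 0.01
for _ in range(50):                         # epochs over the training set
    for x_i, y_i in zip(X, y):
        z = np.clip(features(x_i) @ w + b, -30.0, 30.0)
        p = 1.0 / (1.0 + np.exp(-z))
        g = p - y_i                         # logistic-loss gradient
        w -= eta * g * features(x_i)        # only the head is updated
        b -= eta * g

z = np.clip(features(X) @ w + b, -30.0, 30.0)
acc = np.mean(((1.0 / (1.0 + np.exp(-z))) > 0.5) == y)
```

Only the small head is trained, which is why this strategy works with far fewer labeled medical images than training a CNN from scratch would require.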

  3. Modeling a Neural Network as a Teaching Tool for the Learning of the Structure-Function Relationship

    Science.gov (United States)

    Salinas, Dino G.; Acevedo, Cristian; Gomez, Christian R.

    2010-01-01

    The authors describe an activity they have created in which students can visualize a theoretical neural network whose states evolve according to a well-known simple law. This activity provided an uncomplicated approach to a paradigm commonly represented through complex mathematical formulation. From their observations, students learned many basic…

  4. Sharpened cortical tuning and enhanced cortico-cortical communication contribute to the long-term neural mechanisms of visual motion perceptual learning.

    Science.gov (United States)

    Chen, Nihong; Bi, Taiyong; Zhou, Tiangang; Li, Sheng; Liu, Zili; Fang, Fang

    2015-07-15

    Much has been debated about whether the neural plasticity mediating perceptual learning takes place at the sensory or decision-making stage in the brain. To investigate this, we trained human subjects in a visual motion direction discrimination task. Behavioral performance and BOLD signals were measured before, immediately after, and two weeks after training. Parallel to subjects' long-lasting behavioral improvement, the neural selectivity in V3A and the effective connectivity from V3A to IPS (intraparietal sulcus, a motion decision-making area) exhibited a persistent increase for the trained direction. Moreover, the improvement was well explained by a linear combination of the selectivity and connectivity increases. These findings suggest that the long-term neural mechanisms of motion perceptual learning are implemented by sharpening cortical tuning to trained stimuli at the sensory processing stage, as well as by optimizing the connections between sensory and decision-making areas in the brain. Copyright © 2015 Elsevier Inc. All rights reserved.
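The claim that the improvement "was well explained by a linear combination" of the two neural changes corresponds to an ordinary least-squares fit of behavioral improvement on selectivity and connectivity increases. A synthetic illustration of that analysis; all values below are invented, not the study's data:

```python
import numpy as np

# synthetic per-subject values standing in for the measured increases
rng = np.random.default_rng(9)
selectivity = rng.normal(1.0, 0.3, size=20)     # V3A selectivity increase
connectivity = rng.normal(0.5, 0.2, size=20)    # V3A-to-IPS connectivity increase
improvement = (0.6 * selectivity + 0.8 * connectivity
               + rng.normal(0.0, 0.05, size=20))

# ordinary least squares: improvement ~ selectivity + connectivity + const
A = np.column_stack([selectivity, connectivity, np.ones(20)])
coef, *_ = np.linalg.lstsq(A, improvement, rcond=None)
resid = improvement - A @ coef
r2 = 1.0 - resid.var() / improvement.var()
```

A high R² from such a fit is what licenses the paper's conclusion that sensory tuning and sensory-to-decision connectivity jointly account for the behavioral gain.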

  5. Promotion of self-directed learning using virtual patient cases.

    Science.gov (United States)

    Benedict, Neal; Schonder, Kristine; McGee, James

    2013-09-12

To assess the effectiveness of virtual patient cases to promote self-directed learning (SDL) in a required advanced therapeutics course. Virtual patient software based on a branched-narrative decision-making model was used to create complex patient case simulations to replace lecture-based instruction. Within each simulation, students used SDL principles to learn course objectives, apply their knowledge through clinical recommendations, and assess their progress through patient outcomes and faculty feedback linked to their individual decisions. Group discussions followed each virtual patient case to provide further interpretation, clarification, and clinical perspective. Students found the simulated patient cases to be organized (90%), enjoyable (82%), intellectually challenging (97%), and valuable to their understanding of course content (91%). Students further indicated that completion of the virtual patient cases prior to class permitted better use of class time (78%) and promoted SDL (84%). When assessment questions regarding material on postoperative nausea and vomiting were compared, no difference in scores was found between the students who attended the lecture on the material in 2011 (control group) and those who completed the virtual patient case on the material in 2012 (intervention group). Completion of virtual patient cases, designed to replace lectures and promote SDL, was overwhelmingly supported by students and proved to be as effective as traditional teaching methods.

  6. Radial basis function neural networks with sequential learning MRAN and its applications

    CERN Document Server

    Sundararajan, N; Wei Lu Ying

    1999-01-01

    This book presents in detail the newly developed sequential learning algorithm for radial basis function neural networks, which realizes a minimal network. This algorithm, created by the authors, is referred to as Minimal Resource Allocation Networks (MRAN). The book describes the application of MRAN in different areas, including pattern recognition, time series prediction, system identification, control, communication and signal processing. Benchmark problems from these areas have been studied, and MRAN is compared with other algorithms. In order to make the book self-contained, a review of t
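    The growth criterion at the heart of such sequential learning can be made concrete: a new hidden RBF unit is allocated only when the prediction error is large and the input is far from every existing centre; otherwise the existing weights are adapted. The NumPy sketch below is illustrative only; the actual MRAN algorithm also uses an extended Kalman filter update and a pruning step, omitted here, and all thresholds are hypothetical.

```python
import numpy as np

# Hedged sketch of a growing RBF network in the spirit of MRAN: allocate a
# hidden unit on "novel" inputs with large error, otherwise adapt weights.
class GrowingRBF:
    def __init__(self, e_min=0.1, d_min=0.3, width=0.4, lr=0.2):
        self.centers, self.weights = [], []
        self.e_min, self.d_min, self.width, self.lr = e_min, d_min, width, lr

    def _phi(self, x):
        # Gaussian activations of all hidden units for scalar input x
        return np.array([np.exp(-abs(x - c) ** 2 / self.width ** 2)
                         for c in self.centers])

    def predict(self, x):
        return float(self._phi(x) @ np.array(self.weights)) if self.centers else 0.0

    def observe(self, x, y):
        err = y - self.predict(x)
        near = min((abs(x - c) for c in self.centers), default=np.inf)
        if abs(err) > self.e_min and near > self.d_min:   # novelty: allocate a unit
            self.centers.append(x)
            self.weights.append(err)
        elif self.centers:                                # otherwise adapt weights
            self.weights = list(np.array(self.weights) + self.lr * err * self._phi(x))

net = GrowingRBF()
for x in np.linspace(0, 1, 50):
    net.observe(x, np.sin(2 * np.pi * x))
print(len(net.centers))  # a small, data-driven number of hidden units
```

Run on a sine sweep, the network allocates only a handful of units rather than one per sample, which is the "minimal network" property the book emphasizes.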

  7. Cooperative learning neural network output feedback control of uncertain nonlinear multi-agent systems under directed topologies

    Science.gov (United States)

    Wang, W.; Wang, D.; Peng, Z. H.

    2017-09-01

    Without assuming that the communication topologies among the neural network (NN) weights are undirected or that the states of each agent are measurable, the cooperative learning NN output feedback control problem is addressed for uncertain nonlinear multi-agent systems with identical structures in strict-feedback form. By establishing directed communication topologies among NN weights to share their learned knowledge, NNs with cooperative learning laws are employed to identify the uncertainties. By designing NN-based κ-filter observers to estimate the unmeasurable states, a new cooperative learning output feedback control scheme is proposed to guarantee that the system outputs can track nonidentical reference signals with bounded tracking errors. A simulation example is given to demonstrate the effectiveness of the theoretical results.

  8. Cocaine- and amphetamine-regulated transcript promotes the differentiation of mouse bone marrow-derived mesenchymal stem cells into neural cells

    Directory of Open Access Journals (Sweden)

    Jin Jiali

    2011-07-01

    Background Neural tissue has limited potential to self-renew after neurological damage. Cell therapy using BM-MSCs (bone marrow mesenchymal stromal cells) seems a promising approach for the treatment of neurological diseases. However, the neural differentiation of stem cells, which is influenced by numerous factors and interactions, is not well studied at present. Results In this work, we isolated and identified MSCs from mouse bone marrow. After co-culture with CART (0.4 nM) for six days, BM-MSCs differentiated into neuron-like cells, as observed by optical microscopy. Immunofluorescence demonstrated that the differentiated BM-MSCs expressed neural-specific markers including MAP-2, Nestin, NeuN and GFAP. In addition, double-labeled immunofluorescence showed that NeuN-positive cells co-localized with TH or ChAT, and Nissl staining revealed Nissl bodies in several differentiated cells. Furthermore, RT-PCR showed that BDNF and NGF were increased by CART. Conclusion This study demonstrated that CART could promote the differentiation of BM-MSCs into neural cells by increasing neurotrophic factors, including BDNF and NGF. Combined application of CART and BM-MSCs may be a promising cell-based therapy for neurological diseases.

  9. Maximizing Learning Strategies to Promote Learner Autonomy

    Directory of Open Access Journals (Sweden)

    Junaidi Mistar

    2001-01-01

    The ultimate goal of learning a new language is to be able to communicate in it. Encouraging a sense of responsibility on the part of the learners is crucial for training them to be proficient communicators. As such, understanding the strategies that they employ in acquiring language skills is important for developing ideas on how to promote learner autonomy. Research recently conducted with three different groups of learners of English at the tertiary education level in Malang indicated that they used metacognitive and social strategies at a high frequency, while memory, cognitive, compensation, and affective strategies were exercised at a medium frequency. This finding implies that the learners have acquired some degree of autonomy, because metacognitive strategies require them to independently plan their learning activities and evaluate their progress, and social strategies require them to independently enhance communicative interactions with other people. Further actions should then be taken to increase their learner autonomy, namely by intensifying practice of the other four strategy categories, which are not yet applied intensively.

  10. Neural networks, nativism, and the plausibility of constructivism.

    Science.gov (United States)

    Quartz, S R

    1993-09-01

    Recent interest in PDP (parallel distributed processing) models is due in part to the widely held belief that they challenge many of the assumptions of classical cognitive science. In the domain of language acquisition, for example, there has been much interest in the claim that PDP models might undermine nativism. Related arguments based on PDP learning have also been given against Fodor's anti-constructivist position--a position that has contributed to the widespread dismissal of constructivism. A limitation of many of the claims regarding PDP learning, however, is that the principles underlying this learning have not been rigorously characterized. In this paper, I examine PDP models from within the framework of Valiant's PAC (probably approximately correct) model of learning, now the dominant model in machine learning, and which applies naturally to neural network learning. From this perspective, I evaluate the implications of PDP models for nativism and Fodor's influential anti-constructivist position. In particular, I demonstrate that, contrary to a number of claims, PDP models are nativist in a robust sense. I also demonstrate that PDP models actually serve as a good illustration of Fodor's anti-constructivist position. While these results may at first suggest that neural network models in general are incapable of the sort of concept acquisition that is required to refute Fodor's anti-constructivist position, I suggest that there is an alternative form of neural network learning that demonstrates the plausibility of constructivism. This alternative form of learning is a natural interpretation of the constructivist position in terms of neural network learning, as it employs learning algorithms that incorporate the addition of structure in addition to weight modification schemes. 
By demonstrating that there is a natural and plausible interpretation of constructivism in terms of neural network learning, the position that nativism is the only plausible model of

  11. Deep learning with convolutional neural networks for EEG decoding and visualization.

    Science.gov (United States)

    Schirrmeister, Robin Tibor; Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio

    2017-11-01

    Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
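    Three of the ingredients credited above for the performance boost (exponential linear units, batch normalization, and the cropped training strategy) are simple to state concretely. The NumPy sketch below is a toy illustration with assumed shapes, not the authors' ConvNet architecture:

```python
import numpy as np

def elu(x, alpha=1.0):
    """Exponential linear unit: identity for x > 0, smooth saturation below."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def batch_norm(x, eps=1e-5):
    """Normalize each feature to zero mean, unit variance across the batch."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def crops(trial, crop_len, stride):
    """Cropped training: many overlapping time windows from one EEG trial."""
    _, t = trial.shape
    return [trial[:, s:s + crop_len] for s in range(0, t - crop_len + 1, stride)]

trial = np.random.default_rng(1).normal(size=(4, 100))  # 4 channels, 100 samples
batch = np.stack([c.ravel() for c in crops(trial, crop_len=50, stride=10)])
h = elu(batch_norm(batch))
print(batch.shape, h.shape)  # (6, 200) (6, 200): one trial became six examples
```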

  12. Deep learning with convolutional neural networks for EEG decoding and visualization

    Science.gov (United States)

    Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio

    2017-01-01

    Abstract Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end‐to‐end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end‐to‐end EEG analysis, but a better understanding of how to design and train ConvNets for end‐to‐end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task‐related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG‐based brain mapping. Hum Brain Mapp 38:5391–5420, 2017. © 2017 Wiley Periodicals, Inc. PMID:28782865

  13. Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks

    Science.gov (United States)

    Clune, Jeff

    2017-01-01

    A long-term goal of AI is to produce agents that can learn a diversity of skills throughout their lifetimes and continuously improve those skills via experience. A longstanding obstacle towards that goal is catastrophic forgetting, which is when learning new information erases previously learned information. Catastrophic forgetting occurs in artificial neural networks (ANNs), which have fueled most recent advances in AI. A recent paper proposed that catastrophic forgetting in ANNs can be reduced by promoting modularity, which can limit forgetting by isolating task information to specific clusters of nodes and connections (functional modules). While the prior work did show that modular ANNs suffered less from catastrophic forgetting, it was not able to produce ANNs that possessed task-specific functional modules, thereby leaving the main theory regarding modularity and forgetting untested. We introduce diffusion-based neuromodulation, which simulates the release of diffusing, neuromodulatory chemicals within an ANN that can modulate (i.e. up or down regulate) learning in a spatial region. On the simple diagnostic problem from the prior work, diffusion-based neuromodulation 1) induces task-specific learning in groups of nodes and connections (task-specific localized learning), which 2) produces functional modules for each subtask, and 3) yields higher performance by eliminating catastrophic forgetting. Overall, our results suggest that diffusion-based neuromodulation promotes task-specific localized learning and functional modularity, which can help solve the challenging, but important problem of catastrophic forgetting. PMID:29145413
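    The central mechanism (a diffusing chemical whose local concentration gates how strongly each connection learns) can be sketched directly. The snippet below is a hypothetical minimal illustration, not the paper's implementation; the positions, spread, and learning rate are invented:

```python
import numpy as np

# Each connection has a 2-D position; a task-specific neuromodulatory "source"
# releases a chemical whose concentration falls off with distance, scaling the
# local learning rate (up- or down-regulating learning in a spatial region).
def modulated_update(weights, positions, grads, source, base_lr=0.5, spread=0.3):
    dist = np.linalg.norm(positions - source, axis=1)
    concentration = np.exp(-(dist / spread) ** 2)     # diffusing chemical
    return weights - base_lr * concentration * grads  # spatially gated learning

positions = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])  # two near, one far
w = np.zeros(3)
g = np.ones(3)
w = modulated_update(w, positions, g, source=np.array([0.0, 0.0]))
print(w)  # connections near the source change; the distant one barely moves
```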

  14. Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks.

    Directory of Open Access Journals (Sweden)

    Roby Velez

    A long-term goal of AI is to produce agents that can learn a diversity of skills throughout their lifetimes and continuously improve those skills via experience. A longstanding obstacle towards that goal is catastrophic forgetting, which is when learning new information erases previously learned information. Catastrophic forgetting occurs in artificial neural networks (ANNs), which have fueled most recent advances in AI. A recent paper proposed that catastrophic forgetting in ANNs can be reduced by promoting modularity, which can limit forgetting by isolating task information to specific clusters of nodes and connections (functional modules). While the prior work did show that modular ANNs suffered less from catastrophic forgetting, it was not able to produce ANNs that possessed task-specific functional modules, thereby leaving the main theory regarding modularity and forgetting untested. We introduce diffusion-based neuromodulation, which simulates the release of diffusing, neuromodulatory chemicals within an ANN that can modulate (i.e. up or down regulate) learning in a spatial region. On the simple diagnostic problem from the prior work, diffusion-based neuromodulation 1) induces task-specific learning in groups of nodes and connections (task-specific localized learning), which 2) produces functional modules for each subtask, and 3) yields higher performance by eliminating catastrophic forgetting. Overall, our results suggest that diffusion-based neuromodulation promotes task-specific localized learning and functional modularity, which can help solve the challenging, but important problem of catastrophic forgetting.

  15. Learning, memory, and the role of neural network architecture.

    Directory of Open Access Journals (Sweden)

    Ann M Hermundstad

    2011-06-01

    The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
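    The curvature characterization described above can be probed numerically: a finite-difference second derivative of the error along a fixed direction estimates local error-landscape curvature. The toy example below uses a hypothetical one-layer model and invented data, purely to illustrate the probe, not the study's networks:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3))
y = np.sin(X @ np.array([1.0, -0.5, 0.3]))   # invented regression target

def error(w):
    """Mean squared error of a tiny one-layer tanh model at weights w."""
    return np.mean((np.tanh(X @ w) - y) ** 2)

w = rng.normal(size=3)                        # point on the error landscape
d = rng.normal(size=3)
d /= np.linalg.norm(d)                        # unit probe direction
h = 1e-3
# Central finite difference: second derivative of error along direction d
curvature = (error(w + h * d) - 2 * error(w) + error(w - h * d)) / h ** 2
print(curvature)  # positive near a minimum; large magnitude = sharp landscape
```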

  16. Win-stay-lose-learn promotes cooperation in the prisoner's dilemma game with voluntary participation.

    Directory of Open Access Journals (Sweden)

    Chen Chu

    Voluntary participation, demonstrated to be a simple yet effective mechanism to promote persistent cooperative behavior, has been extensively studied. It has also been verified that the aspiration-based win-stay-lose-learn strategy updating rule promotes the evolution of cooperation. Inspired by this well-known fact, we combine the win-stay-lose-learn updating rule with voluntary participation: players maintain their strategies when they are satisfied; otherwise, they attempt to imitate the strategy of one randomly chosen neighbor. We find that this mechanism maintains persistent cooperative behavior and, under certain conditions, even further promotes the evolution of cooperation.
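    Read concretely, the combined rule is: a satisfied player (payoff at or above its aspiration) keeps its strategy, a dissatisfied player copies a random neighbour, and a loner strategy supplies the voluntary-participation exit. The ring-lattice toy below uses hypothetical payoff values and is a sketch of the mechanism, not the paper's spatial model:

```python
import numpy as np

# Strategies: 0 = cooperate, 1 = defect, 2 = loner (voluntary non-participation)
R, S, T, P, SIGMA = 1.0, 0.0, 1.5, 0.0, 0.3   # invented PD + loner payoffs
PAYOFF = np.array([[R,     S,     SIGMA],
                   [T,     P,     SIGMA],
                   [SIGMA, SIGMA, SIGMA]])

def step(strategies, aspiration, rng):
    n = len(strategies)
    # Each player's payoff: average over its two ring neighbours
    payoffs = np.array([
        (PAYOFF[strategies[i], strategies[(i - 1) % n]]
         + PAYOFF[strategies[i], strategies[(i + 1) % n]]) / 2
        for i in range(n)
    ])
    new = strategies.copy()
    for i in range(n):
        if payoffs[i] < aspiration:            # "lose": learn from a neighbour
            j = (i + rng.choice([-1, 1])) % n
            new[i] = strategies[j]
        # else "win": stay with the current strategy
    return new

rng = np.random.default_rng(0)
s = np.array([0, 1, 0, 2, 1, 0])
for _ in range(20):
    s = step(s, aspiration=0.4, rng=rng)
print(s)
```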

  17. DCS-Neural-Network Program for Aircraft Control and Testing

    Science.gov (United States)

    Jorgensen, Charles C.

    2006-01-01

    A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
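    The growth mechanism described above (node insertion driven by accumulated error, with Kohonen-style position adjustment) can be caricatured in one dimension. The sketch below is a loose illustration of the idea, not the NASA program; all constants are invented:

```python
import numpy as np

# Start with two nodes quantizing [0, 1]; move the winning node toward each
# input (Kohonen-style), accumulate error per node, and periodically insert a
# new node where accumulated error is largest.
rng = np.random.default_rng(3)
data = rng.uniform(0, 1, size=500)
nodes = [0.2, 0.8]
acc = [0.0, 0.0]

for t, x in enumerate(data, 1):
    i = int(np.argmin([abs(x - n) for n in nodes]))  # winning (nearest) node
    acc[i] += abs(x - nodes[i])                      # accumulated error
    nodes[i] += 0.05 * (x - nodes[i])                # Kohonen-style adjustment
    if t % 100 == 0:                                 # grow where error piles up
        j = int(np.argmax(acc))
        nodes.append(nodes[j] + 0.01)
        acc = [0.0] * len(nodes)

print(len(nodes))  # 7: grew from the initial 2 nodes
```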

  18. Algebraic and adaptive learning in neural control systems

    Science.gov (United States)

    Ferrari, Silvia

    A systematic approach is developed for designing adaptive and reconfigurable nonlinear control systems that are applicable to plants modeled by ordinary differential equations. The nonlinear controller comprising a network of neural networks is taught using a two-phase learning procedure realized through novel techniques for initialization, on-line training, and adaptive critic design. A critical observation is that the gradients of the functions defined by the neural networks must equal corresponding linear gain matrices at chosen operating points. On-line training is based on a dual heuristic adaptive critic architecture that improves control for large, coupled motions by accounting for actual plant dynamics and nonlinear effects. An action network computes the optimal control law; a critic network predicts the derivative of the cost-to-go with respect to the state. Both networks are algebraically initialized based on prior knowledge of satisfactory pointwise linear controllers and continue to adapt on line during full-scale simulations of the plant. On-line training takes place sequentially over discrete periods of time and involves several numerical procedures. A backpropagating algorithm called Resilient Backpropagation is modified and successfully implemented to meet these objectives, without excessive computational expense. This adaptive controller is as conservative as the linear designs and as effective as a global nonlinear controller. The method is successfully implemented for the full-envelope control of a six-degree-of-freedom aircraft simulation. The results show that the on-line adaptation brings about improved performance with respect to the initialization phase during aircraft maneuvers that involve large-angle and coupled dynamics, and parameter variations.

  19. Memory and learning in a class of neural network models

    International Nuclear Information System (INIS)

    Wallace, D.J.

    1986-01-01

    The author discusses memory and learning properties of the neural network model now identified with Hopfield's work. The model, how it attempts to abstract some key features of the nervous system, and the sense in which learning and memory are identified in the model are described. A brief report is presented on the important role of phase transitions in the model and their implications for memory capacity. The results of numerical simulations obtained using the ICL Distributed Array Processors at Edinburgh are presented. A summary is presented on how the fraction of images which are perfectly stored depends on the number of nodes and the number of nominal images which one attempts to store using the prescription in Hopfield's paper. Results are presented on the second phase transition in the model, which corresponds to almost total loss of storage capacity as the number of nominal images is increased. Results are given on the performance of a new iterative algorithm for exact storage of up to N images in an N-node model.
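    The storage prescription referred to above is the Hebbian outer-product rule from Hopfield's paper. A compact NumPy sketch (illustrative; not the ICL Distributed Array Processor simulation code) shows storage of nominal images and recall from a corrupted probe:

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product storage of ±1 patterns; zero self-coupling."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates until a fixed point (or step limit)."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1],
])
W = store(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]             # corrupt one bit of the first image
recovered = recall(W, noisy)
print(recovered)                 # recovers patterns[0]
```

With only two orthogonal patterns in eight nodes the network is far below the capacity limits the abstract discusses, so recall is exact.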

  20. Convolutional neural network with transfer learning for rice type classification

    Science.gov (United States)

    Patel, Vaibhav Amit; Joshi, Manjunath V.

    2018-04-01

    Presently, rice type is identified manually by humans, which is time-consuming and error prone. Therefore, there is a need to do this by machine, which makes it faster with greater accuracy. This paper proposes a deep learning based method for classification of rice types. We propose two methods to classify the rice types. In the first method, we train a deep convolutional neural network (CNN) using the given segmented rice images. In the second method, we train a combination of a pretrained VGG16 network and the proposed method, while using transfer learning in which the weights of a pretrained network are used to achieve better accuracy. Our approach can also be used for classification of rice grain as broken or fine. We train a 5-class model for classifying rice types using 4000 training images and another 2-class model for the classification of broken and normal rice using 1600 training images. We observe that, despite the rice images being distinct from typical ImageNet images, pretraining our architecture on ImageNet data boosts classification accuracy significantly.
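    The transfer-learning arrangement in the second method (a pretrained feature extractor whose weights are reused, with only a new classifier trained on the target data) can be sketched without any deep-learning framework. Below, a random frozen projection stands in for VGG16's ImageNet-trained convolutional stack, and the data and dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
W_frozen = rng.normal(size=(64, 16))           # "pretrained" weights: never updated

def features(x):
    return np.maximum(0.0, x @ W_frozen)       # frozen ReLU feature extractor

X = rng.normal(size=(200, 64))                 # stand-in for rice image vectors
v_hidden = rng.normal(size=16)
y = (features(X) @ v_hidden > 0).astype(float) # a learnable 2-class labeling

w_head = np.zeros(16)                          # only the classifier head trains
for _ in range(300):
    logits = np.clip(features(X) @ w_head, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-logits))          # sigmoid probabilities
    w_head += 0.05 * features(X).T @ (y - p) / len(X)  # logistic gradient step

acc = np.mean((features(X) @ w_head > 0) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

In the real setting, the frozen part would be VGG16's convolutional stack and only the final dense layers would be retrained on the rice images.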

  1. Psychological theory and pedagogical effectiveness: the learning promotion potential framework.

    Science.gov (United States)

    Tomlinson, Peter

    2008-12-01

    After a century of educational psychology, eminent commentators are still lamenting problems besetting the appropriate relating of psychological insights to teaching design, a situation not helped by the persistence of crude assumptions concerning the nature of pedagogical effectiveness. To propose an analytical or meta-theoretical framework based on the concept of learning promotion potential (LPP) as a basis for understanding the basic relationship between psychological insights and teaching strategies, and to draw out implications for psychology-based pedagogical design, development and research. This is a theoretical and meta-theoretical paper relying mainly on conceptual analysis, though also calling on psychological theory and research. Since teaching consists essentially in activity designed to promote learning, it follows that a teaching strategy has the potential in principle to achieve particular kinds of learning gains (LPP) to the extent that it embodies or stimulates the relevant learning processes on the part of learners and enables the teacher's functions of on-line monitoring and assistance for such learning processes. Whether a teaching strategy actually does realize its LPP by way of achieving its intended learning goals depends also on the quality of its implementation, in conjunction with other factors in the situated interaction that teaching always involves. The core role of psychology is to provide well-grounded indication of the nature of such learning processes and the teaching functions that support them, rather than to directly generate particular ways of teaching. A critically eclectic stance towards potential sources of psychological insight is argued for. Applying this framework, the paper proposes five kinds of issue to be attended to in the design and evaluation of psychology-based pedagogy. 
Other work proposing comparable ideas is briefly reviewed, with particular attention to similarities and a key difference with the ideas of Oser

  2. Language Learning Enhanced by Massive Multiple Online Role-Playing Games (MMORPGs) and the Underlying Behavioral and Neural Mechanisms

    OpenAIRE

    Zhang, Yongjun; Song, Hongwen; Liu, Xiaoming; Tang, Dinghong; Chen, Yue-e; Zhang, Xiaochu

    2017-01-01

    Massive Multiple Online Role-Playing Games (MMORPGs) have increased in popularity among children, juveniles, and adults since MMORPGs’ appearance in this digital age. MMORPGs can be applied to enhancing language learning, which is drawing researchers’ attention from different fields and many studies have validated MMORPGs’ positive effect on language learning. However, there are few studies on the underlying behavioral or neural mechanism of such effect. This paper reviews the educational app...

  3. The impact of iconic gestures on foreign language word learning and its neural substrate.

    Science.gov (United States)

    Macedonia, Manuela; Müller, Karsten; Friederici, Angela D

    2011-06-01

    Vocabulary acquisition represents a major challenge in foreign language learning. Research has demonstrated that gestures accompanying speech have an impact on memory for verbal information in the speakers' mother tongue and, as recently shown, also in foreign language learning. However, the neural basis of this effect remains unclear. In a within-subjects design, we compared learning of novel words coupled with iconic and meaningless gestures. Iconic gestures helped learners to significantly better retain the verbal material over time. After the training, participants' brain activity was registered by means of fMRI while performing a word recognition task. Brain activations to words learned with iconic and with meaningless gestures were contrasted. We found activity in the premotor cortices for words encoded with iconic gestures. In contrast, words encoded with meaningless gestures elicited a network associated with cognitive control. These findings suggest that memory performance for newly learned words is not driven by the motor component as such, but by the motor image that matches an underlying representation of the word's semantics. Copyright © 2010 Wiley-Liss, Inc.

  4. Do Convolutional Neural Networks Learn Class Hierarchy?

    Science.gov (United States)

    Bilal, Alsallakh; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2018-01-01

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation to CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes but also shapes the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the later layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  5. Promoting Social Change through Service-Learning in the Curriculum

    Science.gov (United States)

    Bowen, Glenn A.

    2014-01-01

    Service-learning is a high-impact pedagogical strategy embraced by higher education institutions. Direct service based on a charity paradigm tends to be the norm, while little attention is paid to social change-oriented service. This article offers suggestions for incorporating social justice education into courses designed to promote social…

  6. HIERtalker: A default hierarchy of high order neural networks that learns to read English aloud

    Energy Technology Data Exchange (ETDEWEB)

    An, Z.G.; Mniszewski, S.M.; Lee, Y.C.; Papcun, G.; Doolen, G.D.

    1988-01-01

    A new learning algorithm based on a default hierarchy of high order neural networks has been developed that is able to generalize as well as handle exceptions. It learns the ''building blocks'' or clusters of symbols in a stream that appear repeatedly and convey certain messages. The default hierarchy prevents a combinatoric explosion of rules. A simulator of such a hierarchy, HIERtalker, has been applied to the conversion of English words to phonemes. Achieved accuracy is 99% for trained words and ranges from 76% to 96% for sets of new words. 8 refs., 4 figs., 1 tab.
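    A default hierarchy can be illustrated with a toy letter-to-sound transcriber: specific high-order rules (longer letter clusters, the "building blocks") are tried before general single-letter defaults, so exceptions coexist with broad rules without a combinatoric explosion. All rules and phoneme symbols below are invented for illustration:

```python
# Most-specific rules first; the first match wins, falling back to defaults.
RULES = [
    ("tion", "SH AH N"),   # specific "building block" (exception handler)
    ("th",   "TH"),
    ("t",    "T"),         # general default for the letter t
]

def transcribe(word):
    out, i = [], 0
    while i < len(word):
        for pattern, phones in RULES:          # try the hierarchy top-down
            if word.startswith(pattern, i):
                out.append(phones)
                i += len(pattern)
                break
        else:
            out.append(word[i].upper())        # last-resort letter-as-is default
            i += 1
    return " ".join(out)

print(transcribe("nation"))   # the "tion" block fires as one unit
print(transcribe("ten"))      # falls through to the general "t" rule
```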

  7. Promoting Female Students' Learning Motivation towards Science by Exercising Hands-On Activities

    Science.gov (United States)

    Wen-jin, Kuo; Chia-ju, Liu; Shi-an, Leou

    2012-01-01

    The purpose of this study is to design different hands-on science activities and investigate which activities could better promote female students' learning motivation towards science. This study conducted three types of science activities which contains nine hands-on activities, an experience scale and a learning motivation scale for data…

  8. Neural network error correction for solving coupled ordinary differential equations

    Science.gov (United States)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
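    The idea (learn the local error of a numerical integrator, then add the learned correction back) can be shown on a toy problem. Below, forward Euler stands in for Runge-Kutta, a harmonic oscillator stands in for the MD model, and the "network" is reduced to a least-squares linear map; all of these substitutions are ours, not the paper's setup:

```python
import numpy as np

def f(state):
    x, v = state
    return np.array([v, -x])                 # simple harmonic oscillator

def euler(state, h):
    return state + h * f(state)              # crude one-step integrator

def exact(state, h):
    x, v = state                             # exact solution is a rotation
    c, s = np.cos(h), np.sin(h)
    return np.array([x * c + v * s, -x * s + v * c])

h = 0.1
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 2))
errors = np.array([exact(s, h) - euler(s, h) for s in states])

# "Neural network" in its simplest form: a linear map from state to the
# integrator's one-step error, fit by least squares.
W, *_ = np.linalg.lstsq(states, errors, rcond=None)

test_state = np.array([1.0, 0.0])
plain = euler(test_state, h)
corrected = euler(test_state, h) + test_state @ W   # add learned correction
print(np.abs(exact(test_state, h) - plain).max(),
      np.abs(exact(test_state, h) - corrected).max())
```

For this oscillator the one-step Euler error happens to be exactly linear in the state, so the learned correction is near-perfect; a real MD error model would need the nonlinear network the abstract describes.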

  9. Comment on 'Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study'.

    Science.gov (United States)

    Valdes, Gilmer; Interian, Yannet

    2018-03-15

    The application of machine learning (ML) presents tremendous opportunities for the field of oncology, thus we read 'Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study' with great interest. In this article, the authors used state of the art techniques: a pre-trained convolutional neural network (VGG-16 CNN), transfer learning, data augmentation, drop out and early stopping, all of which are directly responsible for the success and the excitement that these algorithms have created in other fields. We believe that the use of these techniques can offer tremendous opportunities in the field of Medical Physics and as such we would like to praise the authors for their pioneering application to the field of Radiation Oncology. That being said, given that the field of Medical Physics has unique characteristics that differentiate us from those fields where these techniques have been applied successfully, we would like to raise some points for future discussion and follow up studies that could help the community understand the limitations and nuances of deep learning techniques.

  10. Habituation in non-neural organisms: evidence from slime moulds.

    Science.gov (United States)

    Boisseau, Romain P; Vogel, David; Dussutour, Audrey

    2016-04-27

    Learning, defined as a change in behaviour evoked by experience, has hitherto been investigated almost exclusively in multicellular neural organisms. Evidence for learning in non-neural multicellular organisms is scant, and only a few unequivocal reports of learning have been described in single-celled organisms. Here we demonstrate habituation, an unmistakable form of learning, in the non-neural organism Physarum polycephalum. In our experiment, using chemotaxis as the behavioural output and quinine or caffeine as the stimulus, we showed that P. polycephalum learnt to ignore quinine or caffeine when the stimuli were repeated, but responded again when the stimulus was withheld for a certain time. Our results meet the principal criteria that have been used to demonstrate habituation: responsiveness decline and spontaneous recovery. To distinguish habituation from sensory adaptation or motor fatigue, we also show stimulus specificity. Our results point to the diversity of organisms lacking neurons, which likely display a hitherto unrecognized capacity for learning, and suggest that slime moulds may be an ideal model system in which to investigate fundamental mechanisms underlying learning processes. Besides, documenting learning in non-neural organisms such as slime moulds is centrally important to a comprehensive, phylogenetic understanding of when and where in the tree of life the earliest manifestations of learning evolved. © 2016 The Author(s).

  11. Evaluating the negative or valuing the positive? Neural mechanisms supporting feedback-based learning across development.

    Science.gov (United States)

    van Duijvenvoorde, Anna C K; Zanolie, Kiki; Rombouts, Serge A R B; Raijmakers, Maartje E J; Crone, Eveline A

    2008-09-17

    How children learn from positive and negative performance feedback lies at the foundation of successful learning and is therefore of great importance for educational practice. In this study, we used functional magnetic resonance imaging (fMRI) to examine the neural developmental changes related to feedback-based learning when performing a rule search and application task. Behavioral results from three age groups (8-9, 11-13, and 18-25 years of age) demonstrated that, compared with adults, 8- to 9-year-old children performed disproportionally more inaccurately after receiving negative feedback relative to positive feedback. Additionally, imaging data pointed toward a qualitative difference in how children and adults use performance feedback. That is, dorsolateral prefrontal cortex and superior parietal cortex were more active after negative feedback for adults, but after positive feedback for children (8-9 years of age). For 11- to 13-year-olds, these regions did not show differential feedback sensitivity, suggesting that the transition occurs around this age. Pre-supplementary motor area/anterior cingulate cortex, in contrast, was more active after negative feedback in both 11- to 13-year-olds and adults, but not 8- to 9-year-olds. Together, the current data show that cognitive control areas are differentially engaged during feedback-based learning across development. Adults engage these regions after signals of response adjustment (i.e., negative feedback). Young children engage these regions after signals of response continuation (i.e., positive feedback). The neural activation patterns found in 11- to 13-year-olds indicate a transition around this age toward an increased influence of negative feedback on performance adjustment. This is the first developmental fMRI study to compare qualitative changes in brain activation during feedback learning across distinct stages of development.

  12. Embodied learning of a generative neural model for biological motion perception and inference.

    Science.gov (United States)

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V

    2015-01-01

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.

  13. Embodied Learning of a Generative Neural Model for Biological Motion Perception and Inference

    Directory of Open Access Journals (Sweden)

    Fabian Schrodt

    2015-07-01

    Full Text Available Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.

  14. Neural constructivism or self-organization?

    NARCIS (Netherlands)

    van der Maas, H.L.J.; Molenaar, P.C.M.

    2000-01-01

    Comments on the article by S. R. Quartz et al (see record 1998-00749-001) which discussed the constructivist perspective of interaction between cognition and neural processes during development and consequences for theories of learning. Three arguments are given to show that neural constructivism

  15. Pol II promoter prediction using characteristic 4-mer motifs: a machine learning approach

    Directory of Open Access Journals (Sweden)

    Shoyaib Mohammad

    2008-10-01

    Full Text Available Abstract Background Eukaryotic promoter prediction using computational analysis techniques is one of the most difficult jobs in computational genomics and is essential for constructing and understanding genetic regulatory networks. The increased availability of sequence data for various eukaryotic organisms in recent years has necessitated better tools and techniques for the prediction and analysis of promoters in eukaryotic sequences. Many promoter prediction methods and tools have been developed to date but they have yet to provide acceptable predictive performance. One obvious criterion to improve on current methods is to devise a better system for selecting appropriate features of promoters that distinguish them from non-promoters. Secondly, improved performance can be achieved by enhancing the predictive ability of the machine learning algorithms used. Results In this paper, a novel approach is presented in which 128 4-mer motifs in conjunction with a non-linear machine-learning algorithm utilising a Support Vector Machine (SVM) are used to distinguish between promoter and non-promoter DNA sequences. By applying this approach to plant, Drosophila, human, mouse and rat sequences, the classification model showed 7-fold cross-validation percentage accuracies of 83.81%, 94.82%, 91.25%, 90.77% and 82.35% respectively. The high sensitivity and specificity values of 0.86 and 0.90 for plant; 0.96 and 0.92 for Drosophila; 0.88 and 0.92 for human; 0.78 and 0.84 for mouse and 0.82 and 0.80 for rat demonstrate that this technique is less prone to false positive results and exhibits better performance than many other tools. Moreover, this model successfully identifies the location of the promoter using a TATA weight matrix. Conclusion The high sensitivity and specificity indicate that 4-mer frequencies in conjunction with supervised machine-learning methods can be beneficial in the identification of RNA pol II promoters compared with other methods. This
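    The feature-extraction step described in this abstract can be sketched briefly: map each DNA sequence to a vector of 4-mer frequencies, which would then feed an SVM classifier. The paper selects 128 characteristic motifs; the sketch below simply enumerates all 256 4-mers, and the example sequence is made up.

```python
from itertools import product

# Sketch of the feature-extraction step: map a DNA sequence to 4-mer
# frequencies. The paper uses a selected set of 128 motifs; here all 256
# 4-mers are enumerated for simplicity, and the example sequence is
# made up. Feeding the vectors to an SVM classifier is left out.

KMERS = [''.join(p) for p in product('ACGT', repeat=4)]   # 256 motifs
INDEX = {k: i for i, k in enumerate(KMERS)}

def four_mer_frequencies(seq):
    """Normalized 4-mer frequency vector of `seq`; windows containing
    other characters (e.g. N) are skipped."""
    seq = seq.upper()
    counts = [0] * len(KMERS)
    windows = 0
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in INDEX:
            counts[INDEX[kmer]] += 1
            windows += 1
    return [c / windows for c in counts] if windows else counts

vec = four_mer_frequencies("TATAAAGGCTATAAA")   # toy promoter-like string
```

    The resulting fixed-length vectors are what a kernel classifier such as an SVM would consume, one vector per candidate sequence.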

  16. Behavioural and neural basis of anomalous motor learning in children with autism.

    Science.gov (United States)

    Marko, Mollie K; Crocetti, Deana; Hulst, Thomas; Donchin, Opher; Shadmehr, Reza; Mostofsky, Stewart H

    2015-03-01

    Autism spectrum disorder is a developmental disorder characterized by deficits in social and communication skills and repetitive and stereotyped interests and behaviours. Although not part of the diagnostic criteria, individuals with autism experience a host of motor impairments, potentially due to abnormalities in how they learn motor control throughout development. Here, we used behavioural techniques to quantify motor learning in autism spectrum disorder, and structural brain imaging to investigate the neural basis of that learning in the cerebellum. Twenty children with autism spectrum disorder and 20 typically developing control subjects, aged 8-12, made reaching movements while holding the handle of a robotic manipulandum. In random trials the reach was perturbed, resulting in errors that were sensed through vision and proprioception. The brain learned from these errors and altered the motor commands on the subsequent reach. We measured learning from error as a function of the sensory modality of that error, and found that children with autism spectrum disorder outperformed typically developing children when learning from errors that were sensed through proprioception, but underperformed typically developing children when learning from errors that were sensed through vision. Previous work had shown that this learning depends on the integrity of a region in the anterior cerebellum. Here we found that the anterior cerebellum, extending into lobule VI, and parts of lobule VIII were smaller than normal in children with autism spectrum disorder, with a volume that was predicted by the pattern of learning from visual and proprioceptive errors. We suggest that the abnormal patterns of motor learning in children with autism spectrum disorder, showing an increased sensitivity to proprioceptive error and a decreased sensitivity to visual error, may be associated with abnormalities in the cerebellum. © The Author (2015). Published by Oxford University Press on behalf

  17. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.

  18. Amygdala subsystems and control of feeding behavior by learned cues.

    Science.gov (United States)

    Petrovich, Gorica D; Gallagher, Michela

    2003-04-01

    A combination of behavioral studies and a neural systems analysis approach has proven fruitful in defining the role of the amygdala complex and associated circuits in fear conditioning. The evidence presented in this chapter suggests that this approach is also informative in the study of other adaptive functions that involve the amygdala. In this chapter we present a novel model to study learning in an appetitive context. Furthermore, we demonstrate that long-recognized connections between the amygdala and the hypothalamus play a crucial role in allowing learning to modulate feeding behavior. In the first part we describe a behavioral model for motivational learning. In this model a cue that acquires motivational properties through pairings with food delivery when an animal is hungry can override satiety and promote eating in sated rats. Next, we present evidence that a specific amygdala subsystem (basolateral area) is responsible for allowing such learned cues to control eating (override satiety and promote eating in sated rats). We also show that basolateral amygdala mediates these actions via connectivity with the lateral hypothalamus. Lastly, we present evidence that the amygdalohypothalamic system is specific for the control of eating by learned motivational cues, as it does not mediate another function that depends on intact basolateral amygdala, namely, the ability of a conditioned cue to support new learning based on its acquired value. Knowledge about neural systems through which food-associated cues specifically control feeding behavior provides a defined model for the study of learning. In addition, this model may be informative for understanding mechanisms of maladaptive aspects of learned control of eating that contribute to eating disorders and more moderate forms of overeating.

  19. Neural network to diagnose lining condition

    Science.gov (United States)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at iron and steel works. The authors describe the neural network structure and software that are designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented. The authors note the low learning and classification errors of the proposed neural networks. To realize the proposed neural network, specialized software has been developed.

  20. Promoting Cooperative Learning in the Classroom: Comparing Explicit and Implicit Training Techniques

    Directory of Open Access Journals (Sweden)

    Anne Elliott

    2003-07-01

    Full Text Available In this study, we investigated whether providing 4th and 5th-grade students with explicit instruction in prerequisite cooperative-learning skills and techniques would enhance their academic performance and promote in them positive attitudes towards cooperative learning. Overall, students who received explicit training outperformed their peers on both the unit project and test and presented more favourable attitudes towards cooperative learning. The findings of this study support the use of explicitly instructing students about the components of cooperative learning prior to engaging in collaborative activities. Implications for teacher-education are discussed.

  1. IMPLEMENTATION OF NEURAL - CRYPTOGRAPHIC SYSTEM USING FPGA

    Directory of Open Access Journals (Sweden)

    KARAM M. Z. OTHMAN

    2011-08-01

    Full Text Available Modern cryptography techniques are virtually unbreakable. As the Internet and other forms of electronic communication become more prevalent, electronic security is becoming increasingly important. Cryptography is used to protect e-mail messages, credit card information, and corporate data. The cryptography system designed here is a conventional one that uses a single key for both encryption and decryption. The chosen algorithm is a stream cipher, which encrypts one bit at a time. The central problem in stream-cipher cryptography is the difficulty of generating a long unpredictable sequence of binary signals from a short random key. Pseudo-random number generators (PRNGs) have been widely used to construct this key sequence. Here, the pseudo-random number generator was designed using an artificial neural network (ANN), which provides the nonlinearity needed to improve the statistical randomness properties of the generator. The network was trained with the backpropagation learning algorithm; the learning process was carried out in a Matlab software implementation to obtain the final weights. The trained neural network was then implemented on a field programmable gate array (FPGA).
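    A hedged sketch of the overall construction this abstract describes: a small fixed feedforward network expands a short key into a keystream, and the stream cipher XORs keystream bits with plaintext bits. The weights, state update and sizes below are invented for illustration; the paper trains its network with backpropagation in Matlab and implements it on an FPGA.

```python
import math

# Illustrative sketch: a fixed-weight feedforward network as a keystream
# generator for a stream cipher. XOR with the keystream both encrypts and
# decrypts. All weights and the state-update rule are made-up assumptions.

def keystream_bit(state, weights):
    """One network evaluation -> one pseudo-random bit."""
    hidden = [math.tanh(sum(w * s for w, s in zip(row, state)))
              for row in weights]
    out = sum(hidden)
    return 1 if (out - math.floor(out)) >= 0.5 else 0

def generate_keystream(key, n_bits):
    state = [float(b) for b in key]               # short key seeds the state
    weights = [[0.7, -1.3, 2.1, 0.9],
               [-2.2, 1.8, -0.4, 1.1],
               [1.5, -0.6, 0.8, -1.9]]
    stream = []
    for _ in range(n_bits):
        bit = keystream_bit(state, weights)
        stream.append(bit)
        state = state[1:] + [state[0] + bit + 0.37]   # crude state update
    return stream

def xor_cipher(bits, key):
    """Encrypts and decrypts alike, since XOR is its own inverse."""
    ks = generate_keystream(key, len(bits))
    return [b ^ k for b, k in zip(bits, ks)]

key = [1, 0, 1, 1]
plaintext = [1, 0, 0, 1, 1, 1, 0, 1]
ciphertext = xor_cipher(plaintext, key)
recovered = xor_cipher(ciphertext, key)
```

    The roundtrip property (decrypting the ciphertext with the same key recovers the plaintext) holds for any deterministic keystream generator, which is why the keystream's statistical quality, not the XOR step, carries the security burden.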

  2. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology. Simons Center for Systems Biology.
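    The higher-order associative memory described in this record admits a short numerical sketch: memories interact through a rectified polynomial F(x) = max(x, 0)^n rather than the quadratic Hopfield energy, and each neuron flips to whichever sign lowers the energy. The two hand-picked orthogonal patterns and the choice n = 3 below are toy assumptions.

```python
import numpy as np

# Sketch of a dense associative memory with higher-order interactions:
# energy contributions F(xi^mu . sigma) with F(x) = max(x, 0)**n, where
# larger n allows more memories to be stored. The two orthogonal patterns
# below are hand-picked toy memories.

def F(x, n=3):
    return np.maximum(x, 0.0) ** n

def update(sigma, memories, n=3):
    """One asynchronous sweep: each neuron takes the sign that lowers energy."""
    sigma = sigma.copy()
    for i in range(len(sigma)):
        plus, minus = sigma.copy(), sigma.copy()
        plus[i], minus[i] = 1.0, -1.0
        diff = np.sum(F(memories @ plus, n) - F(memories @ minus, n))
        sigma[i] = 1.0 if diff >= 0 else -1.0
    return sigma

memories = np.array([[ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
                     [ 1., -1.,  1., -1.,  1., -1.,  1., -1.]])

probe = memories[0].copy()
probe[0] = -1.0                     # corrupt one bit of the first memory
recalled = update(probe, memories)  # falls back onto memories[0]
```

    With rectified-polynomial interactions the corrupted probe snaps back to the stored pattern in a single sweep, and each stored pattern is itself a fixed point of the update.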

  3. How synapses can enhance sensibility of a neural network

    Science.gov (United States)

    Protachevicz, P. R.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Baptista, M. S.; Viana, R. L.; Lameu, E. L.; Macau, E. E. N.; Batista, A. M.

    2018-02-01

    In this work, we study the dynamic range in a neural network modelled by cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses. Chemical synapses have an intrinsic time-delay and are susceptible to parameter variations guided by learning Hebbian rules of behaviour. The learning rules are related to neuroplasticity that describes change to the neural connections in the brain. Our results show that chemical synapses can abruptly enhance sensibility of the neural network, a manifestation that can become even more predominant if learning rules of evolution are applied to the chemical synapses.

  4. Learning free energy landscapes using artificial neural networks.

    Science.gov (United States)

    Sidky, Hythem; Whitmer, Jonathan K

    2018-03-14

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.

  5. Learning free energy landscapes using artificial neural networks

    Science.gov (United States)

    Sidky, Hythem; Whitmer, Jonathan K.

    2018-03-01

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.

  6. Contribution diversity and incremental learning promote cooperation in public goods games

    Science.gov (United States)

    Liu, Penghui; Liu, Jing

    2017-11-01

    Understanding the evolution of cooperation in nature has long been a challenge, and how to promote cooperation in public goods games (PGG) has attracted much attention recently. Social diversity has been found to help explain the emergence of cooperation in the absence of reputation and punishment. However, further refinement of how individuals reallocate their contribution to each PGG remains an open question. Moreover, individuals in existing works mostly teach or learn from neighbors according to their payoff in the last generation only, whereas individuals in reality prefer to learn from others with good long-term performance. Therefore, in this paper, a new contribution diversity (CD) is designed and incremental learning (IL) is introduced. We investigate how these two mechanisms may influence the evolution of cooperation in PGG. Based on the simulation results, we found that both CD and IL can promote cooperation in PGGs. Moreover, when cooperators waver in their strategy, CD may fail to reallocate individuals' contributions properly. However, IL proves effective in stabilizing the faith of cooperators, and cooperators under IL show a long-term advantage over defectors in terms of benefits. We therefore further find that IL and CD mutually benefit each other in promoting cooperation, as CD can reasonably adjust the investment of cooperators while IL provides more information to CD.

  7. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
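    One ingredient of the DNC is easy to isolate as a sketch: content-based read addressing. A key emitted by the controller is compared to every memory row by cosine similarity, sharpened by a strength parameter and normalized by a softmax, and the read vector is the weighted sum of rows. The memory contents, key, and strength value below are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of DNC-style content-based read addressing: compare a key
# against every memory row by cosine similarity, sharpen with a strength
# beta, normalize with a softmax, and read the weighted sum of rows.
# Memory contents, key and beta are illustrative values.

def content_weighting(M, key, beta=10.0):
    """Softmax over cosine similarities between `key` and rows of M."""
    norms = np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-8
    sim = (M @ key) / norms
    z = beta * sim
    e = np.exp(z - np.max(z))            # numerically stable softmax
    return e / e.sum()

def read(M, key, beta=10.0):
    """Differentiable read: weighted combination of memory rows."""
    return content_weighting(M, key, beta) @ M

M = np.array([[1.0, 0.0, 0.0, 2.0],
              [0.0, 1.0, 0.0, -1.0],
              [0.0, 0.0, 1.0, 0.5]])
key = np.array([1.0, 0.0, 0.0, 2.0])     # exact copy of row 0
r = read(M, key)                         # close to M[0] for large beta
```

    Because the weighting is a softmax rather than a hard argmax, the whole read path stays differentiable, which is what lets the memory be trained end to end by gradient descent.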

  8. Fibronectin promotes differentiation of neural crest progenitors endowed with smooth muscle cell potential

    International Nuclear Information System (INIS)

    Costa-Silva, Bruno; Coelho da Costa, Meline; Melo, Fernanda Rosene; Neves, Cynara Mendes; Alvarez-Silva, Marcio; Calloni, Giordano Wosgrau; Trentin, Andrea Goncalves

    2009-01-01

    The neural crest (NC) is a model system used to investigate multipotency during vertebrate development. Environmental factors control NC cell fate decisions. Despite the well-known influence of extracellular matrix molecules in NC cell migration, the issue of whether they also influence NC cell differentiation has not been addressed at the single cell level. By analyzing mass and clonal cultures of mouse cephalic and quail trunk NC cells, we show for the first time that fibronectin (FN) promotes differentiation into the smooth muscle cell phenotype without affecting differentiation into glia, neurons, and melanocytes. Time course analysis indicated that the FN-induced effect was not related to massive cell death or proliferation of smooth muscle cells. Finally, by comparing clonal cultures of quail trunk NC cells grown on FN and collagen type IV (CLIV), we found that FN strongly increased both NC cell survival and the proportion of unipotent and oligopotent NC progenitors endowed with smooth muscle potential. In contrast, melanocytic progenitors were prominent in clonogenic NC cells grown on CLIV. Taken together, these results show that FN promotes NC cell differentiation along the smooth muscle lineage, and therefore plays an important role in fate decisions of NC progenitor cells.

  9. Extreme learning machines 2013 algorithms and applications

    CERN Document Server

    Toh, Kar-Ann; Romay, Manuel; Mao, Kezhi

    2014-01-01

    In recent years, ELM has emerged as a revolutionary technique of computational intelligence and has attracted considerable attention. An extreme learning machine (ELM) is a learning system akin to a single-layer feed-forward neural network, whose connections from the input layer to the hidden layer are randomly generated, while the connections from the hidden layer to the output layer are learned through linear learning methods. The outstanding merits of the extreme learning machine (ELM) are its fast learning speed, minimal human intervention and high scalability.   This book contains selected papers from the International Conference on Extreme Learning Machine 2013, which was held in Beijing, China, October 15-17, 2013. This conference aims to bring together researchers and practitioners of extreme learning machine from a variety of fields including artificial intelligence, biomedical engineering and bioinformatics, system modelling and control, and signal and image processing, to promote research and discu...
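    The ELM recipe summarized in this blurb is short enough to sketch end to end: draw the input-to-hidden weights at random and never train them, then solve the hidden-to-output weights in closed form by linear least squares. The toy 1-D regression target and layer sizes below are assumptions for illustration.

```python
import numpy as np

# Sketch of an extreme learning machine: random fixed input weights, and
# output weights solved by least squares (no iterative training). The
# 1-D regression target sin(3x) and the layer sizes are illustrative.

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Returns random hidden-layer parameters and the solved output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # closed-form least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.linspace(-1.0, 1.0, 100).reshape(-1, 1)
y = np.sin(3.0 * X[:, 0])
W, b, beta = elm_fit(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

    The single pseudoinverse solve is where the "fast learning speed" claim comes from: there is no backpropagation loop at all, only one linear-algebra step.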

  10. A Comparative Classification of Wheat Grains for Artificial Neural Network and Extreme Learning Machine

    OpenAIRE

    ASLAN, Muhammet Fatih; SABANCI, Kadir; YİĞİT, Enes; KAYABAŞI, Ahmet; TOKTAŞ, Abdurrahim; DUYSAK, Hüseyin

    2018-01-01

    In this study, classification of two types of wheat grains into bread and durum was carried out. The species of wheat grains in this dataset are bread and durum and these species have equal samples in the dataset as 100 instances. Seven features, including width, height, area, perimeter, roundness, width and perimeter/area were extracted from each wheat grain. Classification was separately conducted by Artificial Neural Network (ANN) and Extreme Learning Machine (ELM) artificial intelligence techn...

  11. Neural controller for adaptive movements with unforeseen payloads.

    Science.gov (United States)

    Kuperstein, M; Wang, J

    1990-01-01

    A theory and computer simulation of a neural controller that learns to move and position a link carrying an unforeseen payload accurately are presented. The neural controller learns adaptive dynamic control from its own experience. It does not use information about link mass, link length, or direction of gravity, and it uses only indirect uncalibrated information about payload and actuator limits. Its average positioning accuracy across a large range of payloads after learning is 3% of the positioning range. This neural controller can be used as a basis for coordinating any number of sensory inputs with limbs of any number of joints. The feedforward nature of control allows parallel implementation in real time across multiple joints.

  12. Evolutionary pruning of transfer learned deep convolutional neural network for breast cancer diagnosis in digital breast tomosynthesis.

    Science.gov (United States)

    Samala, Ravi K; Chan, Heang-Ping; Hadjiiski, Lubomir M; Helvie, Mark A; Richter, Caleb; Cha, Kenny

    2018-05-01

    Deep learning models are highly parameterized, resulting in difficulty in inference and transfer learning for image recognition tasks. In this work, we propose a layered pathway evolution method to compress a deep convolutional neural network (DCNN) for classification of masses in digital breast tomosynthesis (DBT). The objective is to prune the number of tunable parameters while preserving the classification accuracy. In the first stage transfer learning, 19 632 augmented regions-of-interest (ROIs) from 2454 mass lesions on mammograms were used to train a pre-trained DCNN on ImageNet. In the second stage transfer learning, the DCNN was used as a feature extractor followed by feature selection and random forest classification. The pathway evolution was performed using genetic algorithm in an iterative approach with tournament selection driven by count-preserving crossover and mutation. The second stage was trained with 9120 DBT ROIs from 228 mass lesions using leave-one-case-out cross-validation. The DCNN was reduced by 87% in the number of neurons, 34% in the number of parameters, and 95% in the number of multiply-and-add operations required in the convolutional layers. The test AUC on 89 mass lesions from 94 independent DBT cases before and after pruning were 0.88 and 0.90, respectively, and the difference was not statistically significant (p  >  0.05). The proposed DCNN compression approach can reduce the number of required operations by 95% while maintaining the classification performance. The approach can be extended to other deep neural networks and imaging tasks where transfer learning is appropriate.
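
    The pathway-evolution loop described above (tournament selection driven by count-preserving crossover and mutation over neuron masks) can be sketched as a toy genetic algorithm. The synthetic per-neuron "importance" fitness below is an assumption standing in for the pruned classifier's AUC, and all sizes are illustrative:

    ```python
    import numpy as np

    # Toy evolutionary pruning: evolve binary masks that keep exactly KEEP
    # neurons, using tournament selection with count-preserving operators.
    rng = np.random.default_rng(1)
    N_NEURONS, KEEP, POP, GENS = 40, 10, 30, 60
    importance = rng.random(N_NEURONS)          # stand-in for each neuron's value

    def random_mask():
        m = np.zeros(N_NEURONS, bool)
        m[rng.choice(N_NEURONS, KEEP, replace=False)] = True
        return m

    def fitness(mask):                          # stand-in for pruned-model AUC
        return importance[mask].sum()

    def crossover(a, b):
        """Count-preserving: child keeps exactly KEEP neurons drawn from parents."""
        pool = np.flatnonzero(a | b)
        child = np.zeros(N_NEURONS, bool)
        child[rng.choice(pool, KEEP, replace=False)] = True
        return child

    def mutate(mask):
        """Swap one kept neuron for one pruned neuron, preserving the count."""
        on, off = np.flatnonzero(mask), np.flatnonzero(~mask)
        mask = mask.copy()
        mask[rng.choice(on)] = False
        mask[rng.choice(off)] = True
        return mask

    pop = [random_mask() for _ in range(POP)]
    for _ in range(GENS):
        def tournament():
            i, j = rng.choice(POP, 2, replace=False)
            return pop[i] if fitness(pop[i]) >= fitness(pop[j]) else pop[j]
        pop = [mutate(crossover(tournament(), tournament())) for _ in range(POP)]

    best = max(pop, key=fitness)                # best surviving pruning mask
    ```

    The count-preserving operators are what guarantee a fixed compression ratio: every candidate in every generation keeps exactly the same number of neurons.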

  14. Global and local missions of cAMP signaling in neural plasticity, learning and memory

    Directory of Open Access Journals (Sweden)

    Daewoo eLee

    2015-08-01

    Full Text Available The fruit fly Drosophila melanogaster has been a popular model to study cAMP signaling and the resultant behaviors due to its powerful genetic approaches. All molecular components (AC, PDE, PKA, CREB, etc.) essential for cAMP signaling have been identified in the fly. Among them, the adenylyl cyclase (AC) gene rutabaga and the phosphodiesterase (PDE) gene dunce have been intensively studied to understand the role of cAMP signaling. Interestingly, these two mutant genes were originally identified on the basis of associative learning deficits. This commentary summarizes findings on the role of cAMP in Drosophila neuronal excitability, synaptic plasticity and memory. It mainly focuses on two distinct mechanisms (global versus local) regulating excitatory and inhibitory synaptic plasticity related to cAMP homeostasis. This dual regulatory role of cAMP is to increase the strength of excitatory neural circuits on one hand, but to act locally on postsynaptic GABA receptors to decrease inhibitory synaptic plasticity on the other. Thus the action of cAMP could result in a global increase in neural circuit excitability and memory. Implications of this cAMP signaling for drug discovery for neural diseases are also described.

  15. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    Science.gov (United States)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we developed a powerful methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.
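
    The Metropolis sampling step the abstract refers to can be sketched for a one-weight model. A plain Gaussian prior stands in here for the paper's modified Jeffreys prior, and the data, step size, and chain length are illustrative assumptions:

    ```python
    import numpy as np

    # Random-walk Metropolis over a single model weight: the retained samples
    # approximate the posterior, giving both a point estimate and its residual
    # uncertainty.
    rng = np.random.default_rng(2)
    X = np.linspace(-1, 1, 50)
    y = 2.0 * X + rng.normal(0, 0.1, 50)              # data from y = 2x + noise

    def log_post(w, sigma=0.1, prior_sd=10.0):
        log_lik = -0.5 * np.sum((y - w * X) ** 2) / sigma**2
        log_prior = -0.5 * w**2 / prior_sd**2         # Gaussian prior (assumption)
        return log_lik + log_prior

    w, samples = 0.0, []
    for step in range(5000):
        prop = w + rng.normal(0, 0.05)                # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(w):
            w = prop                                  # Metropolis accept
        if step >= 1000:                              # discard burn-in
            samples.append(w)

    posterior_mean = np.mean(samples)                 # point estimate
    posterior_sd = np.std(samples)                    # residual weight uncertainty
    ```

    Unlike a single gradient-descent solution, the chain yields a full distribution over the weight, so predictions inherit an uncertainty estimate for free.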

  16. YF22 Model With On-Board On-Line Learning Microprocessors-Based Neural Algorithms for Autopilot and Fault-Tolerant Flight Control Systems

    National Research Council Canada - National Science Library

    Napolitano, Marcello

    2002-01-01

    This project focused on investigating the potential of on-line learning 'hardware-based' neural approximators and controllers to provide fault tolerance capabilities following sensor and actuator failures...

  17. Involving postgraduate's students in undergraduate small group teaching promotes active learning in both

    Science.gov (United States)

    Kalra, Ruchi; Modi, Jyoti Nath; Vyas, Rashmi

    2015-01-01

    Background: Lecture is a common traditional method for teaching, but it may not stimulate higher-order thinking, and students may also be hesitant to express themselves and interact. Postgraduate (PG) students are typically less involved with undergraduate (UG) teaching. A team-based small-group active learning method can contribute to a better learning experience. Aim: To promote active learning skills among UG students using small-group teaching methods involving PG students as facilitators, imparting hands-on supervised training in teaching and managerial skills. Methodology: After institutional approval, 92 UGs and 8 PGs participated under faculty supervision in 6 small-group sessions utilizing the jigsaw technique. Feedback was collected from both. Observations: Undergraduate feedback (percentage of students who agreed): learning in small groups was a good experience as it helped in better understanding of the subject (72%), students explored multiple reading resources (79%), they were actively involved in self-learning (88%), students reported initial apprehension about performance (71%), identified their learning gaps (86%), the team enhanced their learning process (71%), informal learning in place of lecture was a welcome change (86%), it improved their communication skills (82%), and small-group learning can be useful for future self-learning (75%). Postgraduate feedback: the majority performed facilitation for the first time and perceived their performance as good (75%), it was helpful in self-learning (100%), they felt confident managing students in small groups (100%), and as facilitators they improved their teaching skills, found the method more useful, and better identified their own learning gaps (87.5%). Conclusions: Learning in small groups adopting a team-based approach involving both UGs and PGs promoted active learning in both and enhanced the teaching skills of the PGs. PMID:26380201

  18. Spatial learning depends on both the addition and removal of new hippocampal neurons.

    Directory of Open Access Journals (Sweden)

    David Dupret

    2007-08-01

    Full Text Available The role of adult hippocampal neurogenesis in spatial learning remains a matter of debate. Here, we show that spatial learning modifies neurogenesis by inducing a cascade of events that resembles the selective stabilization process characterizing development. Learning promotes survival of relatively mature neurons, apoptosis of more immature cells, and finally, proliferation of neural precursors. These are three interrelated events mediating learning. Thus, blocking apoptosis impairs memory and inhibits learning-induced cell survival and cell proliferation. In conclusion, during learning, similar to the selective stabilization process, neuronal networks are sculpted by a tightly regulated selection and suppression of different populations of newly born neurons.

  19. Comment on ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’

    Science.gov (United States)

    Valdes, Gilmer; Interian, Yannet

    2018-03-01

    The application of machine learning (ML) presents tremendous opportunities for the field of oncology, thus we read ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’ with great interest. In this article, the authors used state of the art techniques: a pre-trained convolutional neural network (VGG-16 CNN), transfer learning, data augmentation, drop out and early stopping, all of which are directly responsible for the success and the excitement that these algorithms have created in other fields. We believe that the use of these techniques can offer tremendous opportunities in the field of Medical Physics and as such we would like to praise the authors for their pioneering application to the field of Radiation Oncology. That being said, given that the field of Medical Physics has unique characteristics that differentiate us from those fields where these techniques have been applied successfully, we would like to raise some points for future discussion and follow up studies that could help the community understand the limitations and nuances of deep learning techniques.

  20. A neural network-based exploratory learning and motor planning system for co-robots

    Directory of Open Access Journals (Sweden)

    Byron V Galbraith

    2015-07-01

    Full Text Available Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or learning by doing, an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
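
    The motor-babbling idea described above (random commands, observed outcomes, an internal model for later goal-directed reaching) can be sketched on a toy arm. The two-joint planar kinematics and nearest-neighbor inverse model below are illustrative assumptions, not the Calliope's controller:

    ```python
    import numpy as np

    # Exploratory learning by motor babbling: issue random joint commands,
    # record the resulting hand positions, then reuse that experience as a
    # crude internal model for reaching a target.
    rng = np.random.default_rng(3)

    def forward(q):                                   # toy arm, unit link lengths
        return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                         np.sin(q[0]) + np.sin(q[0] + q[1])])

    # Babbling phase: random joint angles and the hand positions they produce
    Q = rng.uniform(0, np.pi / 2, size=(2000, 2))
    H = np.array([forward(q) for q in Q])

    def reach(target):
        """Inverse model: recall the babbled posture whose outcome is closest."""
        return Q[np.argmin(np.linalg.norm(H - target, axis=1))]

    target = forward(np.array([0.7, 0.5]))            # a reachable goal position
    q_hat = reach(target)
    err = np.linalg.norm(forward(q_hat) - target)     # reaching error
    ```

    No kinematic parameters are given to the learner; the mapping from vision (hand position) to proprioception (joint angles) is acquired purely from the babbled experience.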

  2. Language Learning Enhanced by Massive Multiple Online Role-Playing Games (MMORPGs) and the Underlying Behavioral and Neural Mechanisms

    Science.gov (United States)

    Zhang, Yongjun; Song, Hongwen; Liu, Xiaoming; Tang, Dinghong; Chen, Yue-e; Zhang, Xiaochu

    2017-01-01

    Massive Multiple Online Role-Playing Games (MMORPGs) have increased in popularity among children, juveniles, and adults since MMORPGs’ appearance in this digital age. MMORPGs can be applied to enhancing language learning, which is drawing researchers’ attention from different fields and many studies have validated MMORPGs’ positive effect on language learning. However, there are few studies on the underlying behavioral or neural mechanism of such effect. This paper reviews the educational application of the MMORPGs based on relevant macroscopic and microscopic studies, showing that gamers’ overall language proficiency or some specific language skills can be enhanced by real-time online interaction with peers and game narratives or instructions embedded in the MMORPGs. Mechanisms underlying the educational assistant role of MMORPGs in second language learning are discussed from both behavioral and neural perspectives. We suggest that attentional bias makes gamers/learners allocate more cognitive resources toward task-related stimuli in a controlled or an automatic way. Moreover, with a moderating role played by activation of reward circuit, playing the MMORPGs may strengthen or increase functional connectivity from seed regions such as left anterior insular/frontal operculum (AI/FO) and visual word form area to other language-related brain areas. PMID:28303097

  4. Biosignals learning and synthesis using deep neural networks.

    Science.gov (United States)

    Belo, David; Rodrigues, João; Vaz, João R; Pezarat-Correia, Pedro; Gamboa, Hugo

    2017-09-25

    Modeling physiological signals is a complex task, both for understanding and for synthesizing biomedical signals. We propose a deep neural network model that learns and synthesizes biosignals, validated by morphological equivalence with the original ones. This research could lead to the creation of novel algorithms for signal reconstruction in heavily noisy data and for source detection in the biomedical engineering field. The present work explores gated recurrent units (GRU) in the training of respiration (RESP), electromyogram (EMG) and electrocardiogram (ECG) signals. Each signal is pre-processed, segmented and quantized into a specific number of classes, corresponding to the amplitude of each sample, and fed to the model, which is composed of an embedding matrix, three GRU blocks and a softmax function. This network is trained by adjusting its internal parameters, acquiring a representation of the abstract notion of the next value based on the previous ones. The simulated signal was generated by forecasting from a random value and re-feeding the output to the model. The resulting generated signals are similar to the morphological expression of the originals. During the learning process, after a set of iterations, the model starts to grasp the basic morphological characteristics of the signal and later its cyclic characteristics. After training, the models' predictions are close to the signals that trained them, especially for RESP and ECG. This synthesis mechanism has shown relevant results that inspire its use for characterizing signals from other physiological sources.
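
    The quantize-then-predict-next-sample loop can be illustrated in a drastically simplified form. A two-sample context table stands in for the GRU blocks, and a plain sine stands in for the RESP signal; both are assumptions for illustration, not the paper's model:

    ```python
    import numpy as np
    from collections import defaultdict

    # Quantize the signal into amplitude classes, learn the distribution of the
    # next class given a short context, then synthesize by re-feeding samples.
    rng = np.random.default_rng(8)
    N = 32
    signal = np.sin(np.linspace(0, 12 * np.pi, 3000))   # stand-in "RESP"

    edges = np.linspace(-1, 1, N + 1)[1:-1]             # interior bin boundaries
    cls = np.digitize(signal, edges)                    # amplitude classes 0..N-1

    # "Training": count which class follows each (previous, current) context
    counts = defaultdict(lambda: np.zeros(N))
    for a, b, c in zip(cls, cls[1:], cls[2:]):
        counts[(a, b)][c] += 1

    def generate(seed, length=2000):
        """Re-feed sampled classes to synthesize a new quantized signal."""
        out = list(seed)
        for _ in range(length):
            p = counts[(out[-2], out[-1])]
            out.append(int(rng.choice(N, p=p / p.sum())))
        return np.array(out)

    synth = generate(cls[:2])
    wave = np.linspace(-1, 1, N)[synth]                 # classes back to amplitudes
    ```

    The GRU in the paper plays the role of this context table with a far longer, learned memory; the pipeline shape (quantize, model the next-value distribution, re-feed predictions) is the same.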

  5. Evaluating the Visualization of What a Deep Neural Network Has Learned.

    Science.gov (United States)

    Samek, Wojciech; Binder, Alexander; Montavon, Gregoire; Lapuschkin, Sebastian; Muller, Klaus-Robert

    Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision, given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the "importance" of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel/input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper, we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets. Our main result is that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of the neural network performance.
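
    The region-perturbation measure can be sketched on a toy model: remove pixels in the order a heatmap ranks them and track how quickly the classification score falls. The linear "classifier" and random image below are illustrative stand-ins, not the DNNs or data sets evaluated in the paper:

    ```python
    import numpy as np

    # A faithful heatmap should make the score collapse fastest when its
    # top-ranked pixels are perturbed first.
    rng = np.random.default_rng(4)
    w = rng.normal(size=(8, 8))                  # toy linear scorer weights
    x = rng.normal(size=(8, 8))                  # toy "image"

    def score(img):
        return float(np.sum(w * img))

    def perturbation_curve(heatmap, steps=20):
        """Zero pixels in decreasing heatmap order; record the score after each."""
        order = np.argsort(heatmap, axis=None)[::-1]
        img = x.copy()
        scores = [score(img)]
        for idx in order[:steps]:
            img[np.unravel_index(idx, img.shape)] = 0.0
            scores.append(score(img))
        return np.array(scores)

    curve_good = perturbation_curve(w * x)       # heatmap = exact contributions
    curve_rand = perturbation_curve(rng.normal(size=(8, 8)))
    drop_good = curve_good[0] - curve_good[-1]   # faithful heatmap: fast drop
    drop_rand = curve_rand[0] - curve_rand[-1]
    ```

    Comparing the area under such curves is what turns heatmap quality into an objective number rather than a subjective visual judgment.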

  6. Using Authentic Medication Errors to Promote Pharmacy Student Critical Thinking and Active Learning

    Directory of Open Access Journals (Sweden)

    Reza Karimi

    2018-01-01

    Full Text Available Objective: To promote first-year (P1) pharmacy students’ awareness of medication error prevention and to support student learning in the biomedical and pharmaceutical sciences. Innovation: A novel curricular activity was created, referred to as “Medication Errors and Sciences Applications (MESA)”. The MESA activity encouraged discussions of patient safety among students and faculty to link medication errors to the biomedical and pharmaceutical sciences, which ultimately reinforced student learning in P1 curricular topics. Critical Analysis: Three P1 cohorts implemented the MESA activity and approximately 75% of students from each cohort completed a reliable assessment instrument. Each P1 cohort had at least 14 student teams who generated professional reports analyzing authentic medication errors. The quantitative assessment results indicated that 70-85% of students believed that the MESA activity improved student learning in the biomedical and pharmaceutical sciences. More than 95% of students agreed that the MESA activity introduced them to medication errors. Approximately 90% of students agreed that the MESA activity integrated the knowledge and skills they developed through the P1 curriculum, promoted active learning and critical thinking, and encouraged them to be self-directed learners. Furthermore, our data indicated that approximately 90% of students stated that completing the MESA activity promoted achievement of the six learning objectives of Bloom’s taxonomy. Next Steps: Pharmacy students’ awareness of medication errors is a critical component of pharmacy education, which pharmacy educators can integrate with the biomedical and pharmaceutical sciences to enhance student learning in the P1 year. Treatment of Human Subjects: IRB exemption granted. Type: Note. License: CC BY

  7. A peptide mimetic targeting trans-homophilic NCAM binding sites promotes spatial learning and neural plasticity in the hippocampus

    DEFF Research Database (Denmark)

    Kraev, Igor; Henneberger, Christian; Rossetti, Clara

    2011-01-01

    The key roles played by the neural cell adhesion molecule (NCAM) in plasticity and cognition underscore this membrane protein as a relevant target to develop cognitive-enhancing drugs. However, NCAM is a structurally and functionally complex molecule with multiple domains engaged in a variety of ...

  8. A machine learning model with human cognitive biases capable of learning from small and biased datasets.

    Science.gov (United States)

    Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro

    2018-05-09

    Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.

  9. Artificial intelligence expert systems with neural network machine learning may assist decision-making for extractions in orthodontic treatment planning.

    Science.gov (United States)

    Takada, Kenji

    2016-09-01

    New approach for the diagnosis of extractions with neural network machine learning. Seok-Ki Jung and Tae-Woo Kim. Am J Orthod Dentofacial Orthop 2016;149:127-33. Not reported. Mathematical modeling.

  10. Ideomotor feedback control in a recurrent neural network.

    Science.gov (United States)

    Galtier, Mathieu

    2015-06-01

    The architecture of a neural network controlling an unknown environment is presented. It is based on a randomly connected recurrent neural network from which both perception and action are simultaneously read and fed back. There are two concurrent learning rules implementing a sort of ideomotor control: (i) perception is learned along the principle that the network should predict reliably its incoming stimuli; (ii) action is learned along the principle that the prediction of the network should match a target time series. The coherent behavior of the neural network in its environment is a consequence of the interaction between the two principles. Numerical simulations show a promising performance of the approach, which can be turned into a local and better "biologically plausible" algorithm.
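
    Principle (i) above can be sketched with an echo-state-style network: a fixed random recurrent reservoir whose linear readout is trained so the network predicts its own incoming stimulus one step ahead. The reservoir size, the 0.9 spectral-radius scaling, and the sinusoidal stimulus are illustrative assumptions:

    ```python
    import numpy as np

    # Fixed random recurrent network; only a linear readout is learned, so
    # "perception" here means predicting the next incoming stimulus reliably.
    rng = np.random.default_rng(9)
    N_RES = 100
    W = rng.normal(0, 1, (N_RES, N_RES))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep dynamics contractive
    w_in = rng.normal(0, 1, N_RES)

    u = np.sin(0.2 * np.arange(600))                  # incoming stimulus
    x, states = np.zeros(N_RES), []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)               # random recurrent dynamics
        states.append(x)
    S = np.array(states)

    # Readout trained to predict the next stimulus from the current state
    S_train, y_train = S[100:-1], u[101:]             # drop the initial transient
    w_out, *_ = np.linalg.lstsq(S_train, y_train, rcond=None)
    err = np.mean((S_train @ w_out - y_train) ** 2)   # one-step prediction error
    ```

    The action-learning principle (ii) would add a second readout trained against a target time series; the coherent behavior arises from running both on the same random reservoir.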

  11. Feature Selection Methods for Zero-Shot Learning of Neural Activity

    Directory of Open Access Journals (Sweden)

    Carlos A. Caceres

    2017-06-01

    Full Text Available Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional magnetic resonance imaging and electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
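
    The correlation-based stability baseline the study compares against can be sketched simply: a feature is kept when its response profile over stimuli is reproducible across repeated presentations. The data shapes, noise levels, and which features carry signal are illustrative assumptions:

    ```python
    import numpy as np

    # Split-half stability: correlate each feature's stimulus profile across
    # two repetitions and keep the most reproducible features.
    rng = np.random.default_rng(5)
    n_stim, n_feat = 30, 100
    signal = rng.normal(size=(n_stim, n_feat))

    def record(noise):
        """One repetition: only the first 20 features carry stimulus signal."""
        return signal * (np.arange(n_feat) < 20) + rng.normal(0, noise, (n_stim, n_feat))

    rep1, rep2 = record(0.3), record(0.3)

    def stability(a, b):
        """Per-feature Pearson correlation between the two repetitions."""
        a = a - a.mean(0)
        b = b - b.mean(0)
        return (a * b).sum(0) / np.sqrt((a**2).sum(0) * (b**2).sum(0))

    stab = stability(rep1, rep2)
    selected = np.argsort(stab)[-20:]          # keep the 20 most stable features
    ```

    Stability selects reproducible features regardless of whether they help the attribute mapping; the feature/attribute correlation approach in the study differs precisely by scoring features against the semantic attributes themselves.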

  12. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    Science.gov (United States)

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this is quite difficult for computers. As a hot and difficult problem, object classification has been receiving extensive interest with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed. The convolutional neural network (CNN), as one of the methods of deep learning, can be used to solve classification problems. But most deep learning methods, including CNN, ignore the human visual information processing mechanism at work when a person is classifying objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we bring forth a new classification method which combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features and to extract the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method has apparent advantages in biological plausibility. Experimental results demonstrated that our method improved classification performance significantly.

  13. DeepX: Deep Learning Accelerator for Restricted Boltzmann Machine Artificial Neural Networks.

    Science.gov (United States)

    Kim, Lok-Won

    2018-05-01

    Although there have been many decades of research and commercial presence on high-performance general-purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used in a wide variety of applications, but its heavy computation demand has considerably limited its practical application. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of an artificial neural network (ANN), specifically restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the-art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301 billion connection-updates-per-second and about 193 times higher performance than a software solution running on general-purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).
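
    The computation such an accelerator pipelines can be sketched on a CPU: one RBM trained by contrastive divergence (CD-1) on a 128-case batch, where each weight change is one "connection update". Layer sizes, learning rate, and data are illustrative assumptions, and biases are omitted for brevity:

    ```python
    import numpy as np

    # Minimal RBM with CD-1: positive phase, one-step reconstruction, and a
    # weight update from the difference of the two correlation terms.
    rng = np.random.default_rng(6)
    n_vis, n_hid, lr = 16, 8, 0.1
    W = rng.normal(0, 0.1, (n_vis, n_hid))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def recon_error(v):
        """Mean squared error of a one-step reconstruction."""
        return np.mean((v - sigmoid(sigmoid(v @ W) @ W.T)) ** 2)

    def cd1_update(v0):
        """One connection-update pass over a batch."""
        h0 = sigmoid(v0 @ W)                             # positive phase
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ W.T)                     # reconstruction
        h1 = sigmoid(v1 @ W)
        return lr * (v0.T @ h0 - v1.T @ h1) / len(v0)

    # 128 input cases per batch: two complementary prototypes with 5% bit flips
    proto = np.array([1.0] * 8 + [0.0] * 8)
    batch = np.vstack([proto] * 64 + [1.0 - proto] * 64)
    batch = np.abs(batch - (rng.random(batch.shape) < 0.05))

    before = recon_error(batch)
    for _ in range(500):
        W += cd1_update(batch)
    after = recon_error(batch)
    ```

    Each pass is dominated by the dense matrix products, which is exactly the regular, batch-parallel structure the FPGA pipeline exploits.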

  14. A study of reactor monitoring method with neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nabeshima, Kunihiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The purpose of this study is to investigate a methodology for Nuclear Power Plant (NPP) monitoring with neural networks, which create plant models by learning past normal operation patterns. The concept of this method is to detect the symptoms of small anomalies by monitoring the deviations between the process signals measured from an actual plant and the corresponding output signals from the neural network model, which will differ when abnormal operational patterns are presented to the input of the neural network. An auto-associative network, whose outputs reproduce its inputs, can detect any kind of anomaly condition using normal operation data only. Monitoring tests of the feedforward neural network with adaptive learning were performed using a PWR plant simulator, with which many kinds of anomaly conditions can be easily simulated. The adaptively trained feedforward network could follow the actual plant dynamics and the changes of plant condition, and then find most of the anomalies much earlier than the conventional alarm system during steady-state and transient operations. Off-line and on-line test results during one year of operation at an actual NPP (PWR) then showed that the neural network could detect several small anomalies which the operators and the conventional alarm system did not notice. Furthermore, sensitivity analysis suggests that the plant models built by the neural networks are appropriate. Finally, simulation results show that a recurrent neural network with feedback connections could successfully model the slow behavior of the reactor dynamics without adaptive learning. Therefore, the recurrent neural network with adaptive learning will be the best choice for an actual reactor monitoring system. (author)
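
The monitoring principle (train an auto-associative network on normal data only, then flag deviations between measured signals and their reconstruction) can be sketched as follows. The three correlated "process signals", the linear bottleneck network, and the threshold rule are toy stand-ins for the plant simulator and network used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal operation" data: three process signals that are linearly
# correlated (they lie near a 2-D subspace), plus small sensor noise.
t = np.linspace(0, 20, 500)
normal = np.stack([np.sin(t), 0.5 * np.sin(t), np.cos(t)], axis=1)
normal += 0.01 * rng.standard_normal(normal.shape)

# Auto-associative (bottleneck) network: 3 -> 2 -> 3, trained by gradient
# descent to reproduce its inputs on normal data only.
W1 = 0.5 * rng.standard_normal((3, 2))
W2 = 0.5 * rng.standard_normal((2, 3))
for _ in range(5000):
    h = normal @ W1
    err = h @ W2 - normal
    W2 -= 0.5 * h.T @ err / len(normal)
    W1 -= 0.5 * normal.T @ (err @ W2.T) / len(normal)

def residual(x):
    """Deviation between measured signals and the network's reconstruction."""
    return np.abs((x @ W1) @ W2 - x).max(axis=-1)

threshold = residual(normal).max() * 1.5

# A drift on one sensor breaks the learned correlation and is flagged.
drifted = normal + np.array([0.0, 0.3, 0.0])
flagged = residual(drifted) > threshold
print(flagged.mean())
```

Because the network only ever sees normal data, any deviation from the learned correlation structure raises the residual, which is the anomaly symptom the abstract describes.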

  15. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to adaptively compensate for the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, determined using Lyapunov theory, is constructed using a sigma-modification-type updating law. The control error is evaluated in terms of the neural network learning error: the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
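
A minimal worked example of a sigma-modification updating law for a first-order nonlinear plant is sketched below. The specific plant x' = θ*x² + u, the gains, and the reference model are illustrative choices, not the paper's system or its RBF/feedforward approximator:

```python
import numpy as np

# First-order plant   x'  = theta* x^2 + u   with unknown theta*.
# Reference model     xm' = -am*xm + am*r.
# Certainty-equivalence control u = -am*x + am*r - theta_hat*x^2 gives the
# error dynamics e' = -am*e + (theta* - theta_hat)*x^2, with e = x - xm.
theta_true, am = 0.5, 2.0
gamma, sigma = 10.0, 0.01          # adaptation gain and sigma-modification
dt, steps = 1e-3, 20000

x = xm = theta_hat = 0.0
e_hist = []
for k in range(steps):
    r = np.sin(k * dt)
    u = -am * x + am * r - theta_hat * x * x
    e = x - xm
    # sigma-modification: the leakage term -sigma*gamma*theta_hat keeps the
    # parameter estimate bounded without requiring persistent excitation
    theta_hat += dt * (gamma * e * x * x - sigma * gamma * theta_hat)
    x += dt * (theta_true * x * x + u)
    xm += dt * (-am * xm + am * r)
    e_hist.append(abs(e))

print(np.mean(e_hist[-1000:]))
```

As in the paper's analysis, the tracking error does not go exactly to zero: the leakage term trades exact convergence for robustness, leaving a small residual neighborhood.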

  16. Active Neural Localization

    OpenAIRE

    Chaplot, Devendra Singh; Parisotto, Emilio; Salakhutdinov, Ruslan

    2018-01-01

    Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose "Active Neural Localizer", a fully differentiable neural network that learns to localize accurately and efficiently. The proposed model incorporates ideas of tradition...

  17. Improved Neural Signal Classification in a Rapid Serial Visual Presentation Task Using Active Learning.

    Science.gov (United States)

    Marathe, Amar R; Lawhern, Vernon J; Wu, Dongrui; Slayback, David; Lance, Brent J

    2016-03-01

    The application space for brain-computer interface (BCI) technologies is rapidly expanding with improvements in technology. However, most real-time BCIs require extensive individualized calibration prior to use, and systems often have to be recalibrated to account for changes in the neural signals due to a variety of factors including changes in human state, the surrounding environment, and task conditions. Novel approaches to reduce calibration time or effort will dramatically improve the usability of BCI systems. Active Learning (AL) is an iterative semi-supervised learning technique for learning in situations in which data may be abundant, but labels for the data are difficult or expensive to obtain. In this paper, we apply AL to a simulated BCI system for target identification using data from a rapid serial visual presentation (RSVP) paradigm to minimize the amount of training samples needed to initially calibrate a neural classifier. Our results show AL can produce similar overall classification accuracy with significantly less labeled data (in some cases less than 20%) when compared to alternative calibration approaches. In fact, AL classification performance matches performance of 10-fold cross-validation (CV) in over 70% of subjects when training with less than 50% of the data. To our knowledge, this is the first work to demonstrate the use of AL for offline electroencephalography (EEG) calibration in a simulated BCI paradigm. While AL itself is not often amenable for use in real-time systems, this work opens the door to alternative AL-like systems that are more amenable for BCI applications and thus enables future efforts for developing highly adaptive BCI systems.
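
The calibration-saving idea (query labels only for the samples the current classifier is least certain about) can be sketched with uncertainty sampling on synthetic two-class data. The Gaussian "EEG features" and the tiny logistic classifier are stand-ins for the RSVP data and neural classifier used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes standing in for "target" vs "non-target" EEG features.
n = 100
X = np.vstack([rng.normal(-1, 0.7, (n, 2)), rng.normal(1, 0.7, (n, 2))])
y = np.hstack([np.zeros(n), np.ones(n)])

def train_logreg(Xl, yl, iters=500, lr=0.5):
    w, b = np.zeros(2), 0.0
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(Xl @ w + b)))
        w -= lr * Xl.T @ (p - yl) / len(yl)
        b -= lr * (p - yl).mean()
    return w, b

labeled = [0, 1, n, n + 1]             # tiny seed set, two per class
for _ in range(16):
    # uncertainty sampling: query the unlabeled point closest to the boundary
    w, b = train_logreg(X[labeled], y[labeled])
    margin = np.abs(X @ w + b)
    margin[labeled] = np.inf           # never re-query a labeled point
    labeled.append(int(np.argmin(margin)))

w, b = train_logreg(X[labeled], y[labeled])
acc = np.mean(((X @ w + b) > 0) == y)
print(len(labeled), acc)
```

The loop mirrors the abstract's claim: a classifier calibrated from a small, actively chosen fraction of the labels can match one trained on far more data.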

  18. Neural robust stabilization via event-triggering mechanism and adaptive learning technique.

    Science.gov (United States)

    Wang, Ding; Liu, Derong

    2018-06-01

    The robust control synthesis of continuous-time nonlinear systems with uncertain term is investigated via event-triggering mechanism and adaptive critic learning technique. We mainly focus on combining the event-triggering mechanism with adaptive critic designs, so as to solve the nonlinear robust control problem. This can not only make better use of computation and communication resources, but also conduct controller design from the view of intelligent optimization. Through theoretical analysis, the nonlinear robust stabilization can be achieved by obtaining an event-triggered optimal control law of the nominal system with a newly defined cost function and a certain triggering condition. The adaptive critic technique is employed to facilitate the event-triggered control design, where a neural network is introduced as an approximator of the learning phase. The performance of the event-triggered robust control scheme is validated via simulation studies and comparisons. The present method extends the application domain of both event-triggered control and adaptive critic control to nonlinear systems possessing dynamical uncertainties. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Incorporating deep learning with convolutional neural networks and position specific scoring matrices for identifying electron transport proteins.

    Science.gov (United States)

    Le, Nguyen-Quoc-Khanh; Ho, Quang-Thai; Ou, Yu-Yen

    2017-09-05

    In recent years, deep learning has become a modern machine learning technique used in a variety of fields with state-of-the-art performance. Utilizing deep learning to enhance performance is therefore an important direction for the bioinformatics field as well. In this study, we use deep learning via convolutional neural networks and position-specific scoring matrices to identify electron transport proteins, which perform an important molecular function in transmembrane proteins. Our deep learning method achieves a precise model for identifying electron transport proteins, with a sensitivity of 80.3%, specificity of 94.4%, accuracy of 92.3%, and MCC of 0.71 on an independent dataset. The proposed technique can serve as a powerful tool for identifying electron transport proteins and can help biologists understand their function. Moreover, this study provides a basis for further research into applying deep learning in bioinformatics. © 2017 Wiley Periodicals, Inc.
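
The reported figures (sensitivity, specificity, accuracy, MCC) are all functions of the confusion-matrix counts. A small helper makes the definitions explicit; the example confusion matrix is illustrative, chosen only to roughly match the abstract's sensitivity and specificity, not the study's actual counts:

```python
import math

def metrics(tp, fp, tn, fn):
    """Classification metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)                       # true positive rate
    spec = tn / (tn + fp)                       # true negative rate
    acc = (tp + tn) / (tp + fp + tn + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, acc, mcc

# Hypothetical counts: 100 positives, 100 negatives.
print(metrics(80, 6, 94, 20))
```

MCC is the most informative single number here because, unlike accuracy, it stays low when one class dominates and the classifier exploits the imbalance.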

  20. Learning to Generate Sequences with Combination of Hebbian and Non-hebbian Plasticity in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Panda, Priyadarshini; Roy, Kaushik

    2017-01-01

    Synaptic plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine standard spike-timing-correlation-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that including an adaptive decay of synaptic weights alongside standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme substantially suppresses the chaotic activity in the recurrent model, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations.
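
The combined rule (pairwise STDP plus a non-Hebbian exponential weight decay) can be sketched as follows. The constants and spike trains are illustrative, and as a simplification the decay is applied once over the window rather than interleaved step-by-step:

```python
import numpy as np

A_PLUS, A_MINUS = 0.02, 0.021
TAU = 20.0        # STDP time constant (ms)
DECAY = 0.001     # non-Hebbian per-millisecond weight decay

def stdp(w, pre_times, post_times, t_end):
    """All-to-all pairwise STDP followed by exponential synaptic decay."""
    for t_post in post_times:
        for t_pre in pre_times:
            dt = t_post - t_pre
            if dt > 0:
                w += A_PLUS * np.exp(-dt / TAU)    # pre before post: potentiate
            elif dt < 0:
                w -= A_MINUS * np.exp(dt / TAU)    # post before pre: depress
    # non-Hebbian decay: weights relax toward zero regardless of correlations
    return w * (1 - DECAY) ** t_end

pre = np.arange(0, 500, 50.0)        # presynaptic spikes every 50 ms
post_causal = pre + 2.0              # postsynaptic spikes 2 ms later
w_paired = stdp(0.5, pre, post_causal, 500)
w_silent = stdp(0.5, pre, [], 500)   # no postsynaptic spikes: decay only
print(w_paired, w_silent)
```

The causal pairing keeps the used synapse strong while the unpaired synapse decays, which is the mechanism the abstract credits with weakening spurious attractor states.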

  1. Promoting system-level learning from project-level lessons

    Energy Technology Data Exchange (ETDEWEB)

    Jong, Amos A. de, E-mail: amosdejong@gmail.com [Innovation Management, Utrecht (Netherlands); Runhaar, Hens A.C., E-mail: h.a.c.runhaar@uu.nl [Section of Environmental Governance, Utrecht University, Utrecht (Netherlands); Runhaar, Piety R., E-mail: piety.runhaar@wur.nl [Organisational Psychology and Human Resource Development, University of Twente, Enschede (Netherlands); Kolhoff, Arend J., E-mail: Akolhoff@eia.nl [The Netherlands Commission for Environmental Assessment, Utrecht (Netherlands); Driessen, Peter P.J., E-mail: p.driessen@geo.uu.nl [Department of Innovation and Environment Sciences, Utrecht University, Utrecht (Netherlands)

    2012-02-15

    A growing number of low and middle income nations (LMCs) have adopted some sort of system for environmental impact assessment (EIA). However, generally many of these EIA systems are characterised by a low performance in terms of timely information dissemination, monitoring and enforcement after licencing. Donor actors (such as the World Bank) have attempted to contribute to a higher performance of EIA systems in LMCs by intervening at two levels: the project level (e.g. by providing scoping advice or EIS quality review) and the system level (e.g. by advising on EIA legislation or by capacity building). The aims of these interventions are environmental protection in concrete cases and enforcing the institutionalisation of environmental protection, respectively. Learning by actors involved is an important condition for realising these aims. A relatively underexplored form of learning concerns learning at EIA system-level via project level donor interventions. This 'indirect' learning potentially results in system changes that better fit the specific context(s) and hence contribute to higher performances. Our exploratory research in Ghana and the Maldives shows that thus far, 'indirect' learning only occurs incidentally and that donors play a modest role in promoting it. Barriers to indirect learning are related to the institutional context rather than to individual characteristics. Moreover, 'indirect' learning seems to flourish best in large projects where donors achieved a position of influence that they can use to evoke reflection upon system malfunctions. In order to enhance learning at all levels donors should thereby present the outcomes of the intervention elaborately (i.e. discuss the outcomes with a large audience), include practical suggestions about post-EIS activities such as monitoring procedures and enforcement options and stimulate the use of their advisory reports to generate organisational memory and ensure a better

  2. Promoting system-level learning from project-level lessons

    International Nuclear Information System (INIS)

    Jong, Amos A. de; Runhaar, Hens A.C.; Runhaar, Piety R.; Kolhoff, Arend J.; Driessen, Peter P.J.

    2012-01-01

    A growing number of low and middle income nations (LMCs) have adopted some sort of system for environmental impact assessment (EIA). However, generally many of these EIA systems are characterised by a low performance in terms of timely information dissemination, monitoring and enforcement after licencing. Donor actors (such as the World Bank) have attempted to contribute to a higher performance of EIA systems in LMCs by intervening at two levels: the project level (e.g. by providing scoping advice or EIS quality review) and the system level (e.g. by advising on EIA legislation or by capacity building). The aims of these interventions are environmental protection in concrete cases and enforcing the institutionalisation of environmental protection, respectively. Learning by actors involved is an important condition for realising these aims. A relatively underexplored form of learning concerns learning at EIA system-level via project level donor interventions. This ‘indirect’ learning potentially results in system changes that better fit the specific context(s) and hence contribute to higher performances. Our exploratory research in Ghana and the Maldives shows that thus far, ‘indirect’ learning only occurs incidentally and that donors play a modest role in promoting it. Barriers to indirect learning are related to the institutional context rather than to individual characteristics. Moreover, ‘indirect’ learning seems to flourish best in large projects where donors achieved a position of influence that they can use to evoke reflection upon system malfunctions. In order to enhance learning at all levels donors should thereby present the outcomes of the intervention elaborately (i.e. discuss the outcomes with a large audience), include practical suggestions about post-EIS activities such as monitoring procedures and enforcement options and stimulate the use of their advisory reports to generate organisational memory and ensure a better information

  3. Supervised learning in spiking neural networks with FORCE training.

    Science.gov (United States)

    Nicola, Wilten; Clopath, Claudia

    2017-12-20

    Populations of neurons display an extraordinary diversity in the behaviors they affect and display. Machine learning techniques have recently emerged that allow us to create networks of model neurons that display behaviors of similar complexity. Here we demonstrate the direct applicability of one such technique, the FORCE method, to spiking neural networks. We train these networks to mimic dynamical systems, classify inputs, and store discrete sequences that correspond to the notes of a song. Finally, we use FORCE training to create two biologically motivated model circuits. One is inspired by the zebra finch and successfully reproduces songbird singing. The second network is motivated by the hippocampus and is trained to store and replay a movie scene. FORCE trained networks reproduce behaviors comparable in complexity to their inspired circuits and yield information not easily obtainable with other techniques, such as behavioral responses to pharmacological manipulations and spike timing statistics.

  4. Evaluating a Gender Diversity Workshop to Promote Positive Learning Environments

    Science.gov (United States)

    Burford, James; Lucassen, Mathijs F. G.; Hamilton, Thomas

    2017-01-01

    Drawing on data from an Aotearoa/New Zealand study of more than 230 secondary students, this article evaluates the potential of a 60-min gender diversity workshop to address bullying and promote positive environments for learning. Students completed pre- and postworkshop questionnaires. The authors used descriptive statistics to summarize results…

  5. Role of Mobile Technology in Promoting Campus-Wide Learning Environment

    Science.gov (United States)

    Hussain, Irshad; Adeeb, Muhammad Aslam

    2009-01-01

    The present study examines the role of mobile technology in promoting a campus-wide learning environment. Its main objectives were (a) to evaluate the role of mobile technology in higher education in terms of its (i) appropriateness, (ii) flexibility, (iii) interactivity, and (iv) availability and usefulness, and (b) to identify the problems of…

  6. Sonic Hedgehog promotes the survival of neural crest cells by limiting apoptosis induced by the dependence receptor CDON during branchial arch development.

    Science.gov (United States)

    Delloye-Bourgeois, Céline; Rama, Nicolas; Brito, José; Le Douarin, Nicole; Mehlen, Patrick

    2014-09-26

    Cell-adhesion molecule-related/Downregulated by Oncogenes (CDO or CDON) was identified as a receptor for the classic morphogen Sonic Hedgehog (SHH). It has been shown that, in cell culture, CDO also behaves as a SHH dependence receptor: CDO actively triggers apoptosis in the absence of SHH via a proteolytic cleavage in the CDO intracellular domain. We present evidence that CDO is also pro-apoptotic in the developing neural tube, where SHH is known to act as a survival factor. SHH, produced by the ventral foregut endoderm, was shown to promote survival of facial neural crest cells (NCCs) that colonize the first branchial arch (BA1). We show here that the survival activity of SHH on neural crest cells is due to SHH-mediated inhibition of the pro-apoptotic activity of CDO. Silencing of CDO rescued NCCs from the apoptosis observed upon SHH inhibition in the ventral foregut endoderm. Thus, the SHH/CDO ligand/dependence-receptor pair may play an important role in neural crest cell survival during the formation of the first branchial arch. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. CAPES: Unsupervised Storage Performance Tuning Using Neural Network-Based Deep Reinforcement Learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Parameter tuning is an important task of storage performance optimization. Current practice usually involves numerous tweak-benchmark cycles that are slow and costly. To address this issue, we developed CAPES, a model-less deep reinforcement learning-based unsupervised parameter tuning system driven by a deep neural network (DNN). It is designed to find the optimal values of tunable parameters in computer systems, from a simple client-server system to a large data center, where human tuning can be costly and often cannot achieve optimal performance. CAPES takes periodic measurements of a target computer system’s state, and trains a DNN which uses Q-learning to suggest changes to the system’s current parameter values. CAPES is minimally intrusive, and can be deployed into a production system to collect training data and suggest tuning actions during the system’s daily operation. Evaluation of a prototype on a Lustre system demonstrates an increase in I/O throughput of up to 45% at saturation point.
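
The tweak-benchmark loop driven by Q-learning can be illustrated in tabular form. The one-dimensional "tunable parameter" with a quadratic throughput peak is a hypothetical benchmark (not Lustre), and CAPES itself uses a DNN approximator rather than a table:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical benchmark: throughput peaks at parameter value 7.
def throughput(p):
    return -(p - 7) ** 2

N_STATES = 16                 # parameter values 0..15
ACTIONS = (-1, +1)            # tweak the parameter down or up
Q = np.zeros((N_STATES, 2))
alpha, discount, eps = 0.5, 0.9, 0.2

for _ in range(500):          # tweak-benchmark episodes
    p = rng.integers(N_STATES)
    for _ in range(20):
        # epsilon-greedy action selection over the two tweaks
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[p]))
        p2 = min(max(p + ACTIONS[a], 0), N_STATES - 1)
        r = throughput(p2)    # reward = measured throughput after the tweak
        Q[p, a] += alpha * (r + discount * Q[p2].max() - Q[p, a])
        p = p2

# Greedy rollout from a poor initial setting climbs toward the optimum.
p = 0
for _ in range(20):
    p = min(max(p + ACTIONS[int(np.argmax(Q[p]))], 0), N_STATES - 1)
print(p)
```

The structure matches the abstract: periodic measurements supply the reward, and Q-learning turns slow manual tweak-benchmark cycles into an automated policy over parameter adjustments.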

  8. End-to-End Deep Neural Networks and Transfer Learning for Automatic Analysis of Nation-State Malware

    Directory of Open Access Journals (Sweden)

    Ishai Rosenberg

    2018-05-01

    Full Text Available Malware allegedly developed by nation-states, also known as advanced persistent threats (APT), are becoming more common. The task of attributing an APT to a specific nation-state or classifying it to the correct APT family is challenging for several reasons. First, each nation-state has more than a single cyber unit that develops such malware, rendering traditional authorship attribution algorithms useless. Furthermore, the dataset of such available APTs is still extremely small. Finally, those APTs use state-of-the-art evasion techniques, making feature extraction challenging. In this paper, we use a deep neural network (DNN) as a classifier for nation-state APT attribution. We record the dynamic behavior of the APT when run in a sandbox and use it as raw input for the neural network, allowing the DNN to learn high-level feature abstractions of the APTs itself. We also use the same raw features for APT family classification. Finally, we use the feature abstractions learned by the APT family classifier to solve the attribution problem. Using a test set of 1000 Chinese- and Russian-developed APTs, we achieved an accuracy rate of 98.6%.

  9. Motivational orientation modulates the neural response to reward.

    Science.gov (United States)

    Linke, Julia; Kirsch, Peter; King, Andrea V; Gass, Achim; Hennerici, Michael G; Bongers, André; Wessa, Michèle

    2010-02-01

    Motivational orientation defines the source of motivation for an individual to perform a particular action and can originate either from internal desires (e.g., interest) or from external compensation (e.g., money). Accordingly, motivational orientation should influence the way positive or negative feedback is processed during learning situations, and this might in turn have an impact on the learning process. In the present study, we thus investigated whether motivational orientation, i.e., extrinsic and intrinsic motivation, modulates the neural response to reward and punishment as well as learning from reward and punishment in 33 healthy individuals. To assess neural responses to reward, punishment and learning of reward contingencies, we employed a probabilistic reversal learning task during functional magnetic resonance imaging. Extrinsic and intrinsic motivation were assessed with a self-report questionnaire. Rewarding trials fostered activation in the medial orbitofrontal cortex and anterior cingulate gyrus (ACC) as well as the amygdala and nucleus accumbens, whereas for punishment an increased neural response was observed in the medial and inferior prefrontal cortex, the superior parietal cortex and the insula. High extrinsic motivation was positively correlated to increased neural responses to reward in the ACC, amygdala and putamen, whereas a negative relationship between intrinsic motivation and brain activation in these brain regions was observed. These findings show that motivational orientation indeed modulates the responsiveness to reward delivery in major components of the human reward system and therefore extends previous results showing a significant influence of individual differences in reward-related personality traits on the neural processing of reward. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  10. Error amplification to promote motor learning and motivation in therapy robotics.

    Science.gov (United States)

    Shirzad, Navid; Van der Loos, H F Machiel

    2012-01-01

    To study the effects of different feedback error amplification methods on a subject's upper-limb motor learning and affect during a point-to-point reaching exercise, we developed a real-time controller for a robotic manipulandum. The reaching environment was visually distorted by implementing a thirty degrees rotation between the coordinate systems of the robot's end-effector and the visual display. Feedback error amplification was provided to subjects as they trained to learn reaching within the visually rotated environment. Error amplification was provided either visually or through both haptic and visual means, each method with two different amplification gains. Subjects' performance (i.e., trajectory error) and self-reports to a questionnaire were used to study the speed and amount of adaptation promoted by each error amplification method and subjects' emotional changes. We found that providing haptic and visual feedback promotes faster adaptation to the distortion and increases subjects' satisfaction with the task, leading to a higher level of attentiveness during the exercise. This finding can be used to design a novel exercise regimen, where alternating between error amplification methods is used to both increase a subject's motor learning and maintain a minimum level of motivational engagement in the exercise. In future experiments, we will test whether such exercise methods will lead to a faster learning time and greater motivation to pursue a therapy exercise regimen.

  11. Issues in the use of neural networks in information retrieval

    CERN Document Server

    Iatan, Iuliana F

    2017-01-01

    This book highlights the ability of neural networks (NNs) to be excellent pattern matchers and their importance in information retrieval (IR), which is based on index term matching. The book defines a new NN-based method for learning image similarity and describes how to use fuzzy Gaussian neural networks to predict personality. It introduces the fuzzy Clifford Gaussian network, and two concurrent neural models: (1) concurrent fuzzy nonlinear perceptron modules, and (2) concurrent fuzzy Gaussian neural network modules. Furthermore, it explains the design of a new model of fuzzy nonlinear perceptron based on alpha level sets and describes a recurrent fuzzy neural network model with a learning algorithm based on the improved particle swarm optimization method.

  12. Parallelization of learning problems by artificial neural networks. Application in external radiotherapy; Parallelisation de problemes d'apprentissage par des reseaux neuronaux artificiels. Application en radiotherapie externe

    Energy Technology Data Exchange (ETDEWEB)

    Sauget, M

    2007-12-15

    This research is about the application of neural networks in the external radiotherapy domain. The goal is to elaborate a new evaluating system for the radiation dose distributions in heterogeneous environments. The final objective of this work is to build a complete tool kit to evaluate the optimal treatment planning. My first research point concerns the conception of an incremental learning algorithm. The interest of my work is to combine different optimizations specialized in function interpolation and to propose a new algorithm allowing the neural network architecture to change during the learning phase. This algorithm allows minimising the final size of the neural network while keeping good accuracy. The second part of my research is to parallelize the previous incremental learning algorithm. The goal of that work is to increase the speed of the learning step as well as the size of the learned dataset needed in a clinical case. For that, our incremental learning algorithm presents an original data decomposition with overlapping, together with a fault tolerance mechanism. My last research point concerns a fast and accurate algorithm computing the radiation dose deposit in any heterogeneous environment. At the present time, the existing solutions are not optimal. The fast solutions are not accurate and do not give an optimal treatment planning. On the other hand, the accurate solutions are far too slow to be used in a clinical context. Our algorithm answers this problem by bringing rapidity and accuracy. The concept is to use a neural network adequately trained together with a mechanism taking into account environment changes. The advantage of this algorithm is to avoid the use of a complex physical code while keeping good accuracy and reasonable computation times. (author)

  13. Assessing the Potential of Mathematics Textbooks to Promote Deep Learning

    Science.gov (United States)

    Shield, Malcolm; Dole, Shelley

    2013-01-01

    Curriculum documents for mathematics emphasise the importance of promoting depth of knowledge rather than shallow coverage of the curriculum. In this paper, we report on a study that explored the analysis of junior secondary mathematics textbooks to assess their potential to assist in teaching and learning aimed at building and applying deep…

  14. D.E.E.P. Learning: Promoting Informal STEM Learning through a Popular Gaming Platform

    Science.gov (United States)

    Simms, E.; Rohrlick, D.; Layman, C.; Peach, C. L.; Orcutt, J. A.

    2011-12-01

    The research and development of educational games, and the study of the educational value of interactive games in general, have lagged far behind efforts for games created for the purpose of entertainment. But evidence suggests that digital simulations and games have the "potential to advance multiple science learning goals, including motivation to learn science, conceptual understanding, science process skills, understanding of the nature of science, scientific discourse and argumentation, and identification with science and science learning." (NRC, 2011). It is also generally recognized that interactive digital games have the potential to promote the development of valuable learning and life skills, including data processing, decision-making, critical thinking, planning, communication and collaboration (Kirriemuir and MacFarlane, 2006). Video games are now played in 67% of American households (ESA, 2010), and across a broad range of ages, making them a potentially valuable tool for Science, Technology, Engineering and Mathematics (STEM) learning among the diverse audiences associated with informal science education institutions (ISEIs; e.g., aquariums, museums, science centers). We are attempting to capitalize on this potential by developing games based on the popular Microsoft Xbox360 gaming platform and the free Microsoft XNA game development kit. The games, collectively known as Deep-sea Extreme Environment Pilot (D.E.E.P.), engage ISEI visitors in the exploration and understanding of the otherwise remote deep-sea environment. Players assume the role of piloting a remotely-operated vehicle (ROV) to explore ocean observing systems and hydrothermal vent environments, and are challenged to complete science-based objectives in order to earn points under timed conditions. The current games are intended to be relatively brief visitor experiences (on the order of several minutes) that support complementary exhibits and programming, and promote interactive visitor

  15. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    Science.gov (United States)

    Nitta, Tohru

    2017-10-01

    We present a theoretical analysis of singular points of artificial deep neural networks, resulting in providing deep neural network models having no critical points introduced by a hierarchical structure. It is considered that such deep neural network models have good nature for gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks having no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called avoidant neural network).

  16. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks.

    Science.gov (United States)

    Xia, Peng; Hu, Jie; Peng, Yinghong

    2017-10-25

    A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals that are influenced by all kinds of factors. In order to overcome the negative effects of variability in signals, the proposed model employs the deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed to time-frequency frames as the input to the model. The limb movement is estimated by the model that is trained with the gradient descent and backpropagation procedure. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs can improve the model performance compared with using CNNs alone. The model of deep architecture is promising in EMG decoding and optimization of network structures can increase the accuracy and robustness. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
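
    The preprocessing step described above, transforming raw EMG into time-frequency frames before they are fed to the CNN front-end, can be sketched as follows (the window and hop sizes, and the synthetic signal, are illustrative assumptions, not the paper's values):

```python
import numpy as np

def emg_to_tf_frames(emg, win=64, hop=32):
    """Split a 1-D EMG channel into overlapping windows and take the
    magnitude spectrum of each -> a (frames x freq_bins) image that a
    CNN front-end can consume. Window/hop sizes are illustrative."""
    n = 1 + (len(emg) - win) // hop
    frames = np.stack([emg[i*hop : i*hop + win] for i in range(n)])
    frames *= np.hanning(win)                   # taper each window
    return np.abs(np.fft.rfft(frames, axis=1))  # shape (n, win//2 + 1)

# Synthetic 1-second "EMG" at 1 kHz: noise plus a 50 Hz burst.
rng = np.random.default_rng(1)
t = np.arange(1000) / 1000.0
sig = 0.1 * rng.normal(size=1000)
sig[400:600] += np.sin(2 * np.pi * 50 * t[400:600])

tf = emg_to_tf_frames(sig)
print(tf.shape)  # (30, 33)
```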

  17. Development of neural network simulating power distribution of a BWR fuel bundle

    International Nuclear Information System (INIS)

    Tanabe, A.; Yamamoto, T.; Shinfuku, K.; Nakamae, T.

    1992-01-01

    A neural network model is developed to stand in for a precise nuclear physics analysis code in quick scoping survey calculations. The relation between enrichment and the local power distribution of BWR fuel bundles was learned by a two-layer neural network (ENET). A newer model also accounts for a burnable neutron absorber (gadolinia) added to several fuel rods to decrease the initial reactivity of a fresh bundle: a second-stage, three-layer neural network (GNET) is added on top of the first-stage network ENET and learns the difference in local power distribution caused by gadolinia. With this method it becomes possible to survey the gradients of the sigmoid functions and the backpropagation constants in reasonable time. Using 99 learning patterns at zero burnup, a good error convergence curve is obtained after many trials. The model simulates unlearned cases almost as well as the learned cases, and its computing time is about 100 times shorter than that of the precise analysis model. (author)

  18. Age-related difference in the effective neural connectivity associated with probabilistic category learning

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Eun Jin; Cho, Sang Soo; Kim, Hee Jung; Bang, Seong Ae; Park, Hyun Soo; Kim, Yu Kyeong; Kim, Sang Eun [Seoul National Univ. College of Medicine, Seoul (Korea, Republic of)

    2007-07-01

    Although it is well known that explicit memory is affected by the deleterious changes of the aging brain, the effect of aging on implicit memory, such as probabilistic category learning (PCL), is not clear. To identify the effect of aging on the neural interactions underlying successful PCL, we investigated the neural substrates of PCL and the age-related changes of the neural network between these brain regions. 23 young (age 25 ± 2 y; 11 males) and 14 elderly (67 ± 3 y; 7 males) healthy subjects underwent FDG PET during a resting state and a 150-trial weather prediction (WP) task. Correlations between the WP hit rates and regional glucose metabolism were assessed using SPM2 (P<0.05 uncorrected). For path analysis, seven brain regions (bilateral middle frontal gyri and putamen, left fusiform gyrus, anterior cingulate and right parahippocampal gyri) were selected based on the results of the correlation analysis. Model construction and path analysis were done with AMOS 5.0. The elderly had significantly lower total hit rates than the young (P<0.005). In the correlation analysis, both groups showed similar metabolic correlations in frontal and striatal areas, but the correlations in the medial temporal lobe (MTL) differed by group. In the path analysis, the functional network of the constructed model was accepted (χ² = 0.80, P = 0.67) and proved to be significantly different between groups (χ²diff(37) = 142.47, P<0.005). Systematic comparisons of each path revealed that the frontal cross-callosal and the frontal-to-parahippocampal connections were most responsible for the model differences (P<0.05). For successful PCL, the elderly recruit the basal ganglia implicit memory system, but their MTL recruitment differs from that of the young. The inadequate MTL correlation pattern in the elderly may be caused by changes in the neural pathways related to explicit memory. These neural changes can explain the decreased PCL performance of elderly subjects.

  19. A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning.

    Science.gov (United States)

    Kappel, David; Legenstein, Robert; Habenschuss, Stefan; Hsieh, Michael; Maass, Wolfgang

    2018-01-01

    Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After good computational performance has been reached, these changes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.
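
    As a loose illustration of the core idea, reward-modulated drift combined with spontaneous synapse-autonomous noise, here is a toy sketch (all constants and the crude reward signal are invented for illustration and are far simpler than the model in the paper):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy "synapse-autonomous" dynamics: each potential synapse has a latent
# parameter theta and is functional only while theta > 0.  Updates combine
# a reward-modulated drift with spontaneous diffusion.
n_syn = 200
theta = rng.normal(0.0, 1.0, size=n_syn)
target = rng.choice([0.0, 1.0], size=n_syn)   # hypothetical task-relevant wiring

lr, noise = 0.05, 0.1
for step in range(2000):
    active = (theta > 0).astype(float)
    reward_grad = target - active             # crude reward signal (illustrative)
    theta += lr * reward_grad + noise * rng.normal(size=n_syn) * np.sqrt(lr)

# Despite never-ending noise, the network mostly holds the rewarded wiring.
match = np.mean((theta > 0) == (target > 0))
print(match)
```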

  20. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  1. Lack of promoter IV-driven BDNF transcription results in depression-like behavior.

    Science.gov (United States)

    Sakata, K; Jin, L; Jha, S

    2010-10-01

    Transcription of Bdnf is controlled by multiple promoters, in which promoter IV contributes significantly to activity-dependent Bdnf transcription. We have generated promoter IV mutant mice [brain-derived neurotrophic factor (BDNF)-KIV] in which promoter IV-driven expression of BDNF is selectively disrupted by inserting a green fluorescent protein (GFP)-STOP cassette within the Bdnf exon IV locus. BDNF-KIV animals exhibited depression-like behavior as shown by the tail suspension test (TST), sucrose preference test (SPT) and learned helplessness test (LHT). In addition, BDNF-KIV mice showed reduced activity in the open field test (OFT) and reduced food intake in the novelty-suppressed feeding test (NSFT). The mutant mice did not display anxiety-like behavior in the light and dark box test and elevated plus maze tests. Interestingly, the mutant mice showed defective response inhibition in the passive avoidance test (PAT) even though their learning ability was intact when measured with the active avoidance test (AAT). These results suggest that promoter IV-dependent BDNF expression plays a critical role in the control of mood-related behaviors. This is the first study that directly addressed the effects of endogenous promoter-driven expression of BDNF in depression-like behavior. © 2010 The Authors. Genes, Brain and Behavior © 2010 Blackwell Publishing Ltd and International Behavioural and Neural Genetics Society.

  2. Recruitment and Consolidation of Cell Assemblies for Words by Way of Hebbian Learning and Competition in a Multi-Layer Neural Network.

    Science.gov (United States)

    Garagnani, Max; Wennekers, Thomas; Pulvermüller, Friedemann

    2009-06-01

    Current cognitive theories postulate either localist representations of knowledge or fully overlapping, distributed ones. We use a connectionist model that closely replicates known anatomical properties of the cerebral cortex and neurophysiological principles to show that Hebbian learning in a multi-layer neural network leads to memory traces (cell assemblies) that are both distributed and anatomically distinct. Taking the example of word learning based on action-perception correlation, we document the mechanisms underlying the emergence of these assemblies, especially (i) the recruitment of neurons and consolidation of connections defining the kernel of the assembly, along with (ii) the pruning of the cell assembly's halo (consisting of very weakly connected cells). We found that, whereas a covariance-based learning rule led to significant overlap and merging of assemblies, a neurobiologically grounded synaptic plasticity rule with fixed LTP/LTD thresholds produced minimal overlap and prevented merging, exhibiting competitive learning behaviour. Our results are discussed in light of current theories of language and memory. As our simulations with neurobiologically realistic neural networks demonstrate the spontaneous emergence of lexical representations that are both cortically dispersed and anatomically distinct, both localist and distributed cognitive accounts receive partial support.
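
    The fixed-threshold plasticity rule credited above with competitive behaviour can be illustrated with a toy sketch (the threshold values, learning rate and activity vectors are invented for illustration; the actual model is far more detailed):

```python
import numpy as np

def hebb_fixed_threshold(w, pre, post, theta_ltp=0.7, theta_ltd=0.2,
                         lr=0.05, w_max=1.0):
    """Toy plasticity rule with fixed LTP/LTD thresholds (values illustrative):
    potentiate when both pre- and postsynaptic activity exceed the LTP
    threshold; depress when the presynaptic cell is active (>= theta_ltp)
    but the postsynaptic cell stays at or below the LTD threshold."""
    ltp = (pre >= theta_ltp) & (post >= theta_ltp)
    ltd = (pre >= theta_ltp) & (post <= theta_ltd)
    w = w + lr * ltp - lr * ltd
    return np.clip(w, 0.0, w_max)

w = np.full(4, 0.5)
pre  = np.array([1.0, 1.0, 1.0, 0.0])
post = np.array([1.0, 0.1, 0.5, 1.0])

for _ in range(5):
    w = hebb_fixed_threshold(w, pre, post)

print(w)  # co-active synapse strengthened, pre-only synapse weakened
```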

  3. Conceptual Tutoring Software for Promoting Deep Learning: A Case Study

    Science.gov (United States)

    Stott, Angela; Hattingh, Annemarie

    2015-01-01

    The paper presents a case study of the use of conceptual tutoring software to promote deep learning of the scientific concept of density among 50 final year pre-service student teachers in a natural sciences course in a South African university. Individually-paced electronic tutoring is potentially an effective way of meeting the students' varied…

  4. Generation of artificial accelerograms using neural networks for data of Iran

    International Nuclear Information System (INIS)

    Bargi, Kh.; Loux, C.; Rohani, H.

    2002-01-01

    A new method for generating artificial earthquake accelerograms from response spectra using neural networks was proposed by Ghaboussi and Lin in 1997. In this paper the methodology is extended and enhanced for data from Iran. For this purpose, 40 Iranian acceleration records were first chosen; then an RBF neural network, called a generalized regression neural network (GRNN), learns the inverse mapping directly from the response spectrum to the Discrete Cosine Transform of the accelerogram. The Discrete Cosine Transform is used as an assisting device to extract the frequency-domain content. Training the network is straightforward, and a generalized regression neural network learns the mapping in a few seconds. Outputs are presented to demonstrate the performance of this method and show its capabilities.
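
    The pipeline described, response spectrum in, DCT coefficients of the accelerogram out, via a GRNN, can be sketched as below. Everything here is synthetic and illustrative: the GRNN is implemented as plain Nadaraya-Watson kernel regression, the DCT is built as an explicit orthonormal matrix, and random vectors stand in for real spectra and records.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis, built explicitly instead of via a library call."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.cos(np.pi * (m + 0.5) * k / n) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def grnn_predict(X_train, Y_train, x, sigma=0.5):
    """Generalized regression NN = kernel-weighted average of training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w @ Y_train / w.sum()

rng = np.random.default_rng(2)
n_rec, n_spec, n_pts = 40, 8, 32   # 40 records, as in the abstract
C = dct_matrix(n_pts)

# Synthetic stand-ins: "response spectra" (inputs) and accelerograms (targets).
spectra = rng.normal(size=(n_rec, n_spec))
accels  = rng.normal(size=(n_rec, n_pts))
dct_coeffs = accels @ C.T          # train on DCT coefficients, not raw signals

pred_coeffs = grnn_predict(spectra, dct_coeffs, spectra[0], sigma=0.3)
pred_accel  = C.T @ pred_coeffs    # inverse of the orthonormal DCT
print(pred_accel.shape)            # (32,)
```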

  5. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  6. A Deep Learning Algorithm of Neural Network for the Parameterization of Typhoon-Ocean Feedback in Typhoon Forecast Models

    Science.gov (United States)

    Jiang, Guo-Qing; Xu, Jing; Wei, Jun

    2018-04-01

    Two algorithms based on machine learning neural networks are proposed—the shallow learning (S-L) and deep learning (D-L) algorithms—that can potentially be used in atmosphere-only typhoon forecast models to provide flow-dependent typhoon-induced sea surface temperature cooling (SSTC) for improving typhoon predictions. The major challenge for existing SSTC algorithms in forecast models is how to accurately predict the SSTC induced by an upcoming typhoon, which requires information not only from historical data but, more importantly, also from the target typhoon itself. The S-L algorithm consists of a single layer of neurons with mixed atmospheric and oceanic factors. Such a structure is found to be unable to represent the physical typhoon-ocean interaction correctly: it tends to produce an unstable SSTC distribution, in which any perturbation may change both the SSTC pattern and its strength. The D-L algorithm extends the neural network to a 4 × 5 neuron matrix with atmospheric and oceanic factors separated into different layers of neurons, so that the machine learning can determine the roles of atmospheric and oceanic factors in shaping the SSTC. It therefore produces a stable crescent-shaped SSTC distribution, with its large-scale pattern determined mainly by atmospheric factors (e.g., winds) and its small-scale features by oceanic factors (e.g., eddies). Sensitivity experiments reveal that the D-L algorithm reduces maximum wind intensity errors by 60-70% for four case-study simulations, compared to their atmosphere-only model runs.
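
    The key structural idea, keeping atmospheric and oceanic factors in separate layers rather than mixing them in one layer, can be sketched with a toy two-branch network (all sizes, inputs and weights below are invented placeholders, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(7)

def dl_sstc_sketch(atmos, ocean, params):
    """Toy two-branch network: atmospheric and oceanic factors are kept in
    separate layers (as in the D-L idea described above) and merged only at
    the output. Sizes and the merge rule are illustrative."""
    Wa, Wo, w_out = params
    ha = np.tanh(atmos @ Wa)                  # atmospheric branch
    ho = np.tanh(ocean @ Wo)                  # oceanic branch
    return np.concatenate([ha, ho]) @ w_out   # predicted SSTC (scalar)

n_atm, n_ocn, n_hid = 4, 5, 5
params = (rng.normal(size=(n_atm, n_hid)),
          rng.normal(size=(n_ocn, n_hid)),
          rng.normal(size=2 * n_hid))

atmos = rng.normal(size=n_atm)   # e.g. winds, translation speed (hypothetical)
ocean = rng.normal(size=n_ocn)   # e.g. mixed-layer depth, eddies (hypothetical)
print(dl_sstc_sketch(atmos, ocean, params))   # a single SSTC value
```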

  7. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan; Naous, Rawan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.

    2015-01-01

    Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards

  8. Promotion of critical thinking in e-learning: a qualitative study on the experiences of instructors and students

    Science.gov (United States)

    Gharib, Mitra; Zolfaghari, Mitra; Mojtahedzadeh, Rita; Mohammadi, Aeen; Gharib, Atoosa

    2016-01-01

    Background With the increasing popularity of e-learning programs, educational stakeholders are attempting to promote critical thinking in the virtual education system. This study aimed to explore the experiences of both the instructors and the students about critical thinking promotion within the virtual education system. Methods This qualitative study recruited the instructors and students from four academic disciplines provided by the Virtual School of Tehran University of Medical Sciences (Tehran, Iran). All programs were master’s degree programs and utilized a blended (combination of e-learning and face to face) training. Semistructured interviews with the participants were used to collect data. Results The participants had a variety of experiences about how to promote critical thinking. These experiences were conceptualized in four main themes, namely, instructional design, educational leadership and management, local evidence, and belief systems. Conclusion The present study clarified the factors affecting critical thinking promotion in e-learning. Not only the instructors but also the educational designers and leaders can benefit from our findings to improve the quality of virtual education programs and promote critical thinking. PMID:27217807

  9. Promotion of critical thinking in e-learning: a qualitative study on the experiences of instructors and students.

    Science.gov (United States)

    Gharib, Mitra; Zolfaghari, Mitra; Mojtahedzadeh, Rita; Mohammadi, Aeen; Gharib, Atoosa

    2016-01-01

    With the increasing popularity of e-learning programs, educational stakeholders are attempting to promote critical thinking in the virtual education system. This study aimed to explore the experiences of both the instructors and the students about critical thinking promotion within the virtual education system. This qualitative study recruited the instructors and students from four academic disciplines provided by the Virtual School of Tehran University of Medical Sciences (Tehran, Iran). All programs were master's degree programs and utilized a blended (combination of e-learning and face to face) training. Semistructured interviews with the participants were used to collect data. The participants had a variety of experiences about how to promote critical thinking. These experiences were conceptualized in four main themes, namely, instructional design, educational leadership and management, local evidence, and belief systems. The present study clarified the factors affecting critical thinking promotion in e-learning. Not only the instructors but also the educational designers and leaders can benefit from our findings to improve the quality of virtual education programs and promote critical thinking.

  10. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  11. Identification-based chaos control via backstepping design using self-organizing fuzzy neural networks

    International Nuclear Information System (INIS)

    Peng Yafu; Hsu, C.-F.

    2009-01-01

    This paper proposes an identification-based adaptive backstepping control (IABC) for chaotic systems. The IABC system comprises a neural backstepping controller and a robust compensation controller. The neural backstepping controller, containing a self-organizing fuzzy neural network (SOFNN) identifier, is the principal controller, and the robust compensation controller is designed to dispel the effect of the minimum approximation error introduced by the SOFNN identifier. The SOFNN identifier estimates the chaotic dynamic function online through the structure and parameter learning phases of the fuzzy neural network. The structure learning phase grows and prunes fuzzy rules, so the SOFNN identifier avoids the time-consuming trial-and-error procedure of determining the neural structure in advance. The parameter learning phase adjusts the interconnection weights of the neural network to achieve favorable approximation performance. Finally, simulation results verify that the proposed IABC achieves favorable tracking performance.

  12. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.
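
    The setting analyzed here, a reservoir network trained to hold fixed point attractors read out linearly, can be illustrated with a minimal echo-state sketch (the network size, gain, target value and regularization are arbitrary choices for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
W = rng.normal(size=(N, N)) / np.sqrt(N) * 0.5   # recurrent weights, subcritical gain
w_fb = rng.normal(size=N)                        # feedback weights from the output

# Target: a fixed point at which the trained readout holds z* = 0.5.
z_star = 0.5

# At a fixed point x* = tanh(W x* + w_fb z*); solve by fixed-point iteration
# (the subcritical gain makes the map contracting).
x = np.zeros(N)
for _ in range(1000):
    x = np.tanh(W @ x + w_fb * z_star)

# Train the readout by (rank-one) ridge regression so that w_out . x* = z*.
reg = 1e-8
w_out = x * z_star / (x @ x + reg)

print(abs(w_out @ x - z_star) < 1e-6)  # the readout reproduces the fixed point
```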

  13. Local Dynamics in Trained Recurrent Neural Networks

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-01

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  14. Cooperating attackers in neural cryptography.

    Science.gov (United States)

    Shacham, Lanir N; Klein, Einat; Mislovaty, Rachel; Kanter, Ido; Kinzel, Wolfgang

    2004-06-01

    A successful attack strategy in neural cryptography is presented. The neural cryptosystem, based on synchronization of neural networks by mutual learning, has recently been shown to be secure under different attack strategies. The success of the advanced attacker presented here, called the "majority-flipping attacker," does not decay with the parameters of the model. This attacker's outstanding success is due to its use of a group of attackers that cooperate throughout the synchronization process, unlike any other known attack strategy. An analytical description of this attack is also presented, and it fits the results of simulations.
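
    The mutual-learning cryptosystem referred to here is typically a tree parity machine (TPM) protocol. The sketch below shows only the two-party synchronization step, not the majority-flipping attack; the parameters K, N, L are illustrative, and the step cap is a safety bound rather than part of the protocol.

```python
import numpy as np

K, N, L = 3, 8, 3          # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(4)

def tpm_output(w, x):
    """Hidden-unit signs and their product (the TPM's public output bit)."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = 1
    return sigma, int(np.prod(sigma))

wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))

steps = 0
while not np.array_equal(wA, wB) and steps < 20000:
    x = rng.choice([-1, 1], size=(K, N))   # public random input, seen by both
    sA, tA = tpm_output(wA, x)
    sB, tB = tpm_output(wB, x)
    if tA == tB:  # Hebbian update only on agreement, only in agreeing units
        for w, s, t in ((wA, sA, tA), (wB, sB, tB)):
            mask = (s == t)
            w[mask] = np.clip(w[mask] + x[mask] * t, -L, L)
    steps += 1

print(np.array_equal(wA, wB))  # True once the machines have synchronized
```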

  15. A personal connection: Promoting positive attitudes towards teaching and learning.

    Science.gov (United States)

    Lujan, Heidi L; DiCarlo, Stephen E

    2017-09-01

    Students' attitudes towards teaching and learning must be addressed with the same seriousness and effort as we address content. Establishing a personal connection and addressing our students' basic psychological needs will produce positive attitudes towards teaching and learning and develop life-long learners. It will also promote constructive student-teacher relationships that have a profound influence on our students' approach towards school. To begin this process, consider the major tenets of the Self-Determination Theory. The Self-Determination Theory of human motivation focuses on our students' innate psychological needs and the degree to which an individual's behavior is self-motivated and self-determined. Faculty can satisfy these innate psychological needs by addressing our students' desire for relatedness, competence and autonomy. Relatedness refers to our students' need to feel connected to others, to be a member of a group, to have a sense of communion and to develop close relationships with others. Competence is believing our students can succeed, challenging them to do so and imparting that belief in them. Autonomy involves considering the perspectives of the student, providing relevant information and opportunities for choice, and supporting students in initiating and regulating their own behaviors. Establishing a personal connection and addressing our students' basic psychological needs will improve our teaching, inspire and engage our students and promote positive attitudes towards teaching and learning while reducing competition and increasing compassion. These are important goals because unless students are inspired and motivated and have positive attitudes towards teaching and learning, our efforts will fail to meet their full potential. Anat Sci Educ 10: 503-507. © 2017 American Association of Anatomists.

  16. Memory in Neural Networks and Glasses

    NARCIS (Netherlands)

    Heerema, M.

    2000-01-01

    The thesis models a neural network in a way which, at essential points, is biologically realistic. In a biological context, the changes of the synapses of the neural network are most often described by what is called `Hebb's learning rule'. On careful analysis it is, in fact, nothing but a

  17. PROMOTING INCIDENTAL VOCABULARY LEARNING THROUGH VERBAL DRAMATIZATION OF WORDS

    Directory of Open Access Journals (Sweden)

    Looi-Chin Ch’ng

    2014-12-01

    Full Text Available Despite the fact that explicit teaching of vocabulary is often practised in English as a Second Language (ESL) classrooms, it has proven to be rather ineffective, largely because words are not taught in context. This has prompted the increasing use of the incidental vocabulary learning approach, which emphasises repeated readings as a source of vocabulary learning. By adopting this approach, this study aims to investigate students’ ability to learn vocabulary incidentally via verbal dramatization of written texts. In this case, readers’ theatre (RT) is used as a way to allow learners to engage in active reading so as to promote vocabulary learning. A total of 160 diploma students participated in this case study and they were divided equally into two groups, namely the classroom reading (CR) and RT groups. A proficiency test was first conducted to determine their vocabulary levels. Based on the test results, a story was selected as the reading material for the two groups. The CR group read the story in a normal reading lesson in class, while the RT group was required to verbally dramatize the text through a readers’ theatre activity. Then, a post-test based on vocabulary levels was carried out and the results were compared. The findings revealed that incidental learning was more apparent in the RT group, whose ability to learn words from the higher levels was noticeable through higher accuracy scores. Although not conclusive, this study has demonstrated the potential of using readers’ theatre as a form of incidental vocabulary learning activity in ESL settings.

  18. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Ammar Daskin

    2018-01-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the pha...

  19. White blood cells identification system based on convolutional deep neural learning networks.

    Science.gov (United States)

    Shahin, A I; Guo, Yanhui; Amin, K M; Sharawi, Amr A

    2017-11-16

    White blood cells (WBCs) differential counting yields valuable information about human health and disease. Currently developed automated cell morphology equipment performs differential counts based on blood smear image analysis. Previous identification systems for WBCs consist of successive dependent stages: pre-processing, segmentation, feature extraction, feature selection, and classification. There is a real need to employ deep learning methodologies so that the performance of previous WBCs identification systems can be increased. Classifying small limited datasets through deep learning systems is a major challenge and should be investigated. In this paper, we propose a novel identification system for WBCs based on deep convolutional neural networks. Two methodologies based on transfer learning are followed: transfer learning based on deep activation features, and fine-tuning of existing deep networks. Deep activation features are extracted from several pre-trained networks and employed in a traditional identification system. Moreover, a novel end-to-end convolutional deep architecture called "WBCsNet" is proposed and built from scratch. Finally, a limited balanced WBCs dataset classification is performed through the WBCsNet as a pre-trained network. During our experiments, three different public WBCs datasets (2551 images) have been used, which contain 5 healthy WBCs types. The overall system accuracy achieved by the proposed WBCsNet is 96.1%, which is higher than that of the different transfer learning approaches or even the previous traditional identification system. We also present feature visualizations of the WBCsNet activations, which show a stronger response than those of the pre-trained networks. In conclusion, a novel WBCs identification system based on deep learning theory is proposed, and the high-performance WBCsNet can be employed as a pre-trained network. Copyright © 2017. Published by Elsevier B.V.
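
    "Transfer learning based on deep activation features" means using a frozen pre-trained network as a feature extractor and training only a light classifier on top. The toy sketch below substitutes a fixed random projection for the real pre-trained CNN and uses synthetic two-class data, so everything except the overall pattern is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for a frozen pre-trained network: a fixed random projection + ReLU.
# (In the paper this would be activations from a real pre-trained CNN.)
W_frozen = rng.normal(size=(64, 256))

def deep_activation_features(images):
    """Extract 'activation features' from flattened images (frozen weights)."""
    return np.maximum(images @ W_frozen, 0.0)

# Synthetic two-class data: 8x8 "cell images" whose mean intensity differs.
X0 = rng.normal(0.0, 1.0, size=(50, 64))
X1 = rng.normal(1.0, 1.0, size=(50, 64))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

F = deep_activation_features(X)

# Lightweight classifier on top: nearest class centroid in feature space.
c0, c1 = F[y == 0].mean(axis=0), F[y == 1].mean(axis=0)
pred = (np.linalg.norm(F - c1, axis=1) < np.linalg.norm(F - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print(acc > 0.9)
```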

  20. Higher-order neural network software for distortion invariant object recognition

    Science.gov (United States)

    Reid, Max B.; Spirkovska, Lilly

    1991-01-01

    The state-of-the-art in pattern recognition for such applications as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which performs the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network, and does not have to be learned. Only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.
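
    The built-in invariance of third-order networks comes from forming features over triples of input points that depend only on quantities unchanged by translation, scaling and in-plane rotation, such as the triangle's interior angles. The sketch below is an illustrative reconstruction of that idea (the point set, bin count and feature layout are invented, not the software's actual design):

```python
import numpy as np
from itertools import combinations

def triangle_angle_features(points, bins=6):
    """Third-order features: for every triple of feature points, bin the
    triangle's two smallest interior angles. Angles are unchanged by
    translation, scaling and in-plane rotation, so the feature vector is
    distortion invariant by construction."""
    hist = np.zeros((bins, bins))
    for a, b, c in combinations(points, 3):
        # Interior angles via the law of cosines.
        la, lb, lc = (np.linalg.norm(b - c), np.linalg.norm(a - c),
                      np.linalg.norm(a - b))
        A = np.arccos(np.clip((lb**2 + lc**2 - la**2) / (2 * lb * lc), -1, 1))
        B = np.arccos(np.clip((la**2 + lc**2 - lb**2) / (2 * la * lc), -1, 1))
        C = np.pi - A - B
        s = np.sort([A, B, C])
        i, j = int(s[0] / np.pi * bins), int(s[1] / np.pi * bins)
        hist[min(i, bins - 1), min(j, bins - 1)] += 1
    return hist

pts = np.array([[0., 0.], [2., 1.], [1., 3.], [4., 4.]])

def similarity(pts, scale, angle, shift):
    """Apply a scale + in-plane rotation + translation to the point set."""
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    return scale * pts @ R.T + shift

f0 = triangle_angle_features(pts)
f1 = triangle_angle_features(similarity(pts, 2.5, 0.7, np.array([5., -3.])))
print(np.array_equal(f0, f1))  # identical features under scale/rotation/shift
```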

  1. Neutron spectrometry and dosimetry by means of evolutive neural networks

    International Nuclear Information System (INIS)

    Ortiz R, J.M.; Martinez B, M.R.; Vega C, H.R.

    2008-01-01

    Artificial neural networks and genetic algorithms are two relatively new areas of research that have attracted growing interest in recent years. Both models are inspired by nature; however, neural networks are concerned with the learning of a single individual (phenotypic learning), while evolutionary algorithms are concerned with the adaptation of a population to a changing environment (genotypic learning). Recently, neural network technology has been applied successfully in the nuclear sciences, mainly in neutron spectrometry and dosimetry. The structure (network topology) and the learning parameters of a neural network are factors that contribute significantly to its performance; however, researchers in this area typically select the network parameters by trial and error, which produces neural networks with poor performance and low generalization capacity. The reviewed literature shows that evolutionary algorithms, seen as search techniques, make it possible to evolve and optimize different properties of neural networks, such as the initialization of the synaptic weights, the network architecture, or the training algorithm, without human intervention. The objective of the present work is to analyze the intersection of neural networks and evolutionary algorithms: how the latter can aid the design and training of a neural network, that is, the good selection of its structural and learning parameters, improving its generalization capacity so that it can efficiently reconstruct neutron spectra and calculate equivalent doses from the count rates of a Bonner sphere

  2. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    Science.gov (United States)

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  3. TRIGA control rod position and reactivity transient Monitoring by Neural Networks

    International Nuclear Information System (INIS)

    Rosa, R.; Palomba, M.; Sepielli, M.

    2008-01-01

    Plant sensor drift or malfunction and operator actions in nuclear reactor control can be supported by on-line sensor monitoring and data validation through soft computing. On-line recalibration can often avoid manual calibration or replacement of a drifting component. Digital signal processing requires prompt response to modified conditions. Artificial Neural Networks (ANN) and fuzzy logic ensure prompt response, a link between field measurements and physical system behaviour, interpretation of incoming data, and detection of discrepancies due to mis-calibration or sensor faults. An ANN (Artificial Neural Network) is a system based on the operation of biological neural networks. Although computing is advancing day by day, there are certain tasks that a program written for a common microprocessor is unable to perform. A software implementation of an ANN has both pros and cons. Pros: a neural network can perform tasks that a linear program cannot; when an element of the neural network fails, the network can continue by virtue of its parallel nature; a neural network learns and does not need to be reprogrammed; it can be implemented in any application. Cons: the architecture of a neural network differs from that of microprocessors and therefore needs to be emulated; it requires high processing time for large networks; and the network needs training to operate. Three training possibilities exist: supervised learning, in which the network is trained by providing input and matching output patterns; unsupervised learning, in which input patterns are not classified a priori and the system must develop its own representation of the input stimuli; and reinforcement learning, an intermediate form of the above two, in which the learning machine performs an action on the environment and receives a feedback response from it. Two TRIGA ANN applications are considered: control rod position and fuel temperature. The outcome obtained in this

  4. Trust as commodity: social value orientation affects the neural substrates of learning to cooperate.

    Science.gov (United States)

    Lambert, Bruno; Declerck, Carolyn H; Emonds, Griet; Boone, Christophe

    2017-04-01

    Individuals differ in their motives and strategies to cooperate in social dilemmas. These differences are reflected by an individual's social value orientation: proselfs are strategic and motivated to maximize self-interest, while prosocials are more trusting and value fairness. We hypothesize that when deciding whether or not to cooperate with a random member of a defined group, proselfs, more than prosocials, adapt their decisions based on past experiences: they 'learn' instrumentally to form a base-line expectation of reciprocity. We conducted an fMRI experiment where participants (19 proselfs and 19 prosocials) played 120 sequential prisoner's dilemmas against randomly selected, anonymous and returning partners who cooperated 60% of the time. Results indicate that cooperation levels increased over time, but that the rate of learning was steeper for proselfs than for prosocials. At the neural level, caudate and precuneus activation were more pronounced for proselfs relative to prosocials, indicating a stronger reliance on instrumental learning and self-referencing to update their trust in the cooperative strategy. © The Author (2017). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
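
    The instrumental-learning account described above can be sketched with a simple prediction-error (Rescorla-Wagner style) update of the expected reciprocity. The learning rates assigned to the two groups below are purely illustrative, chosen only to show a steeper versus a shallower learning curve under the study's 60% cooperation schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
outcomes = (rng.random(120) < 0.6).astype(float)  # partner cooperates 60% of trials

def expectation_trace(alpha):
    # v tracks the expected probability that the partner reciprocates;
    # each trial nudges it toward the observed outcome by rate alpha
    v, trace = 0.5, []
    for o in outcomes:
        v += alpha * (o - v)
        trace.append(v)
    return trace

proself = expectation_trace(alpha=0.2)    # steeper instrumental learning
prosocial = expectation_trace(alpha=0.05) # slower, trust-based updating
```

    Both traces drift toward the partner's true cooperation rate, but the high-alpha trace does so in fewer trials, mirroring the steeper learning curve reported for proselfs.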

  5. A novel low-voltage low-power analogue VLSI implementation of neural networks with on-chip back-propagation learning

    Science.gov (United States)

    Carrasco, Manuel; Garde, Andres; Murillo, Pilar; Serrano, Luis

    2005-06-01

    In this paper a novel design and implementation of a VLSI analogue neural net based on a Multi-Layer Perceptron (MLP) with an on-chip Back-Propagation (BP) learning algorithm, suitable for the resolution of classification problems, is described. In order to implement a general and programmable analogue architecture, the design has been carried out in a hierarchical way: the net has been divided into synapse blocks and neuron blocks, providing an easy method for analysis. These blocks basically consist of simple cells, which are mainly the activation functions (NAF), derivatives (DNAF), multipliers and weight-update circuits. The analogue design is based on current-mode translinear techniques using MOS transistors working in the weak-inversion region in order to reduce both the supply voltage and the power consumption. Moreover, with the purpose of minimizing noise, offset and even-order distortion, the topologies are fully differential and balanced. The circuit, named ANNE (Analogue Neural NEt), has been prototyped and characterized as a proof of concept in CMOS AMI-0.5A technology, occupying a total area of 2.7 mm². The chip includes two versions of neural nets with the on-chip BP learning algorithm, a 2-1 and a 2-2-1 implementation, respectively. The proposed nets have been experimentally tested using supply voltages from 2.5 V to 1.8 V, which is suitable for single-cell lithium-ion battery supply applications. Experimental results of both implementations included in ANNE exhibit good performance in solving classification problems. These results have been compared with other analogue VLSI implementations of neural nets published in the literature, demonstrating that our proposal is very efficient in terms of occupied area and power consumption.
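
    For reference, the 2-2-1 MLP with back-propagation that the chip implements in analogue hardware can be written down in a few lines of software. This is a generic textbook sketch of the algorithm, not the circuit's behaviour; the XOR task, learning rate and seed are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR-style classification, the classic test for a 2-2-1 perceptron
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)   # input -> hidden (2-2)
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)   # hidden -> output (2-1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss():
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return float(((out - y) ** 2).mean())

initial = loss()
lr = 2.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # back-propagate the squared error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

final = loss()
```

    The on-chip version replaces these matrix operations with translinear multipliers and the NAF/DNAF cells named above.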

  6. Barrier Function-Based Neural Adaptive Control With Locally Weighted Learning and Finite Neuron Self-Growing Strategy.

    Science.gov (United States)

    Jia, Zi-Jun; Song, Yong-Duan

    2017-06-01

    This paper presents a new approach to construct neural adaptive control for uncertain nonaffine systems. By integrating locally weighted learning with a barrier Lyapunov function (BLF), a novel control design method is presented to systematically address two critical issues in the neural network (NN) control field: one is how to fulfill the compact-set precondition for NN approximation, and the other is how to use a varying rather than a fixed NN structure to improve the functionality of NN control. A BLF is exploited to ensure that the NN inputs remain bounded during the entire system operation. To account for system nonlinearities, a neuron self-growing strategy is proposed to guide the process of adding new neurons to the system, resulting in a self-adjustable NN structure with better learning capabilities. It is shown that the number of neurons needed to accomplish the control task is finite, and that better performance can be obtained with fewer neurons compared with traditional methods. The salient feature of the proposed method also lies in the continuity of the control action everywhere. Furthermore, the resulting control action is smooth almost everywhere except for a few time instants at which new neurons are added. A numerical example illustrates the effectiveness of the proposed approach.
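
    The neuron self-growing idea, detached from the control setting, can be illustrated with a radial-basis approximator that adds a neuron only when its error at the current input exceeds a threshold. All names, widths and thresholds below are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Minimal self-growing sketch: grow a neuron at x when the local error is large
centers, weights = [], []
width, threshold = 0.5, 0.2

def predict(x):
    if not centers:
        return 0.0
    phi = np.exp(-((np.array(centers) - x) ** 2) / (2 * width ** 2))
    return float(np.array(weights) @ phi)

def observe(x, target):
    err = target - predict(x)
    if abs(err) > threshold:      # grow: place a new neuron at x
        centers.append(x)         # center on the poorly-approximated input
        weights.append(err)       # weight cancels the local residual

f = np.sin
for x in np.linspace(0.0, 2 * np.pi, 200):
    observe(x, f(x))

n_neurons = len(centers)
errs = [abs(f(x) - predict(x)) for x in np.linspace(0.0, 2 * np.pi, 50)]
```

    Because growth only triggers on large residuals, the neuron count stays finite even as more data arrive, which is the property the paper proves for its controller.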

  7. SORN: a self-organizing recurrent neural network

    Directory of Open Access Journals (Sweden)

    Andreea Lazar

    2009-10-01

    Full Text Available Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success.
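
    A loose sketch of how the three forms of local plasticity named above can coexist in a small binary recurrent network: an STDP-like strengthening of recently used synapses, synaptic normalization of each unit's incoming weights, and intrinsic plasticity of the firing thresholds. The network size, rates and update rules are illustrative simplifications, not the SORN model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N, target_rate, eta = 50, 0.1, 0.01

W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)   # sparse recurrent weights
np.fill_diagonal(W, 0.0)
T = rng.random(N) * 0.5                               # per-unit firing thresholds
x = (rng.random(N) < 0.5).astype(float)               # binary network state

for step in range(500):
    x_new = (W @ x - T > 0).astype(float)
    # STDP-like rule: strengthen an existing synapse w_ij when unit j fired
    # and unit i fires on the next step
    W += eta * np.outer(x_new, x) * (W > 0)
    # synaptic normalization: rescale each unit's incoming weights to sum to one
    s = W.sum(axis=1, keepdims=True)
    W /= np.where(s > 0, s, 1.0)
    # intrinsic plasticity: nudge thresholds toward a target firing rate
    T += eta * (x_new - target_rate)
    x = x_new

rowsums = W.sum(axis=1)
```

    Normalization and intrinsic plasticity act as the homeostatic brakes that keep the STDP-driven dynamics in a "healthy regime" suitable for learning.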

  8. Improved transformer protection using probabilistic neural network ...

    African Journals Online (AJOL)

    user

    secure and dependable protection for power transformers. Owing to its superior learning and generalization capabilities Artificial. Neural Network (ANN) can considerably enhance the scope of WI method. ANN approach is faster, robust and easier to implement than the conventional waveform approach. The use of neural ...

  9. Improved Extension Neural Network and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2014-01-01

    Full Text Available Extension neural network (ENN) is a new neural network that is a combination of extension theory and artificial neural network (ANN). The learning algorithm of ENN is based on supervised learning. One of the important issues in the field of classification and recognition with ENN is how to achieve the best possible classifier with a small number of labeled training data. Training data selection is an effective approach to solve this issue. In this work, in order to improve the supervised learning performance and expand the engineering application range of ENN, we use a novel data selection method based on shadowed sets to refine the training data set of ENN. Firstly, we use a clustering algorithm to label the data and induce shadowed sets. Then, in the framework of shadowed sets, the samples located around each cluster center (core data) and at the borders between clusters (boundary data) are selected as training data. Lastly, we use the selected data to train ENN. Compared with traditional ENN, the proposed improved ENN (IENN) has better performance. Moreover, IENN is independent of the supervised learning algorithm and the initial labeled data. Experimental results verify the effectiveness and applicability of our proposed work.
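
    The core/boundary selection step can be sketched as follows: cluster the data, then keep the samples near a cluster center ("core") and those near the decision border ("boundary"), dropping the rest. The synthetic data, k-means labeling, and cutoff values are illustrative assumptions standing in for the paper's shadowed-set induction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two overlapping synthetic clusters
A = rng.normal(0.0, 0.5, size=(50, 2))
B = rng.normal(1.5, 0.5, size=(50, 2))
X = np.vstack([A, B])

# Simple 2-means (Lloyd's algorithm) to label the data; seeded with one
# point from each region so neither cluster starts empty
C = np.stack([X[0], X[-1]])
for _ in range(20):
    d = np.linalg.norm(X[:, None] - C[None], axis=2)
    lab = d.argmin(axis=1)
    C = np.stack([X[lab == k].mean(axis=0) for k in range(2)])

# In the spirit of shadowed sets: keep "core" samples near a center and
# "boundary" samples near the border between clusters; drop the shadow
d = np.linalg.norm(X[:, None] - C[None], axis=2)
core = d.min(axis=1) < 0.4
boundary = np.abs(d[:, 0] - d[:, 1]) < 1.0
selected = core | boundary
```

    Training on `selected` rather than `X` is the refinement that the IENN applies before running its supervised learning algorithm.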

  10. Learning to Promote Health at an Emergency Care Department: Identifying Expansive and Restrictive Conditions

    Science.gov (United States)

    Gustavsson, Maria; Ekberg, Kerstin

    2015-01-01

    This article reports on the findings of a planned workplace health promotion intervention, and the aim is to identify conditions that facilitated or restricted the learning to promote health at an emergency care department in a Swedish hospital. The study had a longitudinal design, with interviews before and after the intervention and follow-up…

  11. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, the Japanese Yen/US Dollar. The effects of two learning methods, back-propagation and a genetic algorithm, with the neural network topology and other parameters fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited to the forecasting of foreign exchange rates. The neural networks and the genetic algorithm were programmed using MATLAB®.
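
    Using a genetic algorithm as the learning method means evolving the network weights instead of following error gradients. A minimal sketch, reduced to a single linear neuron and a toy target so the selection/crossover/mutation loop stays visible; the population size, mutation scale and fitness function are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: fit y = 2x - 1 with one linear neuron y = w*x + b, where (w, b)
# are found by a genetic algorithm rather than back-propagation
X = np.linspace(-1.0, 1.0, 20)
y = 2 * X - 1

def fitness(ind):
    w, b = ind
    return -np.mean((w * X + b - y) ** 2)   # higher is better

pop = rng.normal(size=(30, 2))              # population of weight vectors
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]             # selection: top 10
    children = []
    for _ in range(len(pop)):
        pa, pb = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(2) < 0.5, pa, pb)   # uniform crossover
        child = child + rng.normal(scale=0.1, size=2)   # Gaussian mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
```

    In the paper's hybrid setting, the same loop runs over the full weight vector of the feed-forward forecaster, with forecasting error as the (negated) fitness.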

  12. Neural network recognition of mammographic lesions

    International Nuclear Information System (INIS)

    Oldham, W.J.B.; Downes, P.T.; Hunter, V.

    1987-01-01

    A method for recognition of mammographic lesions through the use of neural networks is presented. Neural networks have exhibited the ability to learn the shape and internal structure of patterns. Digitized mammograms containing circumscribed and stellate lesions were used to train a feedforward synchronous neural network that self-organizes into stable attractor states. Encoding of data for submission to the network was accomplished by performing a fractal analysis of the digitized image, which results in a scale-invariant representation of the lesions. Results are discussed.
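
    A standard way to perform such a fractal analysis is box counting: count the occupied boxes of a binary image at several scales and estimate the fractal dimension from the log-log slope. This is a generic sketch of that technique, not the paper's encoding pipeline; the test shapes are illustrative.

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    # Count occupied boxes at several scales and fit log N vs. log(1/s);
    # the slope estimates the fractal dimension of the binary pattern.
    counts = []
    h, w = img.shape
    for s in sizes:
        blocks = img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

square = np.ones((64, 64))            # a filled region has dimension ~2
dim_square = box_counting_dimension(square)

line = np.zeros((64, 64))             # a one-pixel line has dimension ~1
line[32, :] = 1
dim_line = box_counting_dimension(line)
```

    Because the estimate depends only on how occupancy scales across box sizes, it is invariant to the overall scale of the lesion, which is what makes it attractive as a network input encoding.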

  13. Wavelet-enhanced convolutional neural network: a new idea in a deep learning paradigm.

    Science.gov (United States)

    Savareh, Behrouz Alizadeh; Emami, Hassan; Hajiabadi, Mohamadreza; Azimi, Seyed Majid; Ghafoori, Mahyar

    2018-05-29

    Manual brain tumor segmentation is a challenging and laborious task, motivating the use of machine learning techniques. One of the machine learning techniques that has been given much attention is the convolutional neural network (CNN). The performance of the CNN can be enhanced by combining it with other data analysis tools such as the wavelet transform. In this study, one of the well-known implementations of the CNN, a fully convolutional network (FCN), was used in brain tumor segmentation and its architecture was enhanced by the wavelet transform. In this combination, the wavelet transform was used as a complementary and enhancing tool for the CNN in brain tumor segmentation. Comparing the performance of the basic FCN architecture against the wavelet-enhanced form revealed a remarkable superiority of the enhanced architecture in brain tumor segmentation tasks. Using enhancing tools such as the wavelet transform and other mathematical functions can improve the performance of CNNs in image processing tasks such as segmentation and classification.
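
    The wavelet side of such a combination is often a single-level 2-D Haar decomposition, which splits an image into an approximation band and three detail bands that can be fed to the network alongside the raw pixels. A minimal Haar sketch (the averaging/differencing convention is one common choice; the paper's exact wavelet and integration point are not specified here):

```python
import numpy as np

def haar2d(img):
    # One level of the 2-D Haar wavelet transform: returns the approximation
    # (LL) and the three detail sub-bands (LH, HL, HH), each half-resolution.
    a = (img[::2] + img[1::2]) / 2.0      # row pairs: average
    d = (img[::2] - img[1::2]) / 2.0      # row pairs: difference
    LL = (a[:, ::2] + a[:, 1::2]) / 2.0
    LH = (a[:, ::2] - a[:, 1::2]) / 2.0
    HL = (d[:, ::2] + d[:, 1::2]) / 2.0
    HH = (d[:, ::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

img = np.arange(16.0).reshape(4, 4)
LL, LH, HL, HH = haar2d(img)
```

    The detail bands emphasize edges at two orientations plus diagonals, which is the kind of complementary structure a segmentation CNN can exploit.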

  14. Ferulic acid promotes survival and differentiation of neural stem cells to prevent gentamicin-induced neuronal hearing loss.

    Science.gov (United States)

    Gu, Lintao; Cui, Xinhua; Wei, Wei; Yang, Jia; Li, Xuezhong

    2017-11-15

    Neural stem cells (NSCs) have exhibited promising potential in therapies against neuronal hearing loss. Ferulic acid (FA) has been widely reported to enhance neurogenic differentiation of different stem cells. We investigated the role of FA in promoting NSC transplant therapy to prevent gentamicin-induced neuronal hearing loss. NSCs were isolated from mouse cochlear tissues to establish in vitro culture, which were then treated with FA. The survival and differentiation of NSCs were evaluated. Subsequently, neurite outgrowth and excitability of the in vitro neuronal network were assessed. Gentamicin was used to induce neuronal hearing loss in mice, in the presence and absence of FA, followed by assessments of auditory brainstem response (ABR) and distortion product optoacoustic emissions (DPOAE) amplitude. FA promoted survival, neurosphere formation and differentiation of NSCs, as well as neurite outgrowth and excitability of in vitro neuronal network. Furthermore, FA restored ABR threshold shifts and DPOAE in gentamicin-induced neuronal hearing loss mouse model in vivo. Our data, for the first time, support potential therapeutic efficacy of FA in promoting survival and differentiation of NSCs to prevent gentamicin-induced neuronal hearing loss. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Tensor Basis Neural Network v. 1.0 (beta)

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-28

    This software package can be used to build, train, and test a neural network machine learning model. The neural network architecture is specifically designed to embed tensor invariance properties by enforcing that the model predictions sit on an invariant tensor basis. This neural network architecture can be used in developing constitutive models for applications such as turbulence modeling, materials science, and electromagnetism.
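
    The key architectural idea, predictions forced onto an invariant tensor basis, can be sketched in a few lines: the network only outputs scalar coefficients, and the final prediction is a linear combination of basis tensors, so any property shared by the basis (here symmetry and zero trace, chosen for illustration) automatically holds for the output. The two-tensor basis and the tanh "network" below are assumptions for demonstration, not the package's actual basis.

```python
import numpy as np

rng = np.random.default_rng(0)

S = rng.normal(size=(3, 3))
S = 0.5 * (S + S.T)                       # a symmetric input tensor
I = np.eye(3)

# A small symmetric, trace-free tensor basis built from S
T = [S - I * np.trace(S) / 3.0,
     S @ S - I * np.trace(S @ S) / 3.0]

def predict(invariants, weights):
    # Stand-in for the neural network: it only produces scalar coefficients
    g = np.tanh(weights @ invariants)
    # The prediction is confined to the span of the basis tensors
    return sum(gi * Ti for gi, Ti in zip(g, T))

inv = np.array([np.trace(S), np.trace(S @ S)])
b = predict(inv, rng.normal(size=(2, 2)))

assert np.allclose(b, b.T)                # symmetry inherited from the basis
assert abs(np.trace(b)) < 1e-10           # trace-free by construction
```

    No matter how the coefficient network is trained, the embedded invariance cannot be violated, which is the point of the architecture for constitutive modeling.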

  16. Enhanced differentiation of neural stem cells to neurons and promotion of neurite outgrowth by oxygen-glucose deprivation.

    Science.gov (United States)

    Wang, Qin; Yang, Lin; Wang, Yaping

    2015-06-01

    Stroke has become the leading cause of mortality worldwide. Hypoxic or ischemic insults are crucial factors mediating the neural damage in the brain tissue of stroke patients. Neural stem cells (NSCs) have been recognized as a promising tool for the treatment of ischemic stroke and other neurodegenerative diseases due to their inducible pluripotency. In this study, we aim to mimick the cerebral hypoxic-ischemic injury in vitro using oxygen-glucose deprivation (OGD) strategy, and evaluate the effects of OGD on the NSC's neural differentiation, as well as the differentiated neurite outgrowth. Our data showed that NSCs under the short-term 2h OGD treatment are able to maintain cell viability and the capability to form neurospheres. Importantly, this moderate OGD treatment promotes NSC differentiation to neurons and enhances the performance of the mature neuronal networks, accompanying increased neurite outgrowth of differentiated neurons. However, long-term 6h and 8h OGD exposures in NSCs lead to decreased cell survival, reduced differentiation and diminished NSC-derived neurite outgrowth. The expressions of neuron-specific microtubule-associated protein 2 (MAP-2) and growth associated protein 43 (GAP-43) are increased by short-term OGD treatments but suppressed by long-term OGD. Overall, our results demonstrate that short-term OGD exposure in vitro induces differentiation of NSCs while maintaining their proliferation and survival, providing valuable insights of adopting NSC-based therapy for ischemic stroke and other neurodegenerative disorders. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Daskin, Ammar

    2016-01-01

    The learning process for multi layered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow-Hoff formula, do not change the eigenvectors of the weight matrix while flatting the eigenvalues. In infinity, this iterative formulas result in terms formed by the principal components of the weight matrix: i.e., the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase...

  18. A Quantum Implementation Model for Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ammar Daskin

    2018-02-01

    Full Text Available The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flatting the eigenvalues. In infinity, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase estimation algorithm is known to provide speedups over the conventional algorithms for the eigenvalue-related problems. Combining the quantum amplitude amplification with the phase estimation algorithm, a quantum implementation model for artificial neural networks using the Widrow–Hoff learning rule is presented. The complexity of the model is found to be linear in the size of the weight matrix. This provides a quadratic improvement over the classical algorithms. Quanta 2018; 7: 7–18.

  19. The Neural Foundations of Reaction and Action in Aversive Motivation.

    Science.gov (United States)

    Campese, Vincent D; Sears, Robert M; Moscarello, Justin M; Diaz-Mataix, Lorenzo; Cain, Christopher K; LeDoux, Joseph E

    2016-01-01

    Much of the early research in aversive learning concerned motivation and reinforcement in avoidance conditioning and related paradigms. When the field transitioned toward the focus on Pavlovian threat conditioning in isolation, this paved the way for the clear understanding of the psychological principles and neural and molecular mechanisms responsible for this type of learning and memory that has unfolded over recent decades. Currently, avoidance conditioning is being revisited, and with what has been learned about associative aversive learning, rapid progress is being made. We review, below, the literature on the neural substrates critical for learning in instrumental active avoidance tasks and conditioned aversive motivation.

  20. Deep learning for computational chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Goh, Garrett B. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Hodas, Nathan O. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Vishnu, Abhinav [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354

    2017-03-08

    The rise and fall of artificial neural networks is well documented in the scientific literature of both the fields of computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.

  1. Deep learning for computational chemistry.

    Science.gov (United States)

    Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav

    2017-06-15

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure activity relationship, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.

  2. The alcoholic brain: neural bases of impaired reward-based decision-making in alcohol use disorders.

    Science.gov (United States)

    Galandra, Caterina; Basso, Gianpaolo; Cappa, Stefano; Canessa, Nicola

    2018-03-01

    Neuroeconomics is providing insights into the neural bases of decision-making in normal and pathological conditions. In the neuropsychiatric domain, this discipline investigates how abnormal functioning of neural systems associated with reward processing and cognitive control promotes different disorders, and whether such evidence may inform treatments. This endeavor is crucial when studying different types of addiction, which share a core promoting mechanism in the imbalance between impulsive subcortical neural signals associated with immediate pleasurable outcomes and inhibitory signals mediated by a prefrontal reflective system. The resulting impairment in behavioral control represents a hallmark of alcohol use disorders (AUDs), a chronic relapsing disorder characterized by excessive alcohol consumption despite devastating consequences. This review aims to summarize available magnetic resonance imaging (MRI) evidence on reward-related decision-making alterations in AUDs, and to envision possible future research directions. We review functional MRI (fMRI) studies using tasks involving monetary rewards, as well as MRI studies relating decision-making parameters to neurostructural gray- or white-matter metrics. The available data suggest that excessive alcohol exposure affects neural signaling within brain networks underlying adaptive behavioral learning via the implementation of prediction errors. Namely, weaker ventromedial prefrontal cortex activity and altered connectivity between ventral striatum and dorsolateral prefrontal cortex likely underpin a shift from goal-directed to habitual actions which, in turn, might underpin compulsive alcohol consumption and relapsing episodes despite adverse consequences. Overall, these data highlight abnormal fronto-striatal connectivity as a candidate neurobiological marker of impaired choice in AUDs. Further studies are needed, however, to unveil its implications in the multiple facets of decision-making.

  3. Nuclear power plant monitoring method by neural network and its application to actual nuclear reactor

    International Nuclear Information System (INIS)

    Nabeshima, Kunihiko; Suzuki, Katsuo; Shinohara, Yoshikuni; Tuerkcan, E.

    1995-11-01

    In this paper, an anomaly detection method for nuclear power plant monitoring and its program are described, using a neural network approach based on the deviation between measured signals and the output signals of a neural network model. The neural network used in this study is a three-layered auto-associative network with 12 inputs/outputs, and the backpropagation algorithm is adopted for learning. Furthermore, to obtain a better dynamical model of the reactor plant, a new learning technique was developed in which the learning process of the present neural network is divided into initial and adaptive learning modes. The test results at an actual nuclear reactor show that the neural network plant monitoring system is successful in detecting, in real time, the symptoms of small anomalies over a wide power range including reactor start-up, shut-down and stationary operation. (author)
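
    The residual-based detection principle, flag an anomaly when a signal vector deviates from what an auto-associative model can reconstruct from normal-operation data, can be sketched with a principal-subspace reconstruction standing in for the trained network. The 12-signal synthetic plant data, the 3-factor structure, and the threshold rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal operation: 12 correlated "plant signals" driven by 3 latent factors
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 12))
normal = latent @ mixing + 0.05 * rng.normal(size=(500, 12))

# Stand-in for the auto-associative network: reconstruct each signal vector
# from the principal subspace learned on normal data
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
P = Vt[:3].T @ Vt[:3]          # projector onto the learned 3-D subspace

def residual(x):
    # Deviation between the measured vector and its model reconstruction
    return np.linalg.norm((x - mean) - (x - mean) @ P)

threshold = max(residual(x) for x in normal) * 1.1

# A drifted sensor produces a deviation the model cannot reconstruct
faulty = normal[0].copy()
faulty[4] += 3.0
```

    The adaptive learning mode described in the abstract would correspond to periodically refreshing the model (here, the subspace) as plant conditions evolve.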

  4. DGCR8 Promotes Neural Progenitor Expansion and Represses Neurogenesis in the Mouse Embryonic Neocortex

    Directory of Open Access Journals (Sweden)

    Nadin Hoffmann

    2018-04-01

    Full Text Available DGCR8 and DROSHA are the minimal functional core of the Microprocessor complex, essential for the biogenesis of canonical microRNAs and for the processing of other RNAs. Conditional deletion of Dgcr8 and Drosha in the murine telencephalon indicated that these proteins exert crucial functions in corticogenesis. The identification of mechanisms of DGCR8- or DROSHA-dependent regulation of gene expression in conditional knockout mice is often complicated by massive apoptosis. Here, to investigate DGCR8 functions in the amplification/differentiation of neural progenitor cells (NPCs) in corticogenesis, we overexpress Dgcr8 in the mouse telencephalon by in utero electroporation (IUEp). We find that DGCR8 promotes the expansion of NPC pools and represses neurogenesis, in the absence of apoptosis, thus overcoming the usual limitations of the Dgcr8 knockout-based approach. Interestingly, DGCR8 selectively promotes basal progenitor amplification at later developmental stages, entailing intriguing implications for neocortical expansion in evolution. Finally, despite a 3- to 5-fold increase of the DGCR8 level in the mouse telencephalon, the composition, target preference and function of the DROSHA-dependent Microprocessor complex remain unaltered. Thus, we propose that DGCR8-dependent modulation of gene expression in corticogenesis is more complex than previously known, and possibly DROSHA-independent.

  5. Neural Architectures for Control

    Science.gov (United States)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on an MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
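    As a flavor of how a CMAC stores and adapts its estimates, here is a generic tile-coding function approximator in the CMAC spirit; the tiling counts, learning rate, and target function are illustrative assumptions, not details of the software described above:

```python
import math
import random

# CMAC-style tile coding: several overlapping coarse tilings of [0, 1);
# a prediction is the sum of one weight per tiling, adapted by an
# LMS-style correction shared across the active tiles.
TILINGS, TILES, LR = 8, 10, 0.2
weights = [[0.0] * (TILES + 1) for _ in range(TILINGS)]

def active_tiles(x):
    """The active tile index in each (offset) tiling."""
    return [int(x * TILES + t / TILINGS) for t in range(TILINGS)]

def predict(x):
    return sum(weights[t][i] for t, i in enumerate(active_tiles(x)))

def train(x, target):
    correction = LR * (target - predict(x)) / TILINGS
    for t, i in enumerate(active_tiles(x)):
        weights[t][i] += correction

random.seed(1)
f = lambda x: math.sin(2 * math.pi * x)
for _ in range(10000):
    x = random.random()
    train(x, f(x))

worst = max(abs(predict(i / 100) - f(i / 100)) for i in range(100))
print("worst-case error on a grid:", round(worst, 3))
```

    Because each update touches only a handful of weights, training is cheap enough for the on-line, real-time use described above.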

  6. Mode Choice Modeling Using Artificial Neural Networks

    OpenAIRE

    Edara, Praveen Kumar

    2003-01-01

    Artificial intelligence techniques have produced excellent results in many diverse fields of engineering. Techniques such as neural networks and fuzzy systems have found their way into transportation engineering. In recent years, neural networks have been used instead of regression techniques for travel demand forecasting purposes. The basic reason lies in the fact that neural networks are able to capture complex relationships and learn from examples, and are also able to adapt when new data becom...

  7. Potential usefulness of an artificial neural network for assessing ventricular size

    International Nuclear Information System (INIS)

    Fukuda, Haruyuki; Nakajima, Hideyuki; Usuki, Noriaki; Saiwai, Shigeo; Miyamoto, Takeshi; Inoue, Yuichi; Onoyama, Yasuto.

    1995-01-01

    An artificial neural network approach was applied to assess ventricular size from computed tomograms. Three-layer, feed-forward neural networks with a back-propagation algorithm were designed to distinguish between three degrees of enlargement of the ventricles on the basis of the patient's age and six items of computed tomographic information. Data for training and testing the neural network were created from computed tomograms of brains selected at random from daily examinations. Four radiologists decided subjectively, by mutual consent and on the basis of their experience, whether the ventricles were within normal limits, slightly enlarged, or enlarged for the patient's age. The data for training were obtained from 38 patients; the data for testing were obtained from 47 other patients. The performance of the trained neural network was evaluated by the rate of correct answers on the test data. The ratio of valid solutions on the test data obtained from the trained neural networks was more than 90% for all conditions in this study. The solutions were completely valid in the neural networks with two or three units in the hidden layer after 2,200 learning iterations, and with two units in the hidden layer after 11,000 learning iterations. The squared error decreased remarkably in the range from 0 to 500 learning iterations, and was close to constant beyond two thousand learning iterations. The neural networks with a hidden layer of two or three units showed high decision performance. These preliminary results strongly suggest that the neural network approach has potential utility in computer-aided estimation of ventricular enlargement. (author)

  8. Predictive Acoustic Tracking with an Adaptive Neural Mechanism

    DEFF Research Database (Denmark)

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    model of the lizard peripheral auditory system to extract information regarding sound direction. This information is utilised by a neural machinery to learn the acoustic signal’s velocity through fast and unsupervised correlation-based learning adapted from differential Hebbian learning. This approach...

  9. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
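    The generalized delta rule that NNETS implements can be sketched from scratch; the tiny 2-2-1 sigmoid network and the XOR task below are illustrative, not part of NNETS:

```python
import math
import random

random.seed(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

# 2-2-1 sigmoid network; the last weight of each unit is its bias.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(3)]
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sig(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr, before = 0.7, loss()
for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)           # output-unit delta
        for j in range(2):                   # back-propagated deltas
            dh = dy * W2[j] * h[j] * (1 - h[j])
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            W1[j][2] -= lr * dh
        for j in range(2):
            W2[j] -= lr * dy * h[j]
        W2[2] -= lr * dy

after = loss()
print(f"squared error: {before:.3f} -> {after:.3f}")
```

    Each weight moves down the gradient of the squared error, with the output delta propagated backwards through the hidden layer, exactly the recipe that the predefined back-propagation network wraps up.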

  10. Higgs Machine Learning Challenge 2014

    CERN Document Server

    Olivier, A-P; Bourdarios, C; LAL / Orsay; Goldfarb, S; University of Michigan

    2014-01-01

    High Energy Physics (HEP) has been using Machine Learning (ML) techniques such as boosted decision trees and neural nets since the 90s. These techniques are now routinely used for difficult tasks such as the Higgs boson search. Nevertheless, formal connections between the two research fields are rather scarce, with some exceptions such as the AppStat group at LAL, founded in 2006. In collaboration with INRIA, AppStat promotes interdisciplinary research on machine learning, computational statistics, and high-energy particle and astroparticle physics. We are now exploring new ways to improve the cross-fertilization of the two fields by setting up a data challenge, following in the footsteps of, among others, the astrophysics community (dark matter and galaxy zoo challenges) and neurobiology (connectomics and decoding the human brain). The organizing committee consists of ATLAS physicists and machine learning researchers. The Challenge will run from Monday, 12 May to September 2014.

  11. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference of the approach from existing ones relies on the fact that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. The multiscale operation is simulated by considering different window sizes around a pixel. In the generalization phase the neural network is exposed to indoor, outdoor, and satellite degraded images following the same steps used for the artificial circle images.
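    The "different window sizes around a pixel" idea can be sketched as follows; the window sizes and the toy image are illustrative assumptions:

```python
# Sketch of multiscale feature extraction: for one pixel, gather its
# neighborhoods at several window sizes (scales) into one input vector.
def multiscale_features(img, row, col, sizes=(3, 5, 7)):
    feats = []
    for s in sizes:
        r = s // 2
        for i in range(row - r, row + r + 1):
            for j in range(col - r, col + r + 1):
                # replicate-pad at the image borders
                ii = min(max(i, 0), len(img) - 1)
                jj = min(max(j, 0), len(img[0]) - 1)
                feats.append(img[ii][jj])
    return feats

img = [[(i * 8 + j) / 63 for j in range(8)] for i in range(8)]
vec = multiscale_features(img, 4, 4)
print(len(vec))  # 3*3 + 5*5 + 7*7 = 83
```

    Each pixel thus contributes 83 inputs here, mixing fine local context with progressively coarser surroundings, which is the relational spatial data the perceptron sees.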

  12. RM-SORN: a reward-modulated self-organizing recurrent neural network.

    Science.gov (United States)

    Aswolinskiy, Witali; Pipa, Gordon

    2015-01-01

    Neural plasticity plays an important role in learning and memory. Reward-modulation of plasticity offers an explanation for the ability of the brain to adapt its neural activity to achieve a rewarded goal. Here, we define a neural network model that learns through the interaction of Intrinsic Plasticity (IP) and reward-modulated Spike-Timing-Dependent Plasticity (STDP). IP enables the network to explore possible output sequences, and STDP, modulated by reward, reinforces the creation of the rewarded output sequences. The model is tested on tasks for prediction, recall, non-linear computation, pattern recognition, and sequence generation. It achieves performance comparable to networks trained with supervised learning, while using simple, biologically motivated plasticity rules and rewarding strategies. The results confirm the importance of investigating the interaction of several plasticity rules in the context of reward-modulated learning, and of asking whether reward-modulated self-organization can explain the remarkable capabilities of the brain.
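    The spiking IP/STDP machinery of RM-SORN is beyond a few lines, but the core reward-modulation principle can be sketched with a much simpler rate-based rule, where a Hebbian weight change is gated by reward and exploration comes from noise. Everything below (patterns, rates, network size) is an illustrative assumption, not the paper's model:

```python
import random

random.seed(2)

# Two binary input patterns, each rewarded when a different output unit
# wins; exploration noise plus reward-gated Hebbian updates do the rest.
patterns = [([1.0, 0.0], 0), ([0.0, 1.0], 1)]
w = [[0.0, 0.0], [0.0, 0.0]]   # w[output][input]
lr, baseline = 0.1, 0.5

def act(x, noise=0.5):
    scores = [sum(w[o][i] * x[i] for i in range(2))
              + random.uniform(-noise, noise) for o in range(2)]
    return scores.index(max(scores))

for _ in range(2000):
    x, target = random.choice(patterns)
    a = act(x)
    r = 1.0 if a == target else 0.0
    for i in range(2):                 # reward-modulated Hebbian step
        w[a][i] += lr * (r - baseline) * x[i]

correct = sum(act(x, noise=0.0) == t for x, t in patterns)
print(correct, "of 2 patterns correct")
```

    Weights on rewarded input-output pairings grow while unrewarded ones shrink, so the noisy exploration gradually converges on the rewarded mapping, the same logic that reward-modulated STDP applies at the level of spike timing.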

  13. Intergenerational service learning: to promote active aging, and occupational therapy gerontology practice.

    Science.gov (United States)

    Horowitz, Beverly P; Wong, Stephanie Dapice; Dechello, Karen

    2010-01-01

    Americans are living longer, and the meaning of age has changed, particularly for Boomers and seniors. These demographic changes have economic and social ramifications with implications for health care, including rehabilitation services, and health science education. Service learning is an experiential learning pedagogy that integrates traditional higher education with structured active learning experiences. This article reports on one intergenerational service learning program spanning 3 years. It was designed to facilitate community dialogue on fall prevention and active aging, and to provide intergenerational educational community-based experiences in occupational therapy professional education. The program additionally sought to promote students' understanding of aging and issues related to aging in place, students' professional development and civic engagement, and to encourage students to consider pursuing a career in occupational therapy gerontology practice.

  14. Effect of signal noise on the learning capability of an artificial neural network

    International Nuclear Information System (INIS)

    Vega, J.J.; Reynoso, R.; Calvet, H. Carrillo

    2009-01-01

    Digital Pulse Shape Analysis (DPSA) by artificial neural networks (ANN) is becoming an important tool to extract relevant information from digitized signals in different areas. In this paper, we present systematic evidence of how the concomitant noise that distorts the signals or patterns to be identified by an ANN sets limits on its learning capability. We also present evidence that explains overtraining as a competition between the relevant pattern features, on the one side, and the signal noise, on the other, as the main factor defining the shape of the error surface in weight space and, consequently, determining the steepest-descent path that controls the ANN adaptation process.

  15. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    Science.gov (United States)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

    Reduced Order Modeling (ROM) can be used as a surrogate for prohibitively expensive simulations to model flow behavior over long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial-intelligence-based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short-Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches will be elucidated and further developments discussed.
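    The first half of this pipeline, extracting POD modes and their time coefficients from snapshot data, can be sketched with an SVD; the toy travelling-wave "flow" below is an illustrative assumption, and the LSTM itself would sit on top of the coefficients via any deep learning library:

```python
import numpy as np

# POD by SVD on a toy snapshot matrix: rows are timesteps, columns are
# spatial points. The LSTM step (predicting future time coefficients
# from past ones) would consume `coeffs` as its training sequence.
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 10, 200)
snapshots = (np.outer(np.sin(t), np.sin(x))
             + 0.5 * np.outer(np.cos(3 * t), np.cos(2 * x))
             + 0.01 * np.random.default_rng(0).normal(size=(200, 64)))

U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
modes = Vt[:2]                # dominant spatial POD modes
coeffs = snapshots @ modes.T  # their time coefficients

recon = coeffs @ modes        # low-rank reconstruction of the flow
rel_err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(f"2-mode reconstruction error: {rel_err:.3f}")
```

    Because two modes already capture the toy dynamics, the ROM only has to forecast two coefficient series instead of the full 64-point field, which is what makes the surrogate cheap.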

  16. Multi-Objective Reinforcement Learning-Based Deep Neural Networks for Cognitive Space Communications

    Science.gov (United States)

    Ferreria, Paulo Victor R.; Paffenroth, Randy; Wyglinski, Alexander M.; Hackett, Timothy M.; Bilen, Sven G.; Reinhart, Richard C.; Mortensen, Dale J.

    2017-01-01

    Future communication subsystems of space exploration missions can potentially benefit from software-defined radios (SDRs) controlled by machine learning algorithms. In this paper, we propose a novel hybrid radio resource allocation management control algorithm that integrates multi-objective reinforcement learning and deep artificial neural networks. The objective is to efficiently manage communications system resources by monitoring performance functions with common dependent variables that result in conflicting goals. The uncertainty in the performance of thousands of different possible combinations of radio parameters makes the trade-off between exploration and exploitation in reinforcement learning (RL) much more challenging for future critical space-based missions. Thus, the system should spend as little time as possible on exploring actions, and whenever it explores an action, it should perform at acceptable levels most of the time. The proposed approach enables on-line learning by interactions with the environment and restricts poor resource allocation performance through virtual environment exploration. Improvements in the multi-objective performance can be achieved via transmitter parameter adaptation on a per-packet basis, with poorly predicted performance promptly resulting in rejected decisions. Simulations presented in this work considered the DVB-S2 standard adaptive transmitter parameters and additional ones expected to be present in future adaptive radio systems. Performance results are provided by analysis of the proposed hybrid algorithm when operating across a satellite communication channel from Earth to GEO orbit during clear sky conditions. The proposed approach constitutes part of the core cognitive engine proof-of-concept to be delivered to the NASA Glenn Research Center SCaN Testbed located onboard the International Space Station.

  17. Handwritten Digits Recognition Using Neural Computing

    Directory of Open Access Journals (Sweden)

    Călin Enăchescu

    2009-12-01

    Full Text Available In this paper we present a method for the recognition of handwritten digits and a practical implementation of this method for real-time recognition. A theoretical framework for the neural networks used to classify the handwritten digits is also presented. The classification task is performed using a Convolutional Neural Network (CNN). A CNN is a special type of multi-layer neural network, trained here with an optimized version of the back-propagation learning algorithm. A CNN is designed to recognize visual patterns directly from pixel images with minimal preprocessing, being capable of recognizing patterns with extreme variability (such as handwritten characters) and with robustness to distortions and simple geometric transformations. The main contributions of this paper are the original methods for increasing the efficiency of the learning algorithm by preprocessing the images before the learning process, and a method for increasing precision and performance in real-time applications by removing non-useful information from the background. By combining these strategies we have obtained an accuracy of 96.76%, using as training set the NIST (National Institute of Standards and Technology) database.
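    The two operations that make a CNN "convolutional", local filtering and pooling, can be sketched on plain lists; the 6×6 image and the hand-written edge kernel are illustrative assumptions:

```python
# Minimal sketch of the core CNN operations: valid 2-D convolution
# (really cross-correlation, as in most CNN implementations) and
# 2x2 max pooling, on plain Python lists.
def conv2d(img, k):
    kh, kw = len(k), len(k[0])
    return [[sum(k[u][v] * img[i + u][j + v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def maxpool2x2(fmap):
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A vertical bar in a 6x6 image, probed with a crude vertical-edge kernel.
image = [[1 if 2 <= j <= 3 else 0 for j in range(6)] for _ in range(6)]
edge = [[-1, 0, 1]] * 3
fmap = conv2d(image, edge)    # 4x4 feature map
pooled = maxpool2x2(fmap)     # 2x2 after pooling
print(pooled)                 # [[3, -3], [3, -3]]
```

    In the real network many such kernels are learned by back-propagation rather than written by hand, and the pooling step buys the robustness to small distortions and geometric transformations mentioned above.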

  18. Electroacupuncture in the repair of spinal cord injury: inhibiting the Notch signaling pathway and promoting neural stem cell proliferation

    Directory of Open Access Journals (Sweden)

    Xin Geng

    2015-01-01

    Full Text Available Electroacupuncture for the treatment of spinal cord injury has a good clinical curative effect, but the underlying mechanism is unclear. In our experiments, the spinal cord of adult Sprague-Dawley rats was clamped for 60 seconds. The Dazhui (GV14) and Mingmen (GV4) acupoints of the rats were subjected to electroacupuncture. Enzyme-linked immunosorbent assay revealed that the expression of serum inflammatory factors was apparently downregulated in rat models of spinal cord injury after electroacupuncture. Hematoxylin-eosin staining and immunohistochemistry results demonstrated that electroacupuncture contributed to the proliferation of neural stem cells in the rat injured spinal cord, and suppressed their differentiation into astrocytes. Real-time quantitative PCR and western blot assays showed that electroacupuncture inhibited activation of the Notch signaling pathway induced by spinal cord injury. These findings indicate that electroacupuncture repaired the injured spinal cord by suppressing the Notch signaling pathway and promoting the proliferation of endogenous neural stem cells.

  19. Artificial neural network based approach to transmission lines protection

    International Nuclear Information System (INIS)

    Joorabian, M.

    1999-05-01

    The aim of this paper is to present an accurate fault detection technique for high speed distance protection using artificial neural networks. The feed-forward multi-layer neural network with supervised learning and the common training rule of error back-propagation is chosen for this study. Information available locally at the relay point is passed to a neural network in order for an assessment of the fault location to be made. In practice, however, there is a large amount of information available, and a feature extraction process is required to reduce the dimensionality of the pattern vectors, whilst retaining important information that distinguishes the fault point. The choice of features is critical to the performance of the neural network's learning and operation. A significant feature of this paper is that an artificial neural network has been designed and tested to enhance the precision of the adaptive capabilities for distance protection.

  20. Deep learning classification in asteroseismology using an improved neural network: results on 15 000 Kepler red giants and applications to K2 and TESS data

    Science.gov (United States)

    Hon, Marc; Stello, Dennis; Yu, Jie

    2018-05-01

    Deep learning in the form of 1D convolutional neural networks has previously been shown to be capable of efficiently classifying the evolutionary state of oscillating red giants into red giant branch stars and helium-core burning stars by recognizing visual features in their asteroseismic frequency spectra. We elaborate further on the deep learning method by developing an improved convolutional neural network classifier. To make our method useful for current and future space missions such as K2, TESS, and PLATO, we train classifiers that are able to classify the evolutionary states of the lower-frequency-resolution spectra expected from these missions. Additionally, we provide new classifications for 8633 Kepler red giants, of which 426 have not previously been classified using asteroseismology. This brings the total to 14,983 Kepler red giants classified with our new neural network. We also verify that our classifiers are remarkably robust to suboptimal data, including low signal-to-noise ratios and incorrect training truth labels.
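    The "lower frequency resolution" training data can be mimicked by block-averaging a high-resolution spectrum, roughly as one would degrade a 4-yr Kepler spectrum toward a K2- or TESS-like baseline; the factor and the stand-in spectrum below are illustrative assumptions:

```python
# Sketch of degrading a high-resolution power spectrum to the coarser
# frequency resolution of a shorter time series by block-averaging bins.
def degrade(spectrum, factor):
    """Average consecutive groups of `factor` bins."""
    n = len(spectrum) // factor * factor
    return [sum(spectrum[i:i + factor]) / factor
            for i in range(0, n, factor)]

hi_res = [float(i % 10) for i in range(1000)]  # stand-in for a spectrum
lo_res = degrade(hi_res, 10)

print(len(hi_res), "->", len(lo_res))  # 1000 -> 100
```

    Training the classifier on such degraded inputs is one simple way to make it applicable to the shorter-baseline missions without retraining from mission data that does not yet exist.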