WorldWideScience

Sample records for learning promotes neural

  1. Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks

    KAUST Repository

    Umarov, Ramzan

    2017-02-03

Accurate computational identification of promoters remains a challenge, as these key DNA regulatory regions have variable structures composed of functional motifs that provide gene-specific initiation of transcription. In this paper we utilize Convolutional Neural Networks (CNN) to analyze sequence characteristics of prokaryotic and eukaryotic promoters and build their predictive models. We trained a similar CNN architecture on promoters of five distant organisms: human, mouse, plant (Arabidopsis), and two bacteria (Escherichia coli and Bacillus subtilis). We found that a CNN trained on the sigma70 subclass of Escherichia coli promoters gives excellent classification of promoter and non-promoter sequences (Sn = 0.90, Sp = 0.96, CC = 0.84). The CNN model for Bacillus subtilis promoter identification achieves Sn = 0.91, Sp = 0.95, and CC = 0.86. For human, mouse, and Arabidopsis promoters we employed CNNs to identify two well-known promoter classes (TATA and non-TATA promoters). The CNN models recognize these complex functional regions well. For human promoters, Sn/Sp/CC prediction accuracy reached 0.95/0.98/0.90 for TATA and 0.90/0.98/0.89 for non-TATA promoter sequences, respectively. For Arabidopsis we observed Sn/Sp/CC of 0.95/0.97/0.91 (TATA) and 0.94/0.94/0.86 (non-TATA) promoters. Thus, the developed CNN models, implemented in the CNNProm program, demonstrate the ability of the deep learning approach to grasp complex promoter sequence characteristics and achieve significantly higher accuracy than previously developed promoter prediction programs. We also propose a random substitution procedure to discover positionally conserved promoter functional elements. As the suggested approach does not require knowledge of any specific promoter features, it can easily be extended to identify promoters and other complex functional regions in the sequences of many other, and especially newly sequenced, genomes. 
The CNNProm program is available to run at web server http://www.softberry.com.
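The core operation behind CNN-based promoter models of this kind is a 1-D convolution over one-hot-encoded DNA followed by pooling. The NumPy sketch below is purely illustrative (it is not the CNNProm implementation, and the hand-written TATA-box filter weights are an assumption): a single convolutional filter acts as a learned motif detector whose maximum score over all positions indicates motif presence.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    # L x 4 one-hot encoding of a DNA sequence (columns A, C, G, T)
    m = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        m[i, BASES.index(b)] = 1.0
    return m

def conv_max_score(x, filt):
    # valid 1-D convolution over sequence positions, then global max-pooling
    w = filt.shape[0]
    scores = [np.sum(x[i:i + w] * filt) for i in range(x.shape[0] - w + 1)]
    return max(scores)

# Hypothetical filter rewarding the TATAAA motif (one row per position)
tata_filter = np.array([
    [0, 0, 0, 1],   # T
    [1, 0, 0, 0],   # A
    [0, 0, 0, 1],   # T
    [1, 0, 0, 0],   # A
    [1, 0, 0, 0],   # A
    [1, 0, 0, 0],   # A
], dtype=float)

promoter_like = one_hot("GGCTATAAAGGC")   # contains TATAAA
random_seq    = one_hot("GGCCGCGCGGCC")   # no A/T at all
print(conv_max_score(promoter_like, tata_filter))  # 6.0 (perfect motif match)
print(conv_max_score(random_seq, tata_filter))     # 0.0
```

In a real trained CNN the filter weights are learned from data and many filters are stacked, but the scoring mechanism is the same.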

  2. Developmental song learning as a model to understand neural mechanisms that limit and promote the ability to learn.

    Science.gov (United States)

    London, Sarah E

    2017-11-20

    Songbirds famously learn their vocalizations. Some species can learn continuously, others seasonally, and still others just once. The zebra finch (Taeniopygia guttata) learns to sing during a single developmental "Critical Period," a restricted phase during which a specific experience has profound and permanent effects on brain function and behavioral patterns. The zebra finch can therefore provide fundamental insight into features that promote and limit the ability to acquire complex learned behaviors. For example, what properties permit the brain to come "on-line" for learning? How does experience become encoded to prevent future learning? What features define the brain in receptive compared to closed learning states? This piece will focus on epigenomic, genomic, and molecular levels of analysis that operate on the timescales of development and complex behavioral learning. Existing data will be discussed as they relate to Critical Period learning, and strategies for future studies to more directly address these questions will be considered. Birdsong learning is a powerful model for advancing knowledge of the biological intersections of maturation and experience. Lessons from its study not only have implications for understanding developmental song learning, but also broader questions of learning potential and the enduring effects of early life experience on neural systems and behavior. Copyright © 2017. Published by Elsevier B.V.

  3. Learning from neural control.

    Science.gov (United States)

    Wang, Cong; Hill, David J

    2006-01-01

    One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically-plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.
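The localized approximation property that the paper exploits can be illustrated with a small sketch (a generic least-squares fit with assumed Gaussian RBFs; it is not the adaptive controller from the paper): placing centers along a periodic trajectory lets the network accurately represent the dynamics in that local region.

```python
import numpy as np

# Gaussian RBF features: phi_i(x) = exp(-(x - c_i)^2 / (2 s^2))
def rbf_features(x, centers, s=0.5):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * s ** 2))

# Approximate a function along a periodic orbit (here sin over one period),
# with centers placed along the trajectory -- the "local region" of the paper
centers = np.linspace(0, 2 * np.pi, 12)
x_train = np.linspace(0, 2 * np.pi, 200)
Phi = rbf_features(x_train, centers)
w, *_ = np.linalg.lstsq(Phi, np.sin(x_train), rcond=None)

# the fitted network reproduces the target closely along the orbit
x_test = np.linspace(0.1, 2 * np.pi - 0.1, 50)
err = np.max(np.abs(rbf_features(x_test, centers) @ w - np.sin(x_test)))
print(err < 0.05)  # True: accurate local approximation
```

The paper's contribution is showing that a comparable local approximation emerges from online adaptive control under a partial persistence-of-excitation condition, rather than from an offline fit like this one.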

  4. Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks

    KAUST Repository

    Umarov, Ramzan; Solovyev, Victor

    2017-01-01

    Accurate computational identification of promoters remains a challenge as these key DNA regulatory regions have variable structures composed of functional motifs that provide gene-specific initiation of transcription. In this paper we utilize

  5. Entropy Learning in Neural Network

    Directory of Open Access Journals (Sweden)

    Geok See Ng

    2017-12-01

Full Text Available In this paper, an entropy term is used in the learning phase of a neural network. As learning progresses, more hidden nodes enter saturation. The early creation of such hidden nodes may impair generalisation. Hence, an entropy approach is proposed to dampen the early creation of such nodes. Entropy learning also helps to increase the importance of relevant nodes while dampening the less important ones. At the end of learning, the less important nodes can then be eliminated to reduce the memory requirements of the neural network.
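As a rough illustration of the idea (an assumed formulation, not necessarily the paper's exact cost), the entropy of a sigmoid activation is maximal at 0.5 and near zero when the node is saturated, so rewarding entropy in the loss dampens early saturation:

```python
import numpy as np

def binary_entropy(a, eps=1e-12):
    # entropy of a sigmoid activation: ~log(2) at 0.5, ~0 when saturated
    a = np.clip(a, eps, 1 - eps)
    return -(a * np.log(a) + (1 - a) * np.log(1 - a))

def entropy_regularized_loss(y_true, y_pred, hidden, lam=0.1):
    mse = np.mean((y_true - y_pred) ** 2)
    # subtracting the entropy term rewards unsaturated hidden nodes
    return mse - lam * np.mean(binary_entropy(hidden))

y_true, y_pred = np.array([1.0]), np.array([0.9])
saturated = np.array([0.999, 0.001, 0.998])   # nodes driven into saturation
active    = np.array([0.5, 0.4, 0.6])         # nodes still responsive
print(entropy_regularized_loss(y_true, y_pred, active)
      < entropy_regularized_loss(y_true, y_pred, saturated))  # True
```

With the same prediction error, the network with unsaturated hidden nodes gets the lower regularized loss, which is the pressure against premature saturation that the abstract describes.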

  6. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

Providing a broad but in-depth introduction to neural networks and machine learning in a statistical framework, this book is a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered, with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  7. A peptide mimetic targeting trans-homophilic NCAM binding sites promotes spatial learning and neural plasticity in the hippocampus

    DEFF Research Database (Denmark)

    Kraev, Igor; Henneberger, Christian; Rossetti, Clara

    2011-01-01

    The key roles played by the neural cell adhesion molecule (NCAM) in plasticity and cognition underscore this membrane protein as a relevant target to develop cognitive-enhancing drugs. However, NCAM is a structurally and functionally complex molecule with multiple domains engaged in a variety of ...

  8. Shaping the learning curve: epigenetic dynamics in neural plasticity

    Directory of Open Access Journals (Sweden)

    Zohar Ziv Bronfman

    2014-07-01

    Full Text Available A key characteristic of learning and neural plasticity is state-dependent acquisition dynamics reflected by the non-linear learning curve that links increase in learning with practice. Here we propose that the manner by which epigenetic states of individual cells change during learning contributes to the shape of the neural and behavioral learning curve. We base our suggestion on recent studies showing that epigenetic mechanisms such as DNA methylation, histone acetylation and RNA-mediated gene regulation are intimately involved in the establishment and maintenance of long-term neural plasticity, reflecting specific learning-histories and influencing future learning. Our model, which is the first to suggest a dynamic molecular account of the shape of the learning curve, leads to several testable predictions regarding the link between epigenetic dynamics at the promoter, gene-network and neural-network levels. This perspective opens up new avenues for therapeutic interventions in neurological pathologies.

  9. Rhesus monkey neural stem cell transplantation promotes neural regeneration in rats with hippocampal lesions

    Directory of Open Access Journals (Sweden)

    Li-juan Ye

    2016-01-01

Full Text Available Rhesus monkey neural stem cells are capable of differentiating into neurons and glial cells. Therefore, neural stem cell transplantation can be used to promote functional recovery of the nervous system. Rhesus monkey neural stem cells (1 × 10⁵ cells/μL) were injected into the bilateral hippocampi of rats with hippocampal lesions. Confocal laser scanning microscopy demonstrated that green fluorescent protein-labeled transplanted cells survived and grew well. Transplanted cells were detected not only at the lesion site, but also in the nerve fiber-rich region of the cerebral cortex and corpus callosum. Some transplanted cells differentiated into neurons and glial cells clustering along the ventricular wall, and integrated into the recipient brain. Behavioral tests revealed improved spatial learning and memory ability, indicating that rhesus monkey neural stem cells noticeably improve spatial learning and memory abilities in rats with hippocampal lesions.

  10. Learning in Artificial Neural Systems

    Science.gov (United States)

    Matheus, Christopher J.; Hohensee, William E.

    1987-01-01

    This paper presents an overview and analysis of learning in Artificial Neural Systems (ANS's). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANS's is then described, and compared with classical Machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified, and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.

  11. Adaptive competitive learning neural networks

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2013-11-01

    Full Text Available In this paper, the adaptive competitive learning (ACL neural network algorithm is proposed. This neural network not only groups similar input feature vectors together but also determines the appropriate number of groups of these vectors. This algorithm uses a new proposed criterion referred to as the ACL criterion. This criterion evaluates different clustering structures produced by the ACL neural network for an input data set. Then, it selects the best clustering structure and the corresponding network architecture for this data set. The selected structure is composed of the minimum number of clusters that are compact and balanced in their sizes. The selected network architecture is efficient, in terms of its complexity, as it contains the minimum number of neurons. Synaptic weight vectors of these neurons represent well-separated, compact and balanced clusters in the input data set. The performance of the ACL algorithm is evaluated and compared with the performance of a recently proposed algorithm in the literature in clustering an input data set and determining its number of clusters. Results show that the ACL algorithm is more accurate and robust in both determining the number of clusters and allocating input feature vectors into these clusters than the other algorithm especially with data sets that are sparsely distributed.
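The winner-take-all update at the heart of competitive learning can be sketched in a few lines (a generic sketch with hand-picked initial prototypes; it does not implement the paper's ACL criterion for selecting the number of clusters): only the prototype closest to each input moves toward it.

```python
import numpy as np

rng = np.random.default_rng(2)

def competitive_learning(X, W, eta=0.1, epochs=20):
    # winner-take-all: only the prototype closest to x is updated
    W = W.copy()
    for _ in range(epochs):
        for x in X:
            j = np.argmin(np.linalg.norm(W - x, axis=1))  # winning neuron
            W[j] += eta * (x - W[j])                      # move winner toward x
    return W

# two well-separated clusters around (0, 0) and (3, 3)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
W0 = np.array([[1.0, 1.0], [2.0, 2.0]])   # illustrative initial prototypes
W = competitive_learning(X, W0)
print(np.round(W))  # one prototype settles near each cluster centre
```

The ACL algorithm adds on top of this a criterion that compares clustering structures of different sizes and keeps the most compact, balanced one.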

  12. Continual Learning through Evolvable Neural Turing Machines

    DEFF Research Database (Denmark)

    Lüders, Benno; Schläger, Mikkel; Risi, Sebastian

    2016-01-01

    Continual learning, i.e. the ability to sequentially learn tasks without catastrophic forgetting of previously learned ones, is an important open challenge in machine learning. In this paper we take a step in this direction by showing that the recently proposed Evolving Neural Turing Machine (ENTM...

  13. Neural plasticity of development and learning.

    Science.gov (United States)

    Galván, Adriana

    2010-06-01

    Development and learning are powerful agents of change across the lifespan that induce robust structural and functional plasticity in neural systems. An unresolved question in developmental cognitive neuroscience is whether development and learning share the same neural mechanisms associated with experience-related neural plasticity. In this article, I outline the conceptual and practical challenges of this question, review insights gleaned from adult studies, and describe recent strides toward examining this topic across development using neuroimaging methods. I suggest that development and learning are not two completely separate constructs and instead, that they exist on a continuum. While progressive and regressive changes are central to both, the behavioral consequences associated with these changes are closely tied to the existing neural architecture of maturity of the system. Eventually, a deeper, more mechanistic understanding of neural plasticity will shed light on behavioral changes across development and, more broadly, about the underlying neural basis of cognition. (c) 2010 Wiley-Liss, Inc.

  14. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    Science.gov (United States)

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  15. Psychedelics Promote Structural and Functional Neural Plasticity

    Directory of Open Access Journals (Sweden)

    Calvin Ly

    2018-06-01

Full Text Available Summary: Atrophy of neurons in the prefrontal cortex (PFC plays a key role in the pathophysiology of depression and related disorders. The ability to promote both structural and functional plasticity in the PFC has been hypothesized to underlie the fast-acting antidepressant properties of the dissociative anesthetic ketamine. Here, we report that, like ketamine, serotonergic psychedelics are capable of robustly increasing neuritogenesis and/or spinogenesis both in vitro and in vivo. These changes in neuronal structure are accompanied by increased synapse number and function, as measured by fluorescence microscopy and electrophysiology. The structural changes induced by psychedelics appear to result from stimulation of the TrkB, mTOR, and 5-HT2A signaling pathways and could possibly explain the clinical effectiveness of these compounds. Our results underscore the therapeutic potential of psychedelics and, importantly, identify several lead scaffolds for medicinal chemistry efforts focused on developing plasticity-promoting compounds as safe, effective, and fast-acting treatments for depression and related disorders. Ly et al. demonstrate that psychedelic compounds such as LSD, DMT, and DOI increase dendritic arbor complexity, promote dendritic spine growth, and stimulate synapse formation. These cellular effects are similar to those produced by the fast-acting antidepressant ketamine and highlight the potential of psychedelics for treating depression and related disorders. Keywords: neural plasticity, psychedelic, spinogenesis, synaptogenesis, depression, LSD, DMT, ketamine, noribogaine, MDMA

  16. Windowed active sampling for reliable neural learning

    NARCIS (Netherlands)

    Barakova, E.I; Spaanenburg, L

    The composition of the example set has a major impact on the quality of neural learning. The popular approach is focused on extensive pre-processing to bridge the representation gap between process measurement and neural presentation. In contrast, windowed active sampling attempts to solve these

  17. Machine Learning Topological Invariants with Neural Networks

    Science.gov (United States)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

In this Letter we train neural networks in a supervised manner to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results demonstrate that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
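The discrete winding-number formula that the trained network reportedly rediscovers can be written down directly. The sketch below uses an SSH-like chiral Hamiltonian h(k) = (t1 + t2 cos k, t2 sin k) as a standard textbook example (an assumption, not necessarily the paper's training data):

```python
import numpy as np

def winding_number(hx, hy):
    # discrete winding of the vector (hx, hy) as k sweeps the Brillouin zone:
    # accumulate the unwrapped angle and count full turns around the origin
    theta = np.unwrap(np.arctan2(hy, hx))
    return round((theta[-1] - theta[0]) / (2 * np.pi))

# SSH-like model: h(k) = (t1 + t2*cos k, t2*sin k); winding is 1 when t2 > t1
k = np.linspace(0, 2 * np.pi, 200)
print(winding_number(1.0 + 0.5 * np.cos(k), 0.5 * np.sin(k)))  # 0 (trivial)
print(winding_number(0.5 + 1.0 * np.cos(k), 1.0 * np.sin(k)))  # 1 (topological)
```

The claim of the Letter is that the network's internal weights implement essentially this angle-accumulation computation from local Hamiltonian inputs.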

  18. Learning-parameter adjustment in neural networks

    Science.gov (United States)

    Heskes, Tom M.; Kappen, Bert

    1992-06-01

    We present a learning-parameter adjustment algorithm, valid for a large class of learning rules in neural-network literature. The algorithm follows directly from a consideration of the statistics of the weights in the network. The characteristic behavior of the algorithm is calculated, both in a fixed and a changing environment. A simple example, Widrow-Hoff learning for statistical classification, serves as an illustration.
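Widrow-Hoff (LMS) learning, the paper's illustrative example, updates weights in proportion to the prediction error. A self-contained sketch with a fixed learning parameter (the paper's adjustment algorithm itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Widrow-Hoff (LMS) rule: w <- w + eta * (target - w.x) * x
def lms_train(X, y, eta=0.05, epochs=50):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            w += eta * (t - w @ x) * x
    return w

# noiseless linear target y = 2*x0 - 1*x1; LMS recovers the true weights
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0])
w = lms_train(X, y)
print(np.round(w, 2))  # ≈ [ 2. -1.]
```

The statistics of these per-sample weight updates (their drift and fluctuation) are exactly what the proposed adjustment algorithm uses to tune eta in fixed and changing environments.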

  19. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

The generalized classifier neural network has been introduced as an efficient classifier. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error; minimizing this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic-learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative take continuous values, making it possible to exploit the fast convergence of the logarithmic cost function. Due to this fast convergence, training time is reduced by up to 99.2%, and classification performance may also be improved by up to 60%. According to the test results, the proposed method not only addresses the time requirement problem of the generalized classifier neural network but may also improve classification accuracy. It can be considered an efficient way of reducing the time requirement of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
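The mechanism behind faster convergence with a logarithmic cost can be seen in the output-layer gradients. The comparison below is for a single sigmoid unit and is only illustrative (the paper's exact cost function and network differ): with squared error the gradient carries an extra a(1 - a) factor that vanishes for saturated units, while the logarithmic (cross-entropy-style) cost does not.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradients w.r.t. the pre-activation z for a sigmoid output with target t
def grad_squared(z, t):
    a = sigmoid(z)
    return (a - t) * a * (1 - a)   # a(1-a) factor stalls saturated units

def grad_log(z, t):
    return sigmoid(z) - t          # logarithmic cost: no vanishing factor

z, t = 5.0, 0.0                    # confidently wrong, saturated prediction
print(grad_squared(z, t))          # tiny gradient -> slow correction
print(grad_log(z, t))              # near 1 -> fast correction
```

This gradient gap, applied across many iterations, is what produces the large training-time reductions reported in the abstract.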

  20. Online neural monitoring of statistical learning.

    Science.gov (United States)

    Batterink, Laura J; Paller, Ken A

    2017-05-01

    The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the reaction time (RT) task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
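The entrainment measure described above can be sketched with synthetic data. Everything in this block is an assumption for illustration (syllables at 4 Hz, hence trisyllabic words at 4/3 Hz, and a simulated rather than recorded signal); the point is only how power at the word rate is read off the spectrum.

```python
import numpy as np

# Simulated "neural" signal that follows syllables and, partly, the words
fs, dur = 250, 60                         # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
syll_rate, word_rate = 4.0, 4.0 / 3.0     # trisyllabic words: 1/3 syllable rate
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * syll_rate * t)
          + 0.8 * np.sin(2 * np.pi * word_rate * t)
          + 0.5 * rng.normal(size=t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

def power_at(f):
    # spectral power in the bin closest to frequency f
    return power[np.argmin(np.abs(freqs - f))]

# learning is indexed by a clear spectral peak emerging at the word rate
entrainment = power_at(word_rate) / power_at(syll_rate)
print(power_at(word_rate) > 100 * power_at(word_rate + 0.2))  # True: word-rate peak
```

In the study, the analogous ratio computed from EEG increases with exposure to the structured stream as syllables are bound into word-level composites.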

  1. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  2. Learning drifting concepts with neural networks

    NARCIS (Netherlands)

    Biehl, Michael; Schwarze, Holm

    1993-01-01

    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using

  3. Learning of N-layers neural network

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2005-01-01

Full Text Available In the last decade we can observe an increasing number of applications based on Artificial Intelligence designed to solve problems from different areas of human activity. The reason for so much interest in these technologies is that classical solutions either do not exist or are unsuitable because they lack robustness. They are often used in applications such as Business Intelligence, which make it possible to obtain useful information for high-quality decision-making and to increase competitive advantage. One of the most widespread tools of Artificial Intelligence is the artificial neural network. Its great advantage is relative simplicity and the possibility of self-learning from a set of pattern situations. The algorithm most commonly used for the learning phase is back-propagation of error (BPE). The basis of BPE is minimization of an error function representing the sum of squared errors on the outputs of the neural net over all patterns of the learning set. However, when performing BPE one finds that the handling of the learning factor must be completed by a suitable method; the stability of the learning process and the rate of convergence depend on the method selected. In the article two functions are derived: one for managing the learning process when the error function value is relatively large, and a second for when the value of the error function approaches the global minimum. The aim of the article is to present the BPE algorithm in compact matrix form for multilayer neural networks, the derivation of the learning-factor handling method and the results obtained.
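The compact matrix form of BPE for a two-layer sigmoid network fits in a few lines. The sketch below shows generic matrix-form backprop (not the article's learning-factor functions) and verifies one analytic gradient entry against a numerical difference:

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda z: 1 / (1 + np.exp(-z))

# random patterns and targets for a 3-4-2 network
X = rng.normal(size=(5, 3))
T = rng.normal(size=(5, 2))
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 2))

def loss(W1, W2):
    # sum of squared errors over all patterns, as in the BPE error function
    return 0.5 * np.sum((sig(sig(X @ W1) @ W2) - T) ** 2)

# forward pass and analytic gradients in compact matrix form
H = sig(X @ W1)                       # hidden activations, all patterns at once
Y = sig(H @ W2)                       # outputs
dY = (Y - T) * Y * (1 - Y)            # output-layer delta
gW2 = H.T @ dY                        # gradient w.r.t. W2
gW1 = X.T @ ((dY @ W2.T) * H * (1 - H))   # delta propagated back to W1

# numerical check of one entry of the W1 gradient
eps = 1e-6
Wp = W1.copy(); Wp[0, 0] += eps
num = (loss(Wp, W2) - loss(W1, W2)) / eps
print(abs(num - gW1[0, 0]) < 1e-4)    # True: analytic matches numerical
```

A learning step would then be W1 -= eta * gW1 and W2 -= eta * gW2, with eta managed by a learning-factor schedule such as the two functions the article derives.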

  4. Learning and coding in biological neural networks

    Science.gov (United States)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebrafinch song. Simulation and

  5. Temporal-pattern learning in neural models

    CERN Document Server

    Genís, Carme Torras

    1985-01-01

While the ability of animals to learn rhythms is an unquestionable fact, the underlying neurophysiological mechanisms are still no more than conjectures. This monograph explores the requirements of such mechanisms, reviews those previously proposed and postulates a new one based on a direct electric coding of stimulation frequencies. Experimental support for the option taken is provided both at the single neuron and neural network levels. More specifically, the material presented divides naturally into four parts: a description of the experimental and theoretical framework where this work becomes meaningful (Chapter 2), a detailed specification of the pacemaker neuron model proposed together with its validation through simulation (Chapter 3), an analytic study of the behavior of this model when submitted to rhythmic stimulation (Chapter 4) and a description of the neural network model proposed for learning, together with an analysis of the simulation results obtained when varying several factors r...

  6. Learning in Neural Networks: VLSI Implementation Strategies

    Science.gov (United States)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  7. Psychedelics Promote Structural and Functional Neural Plasticity.

    Science.gov (United States)

    Ly, Calvin; Greb, Alexandra C; Cameron, Lindsay P; Wong, Jonathan M; Barragan, Eden V; Wilson, Paige C; Burbach, Kyle F; Soltanzadeh Zarandi, Sina; Sood, Alexander; Paddy, Michael R; Duim, Whitney C; Dennis, Megan Y; McAllister, A Kimberley; Ori-McKenney, Kassandra M; Gray, John A; Olson, David E

    2018-06-12

    Atrophy of neurons in the prefrontal cortex (PFC) plays a key role in the pathophysiology of depression and related disorders. The ability to promote both structural and functional plasticity in the PFC has been hypothesized to underlie the fast-acting antidepressant properties of the dissociative anesthetic ketamine. Here, we report that, like ketamine, serotonergic psychedelics are capable of robustly increasing neuritogenesis and/or spinogenesis both in vitro and in vivo. These changes in neuronal structure are accompanied by increased synapse number and function, as measured by fluorescence microscopy and electrophysiology. The structural changes induced by psychedelics appear to result from stimulation of the TrkB, mTOR, and 5-HT2A signaling pathways and could possibly explain the clinical effectiveness of these compounds. Our results underscore the therapeutic potential of psychedelics and, importantly, identify several lead scaffolds for medicinal chemistry efforts focused on developing plasticity-promoting compounds as safe, effective, and fast-acting treatments for depression and related disorders. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  8. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

Full Text Available Most modern analyses in high energy physics use signal-versus-background classification techniques from machine learning, and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal and background, and nowadays it can be widely and successfully implemented as part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  9. Neural correlates of face gender discrimination learning.

    Science.gov (United States)

    Su, Junzhu; Tan, Qingleng; Fang, Fang

    2013-04-01

    Using combined psychophysics and event-related potentials (ERPs), we investigated the effect of perceptual learning on face gender discrimination and probed the neural correlates of the learning effect. Human subjects were trained to perform a gender discrimination task with male or female faces. Before and after training, they were tested with the trained faces and other faces of the same and opposite genders, and ERPs responding to these faces were recorded. Psychophysical results showed that training significantly improved subjects' discrimination performance and that the improvement was specific to the trained gender, as well as to the trained identities. The training effect indicates that learning occurs at two levels: the category level (gender) and the exemplar level (identity). ERP analyses showed that the gender and identity learning was associated with an N170 latency reduction at the left occipital-temporal area and an N170 amplitude reduction at the right occipital-temporal area, respectively. These findings provide evidence for the facilitation model and the sharpening model of neuronal plasticity from visual experience, suggesting a faster processing speed and a sparser representation of faces induced by perceptual learning.

  10. Investigations of Escherichia coli promoter sequences with artificial neural networks: new signals discovered upstream of the transcriptional startpoint

    DEFF Research Database (Denmark)

    Pedersen, Anders Gorm; Engelbrecht, Jacob

    1995-01-01

    We present a novel method for using the learning ability of a neural network as a measure of information in local regions of input data. Using the method to analyze Escherichia coli promoters, we discover all previously described signals, and furthermore find new signals that are regularly spaced...

  11. IGF-II Promotes Stemness of Neural Restricted Precursors

    Science.gov (United States)

    Ziegler, Amber N.; Schneider, Joel S.; Qin, Mei; Tyler, William A.; Pintar, John E.; Fraidenraich, Diego; Wood, Teresa L.; Levison, Steven W.

    2016-01-01

    Insulin-like growth factor (IGF)-I and IGF-II regulate brain development and growth through the IGF type 1 receptor (IGF-1R). Less appreciated is that IGF-II, but not IGF-I, activates a splice variant of the insulin receptor (IR) known as IR-A. We hypothesized that IGF-II exerts distinct effects from IGF-I on neural stem/progenitor cells (NSPs) via its interaction with IR-A. Immunofluorescence revealed high IGF-II in the medial region of the subventricular zone (SVZ) comprising the neural stem cell niche, with IGF-II mRNA predominant in the adjacent choroid plexus. The IGF-1R and the IR isoforms were differentially expressed with IR-A predominant in the medial SVZ, whereas the IGF-1R was more abundant laterally. Similarly, IR-A was more highly expressed by NSPs, whereas the IGF-1R was more highly expressed by lineage restricted cells. In vitro, IGF-II was more potent in promoting NSP expansion than either IGF-I or standard growth medium. Limiting dilution and differentiation assays revealed that IGF-II was superior to IGF-I in promoting stemness. In vivo, NSPs propagated in IGF-II migrated to and took up residence in periventricular niches while IGF-I-treated NSPs predominantly colonized white matter. Knockdown of IR or IGF-1R using shRNAs supported the conclusion that the IGF-1R promotes progenitor proliferation, whereas the IR is important for self-renewal. Q-PCR revealed that IGF-II increased Oct4, Sox1, and FABP7 mRNA levels in NSPs. Our data support the conclusion that IGF-II promotes the self-renewal of neural stem/progenitors via the IR. By contrast, IGF-1R functions as a mitogenic receptor to increase precursor abundance. PMID:22593020

  12. Neural signals of vicarious extinction learning.

    Science.gov (United States)

    Golkar, Armita; Haaker, Jan; Selbing, Ida; Olsson, Andreas

    2016-10-01

    Social transmission of both threat and safety is ubiquitous, but little is known about the neural circuitry underlying vicarious safety learning. This is surprising given that these processes are critical to flexibly adapt to a changeable environment. To address how the expression of previously learned fears can be modified by the transmission of social information, two of three conditioned stimuli (CS+s) were paired with shock and the third was not. During extinction, we held constant the amount of direct, non-reinforced exposure to the CSs (i.e. direct extinction), and critically varied whether another individual, acting as a demonstrator, experienced safety (CS+ vic-safety) or aversive reinforcement (CS+ vic-reinf). During extinction, ventromedial prefrontal cortex (vmPFC) responses to the CS+ vic-reinf increased but decreased to the CS+ vic-safety. This pattern of vmPFC activity was reversed during a subsequent fear reinstatement test, suggesting a temporal shift in the involvement of the vmPFC. Moreover, only the CS+ vic-reinf association recovered. Our data suggest that vicarious extinction prevents the return of conditioned fear responses, and that this efficacy is reflected by diminished vmPFC involvement during extinction learning. The present findings may have important implications for understanding how social information influences the persistence of fear memories in individuals suffering from emotional disorders. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  13. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

    Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks.  Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms to represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. The concepts of meta-cognition equipped with a self-regulated learning have been known to be the best human learning strategy. In this monograph, the principles of meta-cognition have been introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  14. Learning and adaptation: neural and behavioural mechanisms behind behaviour change

    Science.gov (United States)

    Lowe, Robert; Sandamirskaya, Yulia

    2018-01-01

    This special issue presents perspectives on learning and adaptation as they apply to a number of cognitive phenomena including pupil dilation in humans and attention in robots, natural language acquisition and production in embodied agents (robots), human-robot game play and social interaction, neural-dynamic modelling of active perception and neural-dynamic modelling of infant development in the Piagetian A-not-B task. The aim of the special issue, through its contributions, is to highlight some of the critical neural-dynamic and behavioural aspects of learning as it grounds adaptive responses in robotic- and neural-dynamic systems.

  15. Management Strategies for Promoting Teacher Collective Learning

    Science.gov (United States)

    Cheng, Eric C. K.

    2011-01-01

    This paper aims to validate a theoretical model for developing teacher collective learning by using a quasi-experimental design, and explores the management strategies that would provide a school administrator practical steps to effectively promote collective learning in the school organization. Twenty aided secondary schools in Hong Kong were…

  16. Boltzmann learning of parameters in cellular neural networks

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    1992-01-01

    The use of Bayesian methods to design cellular neural networks for signal processing tasks and the Boltzmann machine learning rule for parameter estimation is discussed. The learning rule can be used for models with hidden units, or for completely unsupervised learning. The latter is exemplified...

  17. Introduction to spiking neural networks: Information processing, learning and applications.

    Science.gov (United States)

    Ponulak, Filip; Kasinski, Andrzej

    2011-01-01

    The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.
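As a concrete illustration of the timing-based coding the abstract contrasts with rate coding, a leaky integrate-and-fire neuron converts input strength into spike times. This is a generic textbook sketch, not code from the paper; all parameter values are illustrative.

```python
def lif_spike_times(input_current, t_max=0.2, dt=1e-4,
                    tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dV/dt = (-(V - v_rest) + I) / tau.
    Integrates with forward Euler and returns the times of emitted spikes."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_max:
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:           # threshold crossing: emit a spike
            spikes.append(round(t, 4))
            v = v_reset             # and reset the membrane potential
        t += dt
    return spikes

# A stronger input drives the membrane to threshold sooner and more often,
# so information can be read from individual spike times, not only rates.
weak, strong = lif_spike_times(1.2), lif_spike_times(3.0)
```

The first spike of the strongly driven neuron arrives earlier than that of the weakly driven one, which is the kind of latency code spiking network models can exploit.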

  18. Promoting learning transfer in preceptor preparation.

    Science.gov (United States)

    Finn, Frances L; Chesser-Smyth, Patricia

    2013-01-01

    An understanding of learning transfer principles is essential for professional development educators and managers to ensure that new skills and knowledge learned are applied to practice. This article presents a collaborative project involving the planning, design, and implementation of a preceptor training program for registered nurses. The theories and principles discussed in this article could be applied to a number of different settings and contexts in health care to promote learning transfer in professional development activities.

  19. Neural correlates of contextual cueing are modulated by explicit learning.

    Science.gov (United States)

    Westerberg, Carmen E; Miller, Brennan B; Reber, Paul J; Cohen, Neal J; Paller, Ken A

    2011-10-01

    Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer's knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be impaired following damage to the medial temporal lobe. Here we investigated neural correlates of contextual cueing and explicit scene memory in two participant groups. Only one group was explicitly instructed about scene repetition. Participants viewed a sequence of complex scenes that depicted a landscape with five abstract geometric objects. Superimposed on each object was a letter T or L rotated left or right by 90°. Participants responded according to the target letter (T) orientation. Responses were highly accurate for all scenes. Response speeds were faster for repeated versus novel scenes. The magnitude of this contextual cueing did not differ between the two groups. Also, in both groups repeated scenes yielded reduced hemodynamic activation compared with novel scenes in several regions involved in visual perception and attention, and reductions in some of these areas were correlated with response-time facilitation. In the group given instructions about scene repetition, recognition memory for scenes was superior and was accompanied by medial temporal and more anterior activation. Thus, strategic factors can promote explicit memorization of visual scene information, which appears to engage additional neural processing beyond what is required for implicit learning of object configurations and target locations in a scene. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Using machine learning, neural networks and statistics to predict bankruptcy

    NARCIS (Netherlands)

    Pompe, P.P.M.; Feelders, A.J.; Feelders, A.J.

    1997-01-01

    Recent literature strongly suggests that machine learning approaches to classification outperform "classical" statistical methods. We make a comparison between the performance of linear discriminant analysis, classification trees, and neural networks in predicting corporate bankruptcy. Linear

  1. Embedding responses in spontaneous neural activity shaped through sequential learning.

    Directory of Open Access Journals (Sweden)

    Tomoki Kurikawa

    Full Text Available Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we propose a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using a simple Hebbian-type learning rule, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through the learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. 
In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a similar manner to those designed in
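For contrast, the traditional "memories-as-attractors" picture that the paper argues against can be sketched in a few lines with the classic Hebbian outer-product rule. This is an illustrative Hopfield-style baseline, not the paper's sequential-learning model; sizes and seeds are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5  # neurons and stored patterns (well below capacity ~0.14 N)

# Store random +/-1 patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(P, N))
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=10):
    """Synchronous sign updates; the state settles into a stored attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt 5 of 100 units of a stored pattern; the dynamics repair it.
probe = patterns[0].copy()
flipped = rng.choice(N, size=5, replace=False)
probe[flipped] *= -1
overlap = (recall(probe) == patterns[0]).mean()
```

In the attractor picture the memory is the fixed point itself; in the paper's bifurcation picture, the input instead reshapes the flow so that the target becomes stable only while the input is applied.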

  2. Deep Learning Neural Networks in Cybersecurity - Managing Malware with AI

    OpenAIRE

    Rayle, Keith

    2017-01-01

    There’s a lot of talk about the benefits of deep learning (neural networks) and how it’s the new electricity that will power us into the future. Medical diagnosis, computer vision and speech recognition are all examples of use-cases where neural networks are being applied in our everyday business environment. This begs the question…what are the uses of neural-network applications for cyber security? How does the AI process work when applying neural networks to detect malicious software bombar...

  3. Do Convolutional Neural Networks Learn Class Hierarchy?

    Science.gov (United States)

    Bilal, Alsallakh; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2018-01-01

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  4. Neural Behavior Chain Learning of Mobile Robot Actions

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2012-01-01

    Full Text Available This paper presents a visual/motor behavior learning approach based on neural networks. We propose the Behavior Chain Model (BCM) as a way of learning behavior. The task on which our behavior-based system evolves is a mobile robot detecting a target and driving/acting towards it. First, the mapping relations between the image feature domain of the object and the robot action domain are derived. Second, a multilayer neural network is used for offline learning of the mapping relations. Through the neural network training process, this learning structure represents a connection between the visual perceptions and the motor sequence of actions needed to grip a target. Last, using behavior learning through an observed action chain, we can predict mobile robot behavior for a variety of similar tasks in similar environments. Prediction results suggest that the methodology is adequate and could serve as a basis for designing various kinds of mobile robot behavior assistance.

  5. The neural circuit basis of learning

    Science.gov (United States)

    Patrick, Kaifosh William John

    The astounding capacity for learning ranks among the nervous system's most impressive features. This thesis comprises studies employing varied approaches to improve understanding, at the level of neural circuits, of the brain's capacity for learning. The first part of the thesis contains investigations of hippocampal circuitry -- both theoretical work and experimental work in the mouse Mus musculus -- as a model system for declarative memory. To begin, Chapter 2 presents a theory of hippocampal memory storage and retrieval that reflects nonlinear dendritic processing within hippocampal pyramidal neurons. As a prelude to the experimental work that comprises the remainder of this part, Chapter 3 describes an open source software platform that we have developed for analysis of data acquired with in vivo Ca2+ imaging, the main experimental technique used throughout the remainder of this part of the thesis. As a first application of this technique, Chapter 4 characterizes the content of signaling at synapses between GABAergic neurons of the medial septum and interneurons in stratum oriens of hippocampal area CA1. Chapter 5 then combines these techniques with optogenetic, pharmacogenetic, and pharmacological manipulations to uncover inhibitory circuit mechanisms underlying fear learning. The second part of this thesis focuses on the cerebellum-like electrosensory lobe in the weakly electric mormyrid fish Gnathonemus petersii, as a model system for non-declarative memory. In Chapter 6, we study how short-duration EOD motor commands are recoded into a complex temporal basis in the granule cell layer, which can be used to cancel Purkinje-like cell firing to the longer duration and temporally varying EOD-driven sensory responses. In Chapter 7, we consider not only the temporal aspects of the granule cell code, but also the encoding of body position provided from proprioceptive and efference copy sources. 
Together these studies clarify how the cerebellum-like circuitry of the

  6. Learning in neural networks based on a generalized fluctuation theorem

    Science.gov (United States)

    Hayakawa, Takashi; Aoyagi, Toshio

    2015-11-01

    Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universal laws describing the fluctuation they exhibit and the information they possess have recently been discovered. These laws are termed fluctuation theorems. In the present study, we formulate a theory of learning in neural networks bidirectionally interacting with environments based on the principle of information maximization. Our formulation begins with the introduction of a generalized fluctuation theorem, employing an interpretation appropriate for the present application, which differs from the original thermodynamic interpretation. We analytically and numerically demonstrate that the learning mechanism presented in our theory allows neural networks to efficiently explore their environments and optimally encode information about them.

  7. Opponent appetitive-aversive neural processes underlie predictive learning of pain relief.

    Science.gov (United States)

    Seymour, Ben; O'Doherty, John P; Koltzenburg, Martin; Wiech, Katja; Frackowiak, Richard; Friston, Karl; Dolan, Raymond

    2005-09-01

    Termination of a painful or unpleasant event can be rewarding. However, whether the brain treats relief in a similar way as it treats natural reward is unclear, and the neural processes that underlie its representation as a motivational goal remain poorly understood. We used fMRI (functional magnetic resonance imaging) to investigate how humans learn to generate expectations of pain relief. Using a pavlovian conditioning procedure, we show that subjects experiencing prolonged experimentally induced pain can be conditioned to predict pain relief. This proceeds in a manner consistent with contemporary reward-learning theory (average reward/loss reinforcement learning), reflected by neural activity in the amygdala and midbrain. Furthermore, these reward-like learning signals are mirrored by opposite aversion-like signals in lateral orbitofrontal cortex and anterior cingulate cortex. This dual coding has parallels to 'opponent process' theories in psychology and promotes a formal account of prediction and expectation during pain.

  8. Neural Correlates of High Performance in Foreign Language Vocabulary Learning

    Science.gov (United States)

    Macedonia, Manuela; Muller, Karsten; Friederici, Angela D.

    2010-01-01

    Learning vocabulary in a foreign language is a laborious task which people perform with varying levels of success. Here, we investigated the neural underpinning of high performance on this task. In a within-subjects paradigm, participants learned 92 vocabulary items under two multimodal conditions: one condition paired novel words with iconic…

  9. Vicarious neural processing of outcomes during observational learning.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on the ability to map the actions of others onto our own, process outcomes, and combine these sources of information. Here, we combined newly developed experimental tasks and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms that govern such observational learning. Results show that the neural systems involved in individual trial-and-error learning and in action observation and execution both participate in observational learning. In addition, we identified brain areas that specifically activate for others' incorrect outcomes during learning in the posterior medial frontal cortex (pMFC), the anterior insula and the posterior superior temporal sulcus (pSTS).

  10. Vicarious neural processing of outcomes during observational learning.

    Science.gov (United States)

    Monfardini, Elisabetta; Gazzola, Valeria; Boussaoud, Driss; Brovelli, Andrea; Keysers, Christian; Wicker, Bruno

    2013-01-01

    Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on the ability to map the actions of others onto our own, process outcomes, and combine these sources of information. Here, we combined newly developed experimental tasks and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms that govern such observational learning. Results show that the neural systems involved in individual trial-and-error learning and in action observation and execution both participate in observational learning. In addition, we identified brain areas that specifically activate for others' incorrect outcomes during learning in the posterior medial frontal cortex (pMFC), the anterior insula and the posterior superior temporal sulcus (pSTS).

  11. Learning-induced neural plasticity of speech processing before birth.

    Science.gov (United States)

    Partanen, Eino; Kujala, Teija; Näätänen, Risto; Liitola, Auli; Sambeth, Anke; Huotilainen, Minna

    2013-09-10

    Learning, the foundation of adaptive and intelligent behavior, is based on plastic changes in neural assemblies, reflected by the modulation of electric brain responses. In infancy, auditory learning implicates the formation and strengthening of neural long-term memory traces, improving discrimination skills, in particular those forming the prerequisites for speech perception and understanding. Although previous behavioral observations show that newborns react differentially to unfamiliar sounds vs. familiar sound material that they were exposed to as fetuses, the neural basis of fetal learning has not thus far been investigated. Here we demonstrate direct neural correlates of human fetal learning of speech-like auditory stimuli. We presented variants of words to fetuses; unlike infants with no exposure to these stimuli, the exposed fetuses showed enhanced brain activity (mismatch responses) in response to pitch changes for the trained variants after birth. Furthermore, a significant correlation existed between the amount of prenatal exposure and brain activity, with greater activity being associated with a higher amount of prenatal speech exposure. Moreover, the learning effect was generalized to other types of similar speech sounds not included in the training material. Consequently, our results indicate neural commitment specifically tuned to the speech features heard before birth and their memory representations.

  12. Humor: a pedagogical tool to promote learning.

    Science.gov (United States)

    Chabeli, M

    2008-09-01

    It has become critical that learners are exposed to varied methods of teaching and assessment that will promote their critical thinking. Humor creates a relaxed atmosphere in which learning can be enhanced and appreciated. When learners are relaxed, thinking becomes eminent; an authoritative and tense environment hinders thinking. This paper seeks to explore the perceptions of nurse teacher learners regarding the use of humor as a pedagogical tool to promote learning. A qualitative, exploratory, descriptive and contextual research design was employed (Burns & Grove, 2001:61; Mouton, 1996:103). 130 naive sketches were collected from nurse teacher learners who volunteered to take part in the study (Giorgi in Omery, 1983:52). Follow-up interviews were conducted to verify the findings. A qualitative, open-coding method of content analysis was done (Tesch in Creswell, 1994:155). Measures to ensure the trustworthiness of the study were taken in accordance with the protocol of Lincoln and Guba (1985:290-326). The findings of the study will assist nurse educators to create a positive affective, psychological and social learning environment through the use of humor in a positive manner. Nurse educators will appreciate the fact that integrating humor into the learning content will promote learners' critical thinking and emotional intelligence. Negative humor has a negative impact on learning. Learner nurses who become critical thinkers will be able to be analytical and solve problems amicably in practice.

  13. Do neural nets learn statistical laws behind natural language?

    Directory of Open Access Journals (Sweden)

    Shuntaro Takahashi

    Full Text Available The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.
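The two statistical laws the paper tests can be checked on any token stream with a few lines of Python. This is an illustrative sketch, not the authors' evaluation code; function names are our own.

```python
from collections import Counter

def zipf_check(tokens, top=50):
    """Rank-frequency table: under Zipf's law, frequency * rank is roughly
    constant for the most frequent words, i.e. f(r) ~ C / r."""
    counts = Counter(tokens).most_common(top)
    return [(rank, word, freq, rank * freq)
            for rank, (word, freq) in enumerate(counts, start=1)]

def heaps_curve(tokens):
    """Vocabulary growth: Heaps' law says the distinct-word count grows as
    a sublinear power of the number of tokens read, V(n) ~ K * n**beta."""
    seen, curve = set(), []
    for i, tok in enumerate(tokens, start=1):
        seen.add(tok)
        curve.append((i, len(seen)))
    return curve
```

Applying both functions to text sampled from a trained language model and to the training corpus, and comparing the resulting curves, is the kind of diagnostic the paper performs for its LSTM model.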

  14. Neural Monkey: An Open-source Tool for Sequence Learning

    Directory of Open Access Journals (Sweden)

    Helcl Jindřich

    2017-04-01

    Full Text Available In this paper, we announce the development of Neural Monkey – an open-source neural machine translation (NMT and general sequence-to-sequence learning system built over the TensorFlow machine learning library. The system provides a high-level API tailored for fast prototyping of complex architectures with multiple sequence encoders and decoders. Models’ overall architecture is specified in easy-to-read configuration files. The long-term goal of the Neural Monkey project is to create and maintain a growing collection of implementations of recently proposed components or methods, and therefore it is designed to be easily extensible. Trained models can be deployed either for batch data processing or as a web service. In the presented paper, we describe the design of the system and introduce the reader to running experiments using Neural Monkey.

  15. Promoting Learning: What Universities Don't Do

    Science.gov (United States)

    Martin, Brian

    2018-01-01

    Universities seek to promote student learning, but assessment and credentials can undermine students' intrinsic motivation to learn. Findings from research on how people learn, mindsets, expert performance and good health are seldom incorporated into the way universities organise learning experiences.

  16. Fastest learning in small-world neural networks

    International Nuclear Information System (INIS)

    Simard, D.; Nadeau, L.; Kroeger, H.

    2005-01-01

    We investigate supervised learning in neural networks. We consider a multi-layered feed-forward network with back propagation. We find that the network of small-world connectivity reduces the learning error and learning time when compared to the networks of regular or random connectivity. Our study has potential applications in the domain of data-mining, image processing, speech recognition, and pattern recognition
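The small-world topology in question is conventionally generated with the Watts-Strogatz procedure, sketched below in simplified, pure-Python form. This is illustrative rather than the authors' construction, and it glosses over a few duplicate-edge corner cases of the full algorithm.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to its k nearest neighbours on
    one side, with each edge rewired to a random target with probability p.
    p = 0 gives a regular lattice, p = 1 an essentially random graph; small
    p yields the small-world regime: high clustering, short path lengths."""
    rng = random.Random(seed)
    edges = {(i, (i + j) % n) for i in range(n) for j in range(1, k + 1)}
    rewired = set()
    for (u, v) in edges:
        if rng.random() < p:
            w = rng.randrange(n)  # pick a fresh target, avoiding repeats
            while w == u or (u, w) in rewired or (w, u) in rewired:
                w = rng.randrange(n)
            rewired.add((u, w))
        else:
            rewired.add((u, v))
    return rewired

g = watts_strogatz(100, 2, 0.1)
```

In the study's setting, an adjacency set like `g` would constrain which hidden-unit connections of the feed-forward network exist before backpropagation training begins.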

  17. Development switch in neural circuitry underlying odor-malaise learning.

    Science.gov (United States)

    Shionoya, Kiseko; Moriceau, Stephanie; Lunday, Lauren; Miner, Cathrine; Roth, Tania L; Sullivan, Regina M

    2006-01-01

Fetal and infant rats can learn to avoid odors paired with illness before development of the brain areas supporting this learning in adults, suggesting an alternate learning circuit. Here we begin to document the transition from the infant to the adult neural circuit underlying odor-malaise avoidance learning using LiCl (0.3 M; 1% of body weight, ip) and a 30-min peppermint-odor exposure. Conditioning groups included: Paired odor-LiCl, Paired odor-LiCl-Nursing, LiCl, and odor-saline. Results showed that Paired odor-LiCl conditioning induced a learned odor aversion in postnatal day (PN) 7, 12, and 23 pups. Paired odor-LiCl-Nursing induced a learned odor preference in PN7 and PN12 pups but blocked learning in PN23 pups. 14C 2-deoxyglucose (2-DG) autoradiography indicated enhanced olfactory bulb activity in PN7 and PN12 pups with both odor preference and avoidance learning. The odor aversion in weanling-aged (PN23) pups was accompanied by enhanced amygdala activity in Paired odor-LiCl pups, but not if they were nursing. Thus, the neural circuit supporting malaise-induced aversions changes over development, indicating that similar infant- and adult-learned behaviors may have distinct neural circuits.

  18. Smartphones Promote Autonomous Learning in ESL Classrooms

    Directory of Open Access Journals (Sweden)

    Viji Ramamuruthy

    2015-10-01

Full Text Available The rapid development of high technology has produced new gadgets for all walks of life, regardless of age. In this rapidly advancing technological era, many individuals possess hi-tech gadgets such as laptops, tablets, iPads, Android phones, and smartphones. Adult learners in higher learning institutions are especially fond of using smartphones. Students become passive in the classroom as they are glued to their smartphones. This situation raises the question of whether learning really takes place while students are so engaged with their smartphones in the ESL classroom. In this context, the following questions are framed to investigate the issue: What types of learning skills are gained by using smartphones in ESL classrooms? Does smartphone use promote the autonomous learning process? To what extent do learners rely on the lecturers in addition to using smartphones? What learning satisfactions are gained by ESL learners using smartphones? A total of 70 smartphone users aged 18 to 26 years participated in this quantitative study. Questionnaires eliciting demographic details of the respondents, learning skills, learning satisfaction, students' perceptions of the teacher's role in the ESL classroom, and autonomous learning were distributed to all the randomly chosen participants. The data were then analyzed using SPSS version 16. The findings revealed that smartphone use boosted learners' critical thinking, creative thinking, communication, and collaboration skills. In fact, learners gained great satisfaction in the learning process through smartphones. Although learners have moved toward autonomous learning, they still rely on their teachers to achieve their learning goals.

  19. Exploring the spatio-temporal neural basis of face learning

    Science.gov (United States)

    Yang, Ying; Xu, Yang; Jew, Carol A.; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.

    2017-01-01

Humans are experts at face individuation. Although previous work has identified a network of face-sensitive regions and some of the temporal signatures of face processing, as yet, we do not have a clear understanding of how such face-sensitive regions support learning at different time points. To study the joint spatio-temporal neural basis of face learning, we trained subjects to categorize two groups of novel faces and recorded their neural responses using magnetoencephalography (MEG) throughout learning. A regression analysis of neural responses in face-sensitive regions against behavioral learning curves revealed significant correlations with learning in the majority of regions in the face network, mostly between 150 and 250 ms, but also after 300 ms. However, the effect was smaller in nonventral regions (the superior temporal areas and prefrontal cortex) than in the ventral regions (the inferior occipital gyri (IOG), midfusiform gyri (mFUS), and anterior temporal lobes). A multivariate discriminant analysis also revealed that IOG and mFUS, which showed strong correlation effects with learning, exhibited significant discriminability between the two face categories both between 150 and 250 ms and after 300 ms. In contrast, the nonventral face-sensitive regions, where correlation effects with learning were smaller, did exhibit some significant discriminability, but mainly after 300 ms. In sum, our findings indicate that early and recurring temporal components arising from ventral face-sensitive regions are critically involved in learning new faces. PMID:28570739

  20. Deep learning classification in asteroseismology using an improved neural network

    DEFF Research Database (Denmark)

    Hon, Marc; Stello, Dennis; Yu, Jie

    2018-01-01

Deep learning in the form of 1D convolutional neural networks has previously been shown to be capable of efficiently classifying the evolutionary state of oscillating red giants into red giant branch stars and helium-core burning stars by recognizing visual features in their asteroseismic...... frequency spectra. We elaborate further on the deep learning method by developing an improved convolutional neural network classifier. To make our method useful for current and future space missions such as K2, TESS, and PLATO, we train classifiers that are able to classify the evolutionary states of lower...

  1. Evolving Neural Turing Machines for Reward-based Learning

    DEFF Research Database (Denmark)

    Greve, Rasmus Boll; Jacobsen, Emil Juul; Risi, Sebastian

    2016-01-01

    An unsolved problem in neuroevolution (NE) is to evolve artificial neural networks (ANN) that can store and use information to change their behavior online. While plastic neural networks have shown promise in this context, they have difficulties retaining information over longer periods of time...... version of the double T-Maze, a complex reinforcement-like learning problem. In the T-Maze learning task the agent uses the memory bank to display adaptive behavior that normally requires a plastic ANN, thereby suggesting a complementary and effective mechanism for adaptive behavior in NE....

  2. Deep learning with convolutional neural network in radiology.

    Science.gov (United States)

    Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu

    2018-04-01

Deep learning with a convolutional neural network (CNN) has recently been gaining attention for its high performance in image recognition. Images themselves can be utilized in the learning process with this technique, and feature extraction in advance of the learning process is not required: important features can be learned automatically. Thanks to developments in hardware and software, in addition to deep learning techniques themselves, applications of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, are beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual workflow (collecting data, implementing CNNs, and the training and testing phases). Pitfalls of this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, results of recent clinical studies, and future directions for the clinical application of deep learning techniques.
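To make "features can be learned automatically" concrete: the learned parameters of a CNN are small convolution kernels slid across the image, and training adjusts the kernel values themselves. A minimal sketch in plain Python (valid-mode, and technically cross-correlation, as in most deep-learning libraries):

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation): slide the kernel
    over the image and take a weighted sum at each position. In a CNN,
    the kernel entries are the parameters learned from data."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w)] for i in range(h)]

edge = [[1, -1]]                        # hand-crafted horizontal edge detector
print(conv2d([[0, 0, 5, 5]], edge))     # -> [[0, -5, 0]]
```

A hand-crafted kernel like `edge` is exactly the kind of "feature extraction in advance" the abstract says deep learning makes unnecessary, since training discovers useful kernels on its own.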

  3. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Zenke, Friedemann; Ganguli, Surya

    2018-04-13

The vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
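The surrogate-gradient idea behind SuperSpike can be sketched as follows: the ill-defined derivative of the hard spike threshold is replaced by a smooth surrogate (the paper uses a fast sigmoid), which then enters a three-factor update together with an output error and a presynaptic trace. This is a bare-bones sketch with hypothetical function names; the actual rule additionally low-pass filters these factors with synaptic kernels:

```python
def surrogate_grad(v, beta=10.0):
    """Fast-sigmoid surrogate, h(v) = 1 / (1 + beta*|v|)**2, standing in
    for the undefined derivative of the hard spiking threshold at
    membrane potential v (relative to threshold)."""
    return 1.0 / (1.0 + beta * abs(v)) ** 2

def superspike_update(error, v_post, pre_trace, lr=1e-3):
    """Three-factor weight update sketch: output error (factor 1)
    x surrogate postsynaptic sensitivity (factor 2)
    x presynaptic eligibility trace (factor 3)."""
    return lr * error * surrogate_grad(v_post) * pre_trace
```

The surrogate is largest when the neuron is near threshold, so credit flows mainly to synapses whose postsynaptic neuron was close to spiking, which is what makes hidden-layer training possible in deterministic spiking networks.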

  4. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  5. Learning language with the wrong neural scaffolding: The cost of neural commitment to sounds.

    Directory of Open Access Journals (Sweden)

    Amy Sue Finn

    2013-11-01

Full Text Available Does tuning to one’s native language explain the sensitive period for language learning? We explore the idea that tuning to (or becoming more selective for) the properties of one’s native language could result in being less open (or plastic) for tuning to the properties of a new language. To explore how this might lead to the sensitive period for grammar learning, we ask if tuning to an earlier-learned aspect of language (sound structure) has an impact on the neural representation of a later-learned aspect (grammar). English-speaking adults learned one of two miniature artificial languages over 4 days in the lab. Compared to English, both languages had novel grammar, but only one was comprised of novel sounds. After learning a language, participants were scanned while judging the grammaticality of sentences. Judgments were performed for the newly learned language and English. Learners of the similar-sounds language recruited regions that overlapped more with English. Learners of the distinct-sounds language, however, recruited the Superior Temporal Gyrus (STG) to a greater extent, which was coactive with the Inferior Frontal Gyrus (IFG). Across learners, recruitment of IFG (but not STG) predicted both learning success in tests conducted prior to the scan and grammatical judgment ability during the scan. Data suggest that adults’ difficulty learning language, especially grammar, could be due, at least in part, to the neural commitments they have made to the lower level linguistic components of their native language.

  6. Learning language with the wrong neural scaffolding: the cost of neural commitment to sounds

    Science.gov (United States)

    Finn, Amy S.; Hudson Kam, Carla L.; Ettlinger, Marc; Vytlacil, Jason; D'Esposito, Mark

    2013-01-01

    Does tuning to one's native language explain the “sensitive period” for language learning? We explore the idea that tuning to (or becoming more selective for) the properties of one's native-language could result in being less open (or plastic) for tuning to the properties of a new language. To explore how this might lead to the sensitive period for grammar learning, we ask if tuning to an earlier-learned aspect of language (sound structure) has an impact on the neural representation of a later-learned aspect (grammar). English-speaking adults learned one of two miniature artificial languages (MALs) over 4 days in the lab. Compared to English, both languages had novel grammar, but only one was comprised of novel sounds. After learning a language, participants were scanned while judging the grammaticality of sentences. Judgments were performed for the newly learned language and English. Learners of the similar-sounds language recruited regions that overlapped more with English. Learners of the distinct-sounds language, however, recruited the Superior Temporal Gyrus (STG) to a greater extent, which was coactive with the Inferior Frontal Gyrus (IFG). Across learners, recruitment of IFG (but not STG) predicted both learning success in tests conducted prior to the scan and grammatical judgment ability during the scan. Data suggest that adults' difficulty learning language, especially grammar, could be due, at least in part, to the neural commitments they have made to the lower level linguistic components of their native language. PMID:24273497

  7. Neural-Fitted TD-Leaf Learning for Playing Othello With Structured Neural Networks

    NARCIS (Netherlands)

    van den Dries, Sjoerd; Wiering, Marco A.

    This paper describes a methodology for quickly learning to play games at a strong level. The methodology consists of a novel combination of three techniques, and a variety of experiments on the game of Othello demonstrates their usefulness. First, structures or topologies in neural network

  8. PROMOTING AUTONOMOUS LEARNING IN READING CLASS

    Directory of Open Access Journals (Sweden)

    Agus Sholeh

    2015-11-01

Full Text Available To achieve good acquisition of and awareness in reading, learners need a long and continuous process; therefore, they are required to develop autonomy in learning to read. This study aims to promote learner autonomy in reading class by combining learner-centered reading teaching and extensive reading teaching. Learner-centered reading teaching was carried out through group discussion, presentation, and language-awareness activities. Meanwhile, extensive reading teaching was done to review the learners' presentation materials and reinforce their acquisition. These two approaches were applied due to differences in learners' characteristics and needs. The result showed some success in the practice of autonomy, indicated by changes in learners' attitudes. However, many learners showed that they focused more on obtaining scores than on developing their language acquisition. By implementing the approach, the teacher can help learners become aware of their ability to learn independently and equip them with the skills needed for lifelong learning.

  9. Learning and Generalisation in Neural Networks with Local Preprocessing

    OpenAIRE

    Kutsia, Merab

    2007-01-01

We study learning and generalisation ability of a specific two-layer feed-forward neural network and compare its properties to those of a simple perceptron. The input patterns are mapped nonlinearly onto a hidden layer, much larger than the input layer, and this mapping is either fixed or may result from an unsupervised learning process. Such preprocessing of initially uncorrelated random patterns results in the correlated patterns in the hidden layer. The hidden-to-output mapping of the net...

  10. Learning and forgetting on asymmetric, diluted neural networks

    International Nuclear Information System (INIS)

    Derrida, B.; Nadal, J.P.

    1987-01-01

    It is possible to construct diluted asymmetric models of neural networks for which the dynamics can be calculated exactly. The authors test several learning schemes, in particular, models for which the values of the synapses remain bounded and depend on the history. Our analytical results on the relative efficiencies of the various learning schemes are qualitatively similar to the corresponding ones obtained numerically on fully connected symmetric networks
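The "bounded synapses" idea in these learning-and-forgetting schemes can be sketched in a few lines: clip each weight after a Hebbian step, so that saturated synapses can no longer accumulate information and newer patterns gradually overwrite older ones. The numbers and the hard-clipping scheme here are illustrative, not the paper's exact model:

```python
def update_bounded(w, pre, post, eta=0.5, bound=1.0):
    """Hebbian-style update with hard bounds on the synapse. Once a weight
    saturates at +/-bound, further potentiation in that direction is lost,
    so the network forgets old patterns as it learns new ones."""
    w = w + eta * pre * post
    return max(-bound, min(bound, w))
```

Iterating this over a stream of patterns yields palimpsest-like behavior: recent patterns are retrievable while sufficiently old ones are erased, qualitatively matching the history-dependent schemes the abstract describes.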

  11. Genetic learning in rule-based and neural systems

    Science.gov (United States)

    Smith, Robert E.

    1993-01-01

The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GA's) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GA's bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GA's have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GA's. GA's are then related to a class of rule-based machine learning systems called learning classifier systems (LCS's). An LCS implements a low-level production-system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GA's in ecological search problems that arise in neural and fuzzy systems.

  12. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    Science.gov (United States)

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of human brain. Single-layer feature extractors are the bricks to build deep networks. Sparse feature learning models are popular models that can learn useful representations. But most of those models need a user-defined constant to control the sparsity of representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by optimizing two objectives, reconstruction error and the sparsity of hidden units simultaneously to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.
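The two competing objectives, and the Pareto-dominance test a multiobjective evolutionary algorithm would use to compare candidate networks, can be sketched as follows. Function names are mine, and the paper's exact sparsity measure may differ from the mean absolute activation used here:

```python
def objectives(x, x_hat, hidden):
    """The two objectives optimised jointly for a sparse autoencoder:
    reconstruction error (mean squared error between input x and
    reconstruction x_hat) and sparsity (mean |activation| of the
    hidden units). Both are minimised."""
    recon = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    sparsity = sum(abs(h) for h in hidden) / len(hidden)
    return recon, sparsity

def dominates(f, g):
    """Pareto dominance: f is no worse than g in every objective and
    strictly better in at least one."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))
```

Evolving a population under this dominance relation yields a front of trade-offs between fidelity and sparsity, which is how the model avoids the user-defined sparsity constant the abstract criticises.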

  13. Thermodynamic efficiency of learning a rule in neural networks

    Science.gov (United States)

    Goldt, Sebastian; Seifert, Udo

    2017-11-01

Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is, to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.
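The three learning algorithms named in the abstract can be sketched as variants of the same weight update: Hebbian learning always applies it, the Perceptron rule applies it only on errors, and AdaTron scales it by the size of the local field. This is a plain-Python sketch with simplified learning-rate conventions, not the paper's thermodynamic setup:

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def hebbian(w, x, label, lr=0.1):
    """Hebbian rule: always move w towards label*x, correct or not."""
    return [wi + lr * label * xi for wi, xi in zip(w, x)]

def perceptron(w, x, label, lr=0.1):
    """Perceptron rule: update only when the example is misclassified."""
    if label * dot(w, x) > 0:
        return list(w)
    return [wi + lr * label * xi for wi, xi in zip(w, x)]

def adatron(w, x, label, lr=0.1):
    """AdaTron rule: on errors, update with strength proportional to the
    magnitude of the local field (the size of the violation)."""
    h = dot(w, x)
    if label * h > 0:
        return list(w)
    return [wi + lr * abs(h) * label * xi for wi, xi in zip(w, x)]
```

Because the three rules dissipate different amounts of "work" per example for the same information gained, they end up with different thermodynamic efficiencies in the paper's analysis.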

  14. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2016-09-14

The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including a statistical method (autoregressive integrated moving average), feedforward neural networks (time delay neural network and online sequential extreme learning machine), and recurrent neural networks (long short-term memory and echo-state networks), on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.
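The "multiple predictions until there is sufficient disambiguating evidence" behavior can be illustrated with a toy online sequence memory that remembers which symbols have followed each short context. This is a drastic simplification of HTM (no sparse distributed codes, fixed-order contexts), with names of my own choosing:

```python
from collections import defaultdict

class SequencePredictor:
    """Toy online sequence memory: learns which symbols follow each
    length-k context and keeps *all* of them as simultaneous predictions,
    mimicking how HTM maintains multiple predictions for branching
    sequences until the stream disambiguates."""

    def __init__(self, k=2):
        self.k = k
        self.next = defaultdict(set)   # context tuple -> possible successors
        self.context = []

    def observe(self, symbol):
        """Online learning step: record symbol as a successor of the
        current context, then slide the context window forward."""
        if len(self.context) == self.k:
            self.next[tuple(self.context)].add(symbol)
        self.context = (self.context + [symbol])[-self.k:]

    def predict(self):
        """All symbols ever seen after the current context."""
        return self.next.get(tuple(self.context), set())
```

After streaming "ABCABD", the context "AB" has been followed by both "C" and "D", so the predictor reports both as live predictions, exactly the branching case the abstract highlights.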

  15. Finite time convergent learning law for continuous neural networks.

    Science.gov (United States)

    Chairez, Isaac

    2014-02-01

    This paper addresses the design of a discontinuous finite time convergent learning law for neural networks with continuous dynamics. The neural network was used here to obtain a non-parametric model for uncertain systems described by a set of ordinary differential equations. The source of uncertainties was the presence of some external perturbations and poor knowledge of the nonlinear function describing the system dynamics. A new adaptive algorithm based on discontinuous algorithms was used to adjust the weights of the neural network. The adaptive algorithm was derived by means of a non-standard Lyapunov function that is lower semi-continuous and differentiable in almost the whole space. A compensator term was included in the identifier to reject some specific perturbations using a nonlinear robust algorithm. Two numerical examples demonstrated the improvements achieved by the learning algorithm introduced in this paper compared to classical schemes with continuous learning methods. The first one dealt with a benchmark problem used in the paper to explain how the discontinuous learning law works. The second one used the methane production model to show the benefits in engineering applications of the learning law proposed in this paper. Copyright © 2013 Elsevier Ltd. All rights reserved.
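The role of the discontinuity can be sketched with a sliding-mode-style adaptation law: because the update is driven by sign(error) rather than the error itself, its magnitude does not shrink as the error approaches zero, which is the mechanism behind finite-time (rather than merely asymptotic) convergence. The function below is an illustrative Euler step under that idea, not the paper's exact law:

```python
def sign(x):
    """Sign function driving the discontinuous update."""
    return (x > 0) - (x < 0)

def identifier_step(w, phi, error, k=2.0, dt=0.01):
    """One Euler step of a sign-based (discontinuous) adaptation law,
    w_dot = -k * sign(error) * phi, where phi is the regressor vector of
    the neural identifier. The step magnitude stays constant however
    small the error gets, unlike continuous gradient-style laws."""
    return [wi - dt * k * sign(error) * p for wi, p in zip(w, phi)]
```

With a continuous law the correction would scale with the error and only vanish in the limit; here the constant-magnitude correction drives the error to zero in finite time, at the cost of chattering that the paper's Lyapunov analysis has to handle.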

  16. Learning-induced pattern classification in a chaotic neural network

    International Nuclear Information System (INIS)

    Li, Yang; Zhu, Ping; Xie, Xiaoping; He, Guoguang; Aihara, Kazuyuki

    2012-01-01

In this Letter, we propose a Hebbian learning rule with passive forgetting (HLRPF) for use in a chaotic neural network (CNN). We then define indices based on the Euclidean distance to investigate the evolution of the weights in a simplified way. Numerical simulations demonstrate that, under suitable external stimulation, the CNN with the proposed HLRPF acts as a fuzzy-like pattern classifier that performs much better than an ordinary CNN. The results imply a relationship between learning and recognition. -- Highlights: ► Proposing a Hebbian learning rule with passive forgetting (HLRPF). ► Defining indices to investigate the evolution of the weights simply. ► The chaotic neural network with HLRPF acts as a fuzzy-like pattern classifier. ► The pattern-classification ability of the network is much improved.
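The general shape of a Hebbian rule with passive forgetting is a correlation term plus a weight-decay term. The sketch below shows that shape only; the constants and the chaotic-network context are the paper's, and the exact coefficients here are illustrative:

```python
def hlrpf_step(w, pre, post, eta=0.1, decay=0.05):
    """One step of a Hebbian learning rule with passive forgetting:
    potentiation proportional to correlated pre/post activity, plus a
    decay term that lets unused weights relax back towards zero."""
    return w + eta * pre * post - decay * w
```

With no activity the weight decays geometrically toward zero (the "passive forgetting"), while repeated co-activation sustains it; the balance of eta and decay sets the equilibrium weight for a persistently active pair.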

  17. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    Science.gov (United States)

    Liles, Charles A.

    2014-01-01

Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to determine, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensionality reduction for visualizing and understanding complex datasets.
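The "reduce to two dimensions and plot" step can be sketched with a dependency-free PCA via power iteration. Function names are mine and a real project would use an off-the-shelf routine, but this shows the mechanics of projecting a dataset onto its top two principal components:

```python
import random

def pca_2d(data, iters=200, seed=0):
    """Project the rows of `data` onto their top two principal components
    (power iteration with deflation) -- enough for a 2-D scatter plot."""
    rng = random.Random(seed)
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]

    def cov_times(v):
        # Compute (X^T X / n) v without forming the covariance matrix.
        s = [sum(X[i][j] * v[j] for j in range(d)) for i in range(n)]
        return [sum(X[i][k] * s[i] for i in range(n)) / n for k in range(d)]

    def top_component(deflate=None):
        v = [rng.gauss(0, 1) for _ in range(d)]
        for _ in range(iters):
            if deflate is not None:            # project out the first component
                dp = sum(a * b for a, b in zip(v, deflate))
                v = [a - dp * b for a, b in zip(v, deflate)]
            v = cov_times(v)
            norm = sum(a * a for a in v) ** 0.5
            v = [a / norm for a in v]
        return v

    pc1 = top_component()
    pc2 = top_component(deflate=pc1)
    return [(sum(a * b for a, b in zip(row, pc1)),
             sum(a * b for a, b in zip(row, pc2)))
            for row in X]
```

Plotting the returned (pc1, pc2) pairs, colored by the outcome variable, is the visualization step described above: input variables that load heavily on the dominant components are the candidates for a reduced training set.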

  18. Biologically-inspired Learning in Pulsed Neural Networks

    DEFF Research Database (Denmark)

    Lehmann, Torsten; Woodburn, Robin

    1999-01-01

Self-learning chips to implement many popular ANN (artificial neural network) algorithms are very difficult to design. We explain why this is so and say what lessons previous work teaches us in the design of self-learning systems. We offer a contribution to the 'biologically-inspired' approach......, explaining what we mean by this term and providing an example of a robust, self-learning design that can solve simple classical-conditioning tasks. We give details of the design of individual circuits to perform component functions, which can then be combined into a network to solve the task. We argue...

  19. Epigenetic learning in non-neural organisms

    Indian Academy of Sciences (India)

    Prakash

    2008-09-19

    Sep 19, 2008 ... neurobiology and psychology directly implies latency and learning. However ... The notion of cell memory is important in studies of cell biology and .... Paramecium following induction of new phenotypes by various physical ...

  20. Neural Correlates of Morphology Acquisition through a Statistical Learning Paradigm.

    Science.gov (United States)

    Sandoval, Michelle; Patterson, Dianne; Dai, Huanping; Vance, Christopher J; Plante, Elena

    2017-01-01

    The neural basis of statistical learning as it occurs over time was explored with stimuli drawn from a natural language (Russian nouns). The input reflected the "rules" for marking categories of gendered nouns, without making participants explicitly aware of the nature of what they were to learn. Participants were scanned while listening to a series of gender-marked nouns during four sequential scans, and were tested for their learning immediately after each scan. Although participants were not told the nature of the learning task, they exhibited learning after their initial exposure to the stimuli. Independent component analysis of the brain data revealed five task-related sub-networks. Unlike prior statistical learning studies of word segmentation, this morphological learning task robustly activated the inferior frontal gyrus during the learning period. This region was represented in multiple independent components, suggesting it functions as a network hub for this type of learning. Moreover, the results suggest that subnetworks activated by statistical learning are driven by the nature of the input, rather than reflecting a general statistical learning system.

  1. Differential theory of learning for efficient neural network pattern recognition

    Science.gov (United States)

    Hampshire, John B., II; Vijaya Kumar, Bhagavatula

    1993-09-01

We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.

  2. Biologically based neural circuit modelling for the study of fear learning and extinction

    Science.gov (United States)

    Nair, Satish S.; Paré, Denis; Vicentic, Aleksandra

    2016-11-01

The neuronal systems that promote protective defensive behaviours have been studied extensively using Pavlovian conditioning. In this paradigm, an initially neutral conditioned stimulus is paired with an aversive unconditioned stimulus, leading subjects to display behavioural signs of fear. Decades of research into the neural bases of this simple behavioural paradigm uncovered that the amygdala, a complex structure comprised of several interconnected nuclei, is an essential part of the neural circuits required for the acquisition, consolidation and expression of fear memory. However, emerging evidence from the confluence of electrophysiological, tract tracing, imaging, molecular, optogenetic and chemogenetic methodologies reveals that fear learning is mediated by multiple connections between several amygdala nuclei and their distributed targets, dynamical changes in plasticity in local circuit elements, as well as neuromodulatory mechanisms that promote synaptic plasticity. To uncover these complex relations and analyse multi-modal data sets acquired from these studies, we argue that biologically realistic computational modelling, in conjunction with experiments, offers an opportunity to advance our understanding of the neural circuit mechanisms of fear learning and to address how their dysfunction may lead to maladaptive fear responses in mental disorders.

  3. Media Multitasking and Cognitive, Psychological, Neural, and Learning Differences.

    Science.gov (United States)

    Uncapher, Melina R; Lin, Lin; Rosen, Larry D; Kirkorian, Heather L; Baron, Naomi S; Bailey, Kira; Cantor, Joanne; Strayer, David L; Parsons, Thomas D; Wagner, Anthony D

    2017-11-01

    American youth spend more time with media than any other waking activity: an average of 7.5 hours per day, every day. On average, 29% of that time is spent juggling multiple media streams simultaneously (ie, media multitasking). This phenomenon is not limited to American youth but is paralleled across the globe. Given that a large number of media multitaskers (MMTs) are children and young adults whose brains are still developing, there is great urgency to understand the neurocognitive profiles of MMTs. It is critical to understand the relation between the relevant cognitive domains and underlying neural structure and function. Of equal importance is understanding the types of information processing that are necessary in 21st century learning environments. The present review surveys the growing body of evidence demonstrating that heavy MMTs show differences in cognition (eg, poorer memory), psychosocial behavior (eg, increased impulsivity), and neural structure (eg, reduced volume in anterior cingulate cortex). Furthermore, research indicates that multitasking with media during learning (in class or at home) can negatively affect academic outcomes. Until the direction of causality is understood (whether media multitasking causes such behavioral and neural differences or whether individuals with such differences tend to multitask with media more often), the data suggest that engagement with concurrent media streams should be thoughtfully considered. Findings from such research promise to inform policy and practice on an increasingly urgent societal issue while significantly advancing our understanding of the intersections between cognitive, psychosocial, neural, and academic factors. Copyright © 2017 by the American Academy of Pediatrics.

  4. Bio-Inspired Neural Model for Learning Dynamic Models

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  5. Maximizing Learning Strategies to Promote Learner Autonomy

    Directory of Open Access Journals (Sweden)

    Junaidi Mistar

    2001-01-01

    Full Text Available Learning a new language is ultimately about being able to communicate with it. Encouraging a sense of responsibility on the part of the learners is crucial for training them to be proficient communicators. As such, understanding the strategies that they employ in acquiring language skills is important for arriving at ideas of how to promote learner autonomy. Research recently conducted with three different groups of learners of English at the tertiary education level in Malang indicated that they used metacognitive and social strategies at a high frequency, while memory, cognitive, compensation, and affective strategies were exercised at a medium frequency. This finding implies that the learners have acquired some degree of autonomy, because metacognitive strategies require them to independently make plans for their learning activities and evaluate their progress, and social strategies require them to independently enhance communicative interactions with other people. Further actions are then to be taken to increase their learning autonomy, namely by intensifying the practice and use of the other four strategy categories, which are not yet applied intensively.

  6. Neural correlates of learning to attend

    Directory of Open Access Journals (Sweden)

    Todd A Kelley

    2010-11-01

    Full Text Available Recent work has shown that training can improve attentional focus. Little is known, however, about how training in attention and multitasking affects the brain. We used functional magnetic resonance imaging (fMRI) to measure changes in cortical responses to distracting stimuli during training on a visual categorization task. Training led to a reduction in behavioural distraction effects, and these improvements in performance generalized to untrained conditions. Although large regions of early visual and posterior parietal cortices responded to the presence of distractors, these regions did not exhibit significant changes in their response following training. In contrast, middle frontal gyrus did exhibit decreased distractor-related responses with practice, showing the same trend as behaviour for previously observed distractor locations. However, the neural response in this region diverged from behaviour for novel distractor locations, showing greater activity. We conclude that training did not change the robustness of the initial sensory response, but led to increased efficiency in late-stage filtering in the trained conditions.

  7. Self-teaching neural network learns difficult reactor control problem

    International Nuclear Information System (INIS)

    Jouse, W.C.

    1989-01-01

    A self-teaching neural network used as an adaptive controller quickly learns to control an unstable reactor configuration. The network models the behavior of a human operator. It is trained by allowing it to operate the reactivity control impulsively. It is punished whenever either the power or the fuel temperature strays outside technical limits. Using a simple paradigm, the network constructs an internal representation of the punishment and of the reactor system. The reactor is constrained to small power orbits.

  8. Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks

    OpenAIRE

    Shen, Li; Lin, Zhouchen; Huang, Qingming

    2015-01-01

    Learning deeper convolutional neural networks has become a trend in recent years. However, much empirical evidence suggests that performance improvement cannot be gained by simply stacking more layers. In this paper, we consider the issue from an information-theoretical perspective, and propose a novel method, Relay Backpropagation, that encourages the propagation of effective information through the network during the training stage. By virtue of the method, we achieved the first place in ILSVRC 2015...

  9. Neural network representation and learning of mappings and their derivatives

    Science.gov (United States)

    White, Halbert; Hornik, Kurt; Stinchcombe, Maxwell; Gallant, A. Ronald

    1991-01-01

    Discussed here are recent theorems proving that artificial neural networks are capable of approximating an arbitrary mapping and its derivatives as accurately as desired. This fact forms the basis for further results establishing the learnability of the desired approximations, using results from non-parametric statistics. These results have potential applications in robotics, chaotic dynamics, control, and sensitivity analysis. An example involving learning the transfer function and its derivatives for a chaotic map is discussed.
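The representation of a mapping together with its derivative can be made concrete with a one-hidden-layer tanh network: the same weights that define the network output also give its derivative in closed form. This is a toy sketch with arbitrary parameter values, not the constructions used in the theorems above:

```python
import math

# y(x) = sum_i v_i * tanh(w_i * x + b_i): a one-hidden-layer network.
def net(x, params):
    return sum(v * math.tanh(w * x + b) for w, b, v in params)

# The same weights yield dy/dx in closed form, so a network that has
# learned a mapping simultaneously represents its derivative.
def net_deriv(x, params):
    return sum(v * w * (1.0 - math.tanh(w * x + b) ** 2) for w, b, v in params)
```

A central finite difference on `net` agrees with `net_deriv` to high precision, which is the property the approximation theorems extend to learned networks.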

  10. Self-learning Monte Carlo with deep neural networks

    Science.gov (United States)

    Shen, Huitao; Liu, Junwei; Fu, Liang

    2018-05-01

    The self-learning Monte Carlo (SLMC) method is a general algorithm to speed up MC simulations. Its efficiency has been demonstrated in various systems by introducing an effective model to propose global moves in the configuration space. In this paper, we show that deep neural networks can be naturally incorporated into SLMC, and without any prior knowledge can learn the original model accurately and efficiently. Demonstrating the method on quantum impurity models, we reduce the complexity of a local update from O(β²) in the Hirsch-Fye algorithm to O(β ln β), a significant speedup especially for systems at low temperatures.
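The SLMC idea can be sketched in miniature: a cheap surrogate energy drives a short inner chain of proposals, and a single Metropolis correction evaluated on the exact energy restores detailed balance. This is a generic scalar toy, not the paper's quantum impurity solver; all names and parameter choices are illustrative:

```python
import math
import random

def slmc_step(x, energy, surrogate, inner_steps=20, step=0.5, rng=random):
    """One self-learning MC move: run a short Metropolis chain on the
    cheap surrogate, then accept its endpoint with a correction on the
    exact energy so the chain samples exp(-energy(x))."""
    y = x
    for _ in range(inner_steps):
        z = y + rng.uniform(-step, step)
        if rng.random() < math.exp(min(0.0, surrogate(y) - surrogate(z))):
            y = z
    # Correction factor: exact-model ratio divided by surrogate ratio.
    log_acc = (energy(x) - energy(y)) - (surrogate(x) - surrogate(y))
    return y if math.log(rng.random() + 1e-300) < log_acc else x
```

When the surrogate matches the exact model the correction always accepts, and the cost per effective sample is dominated by the cheap inner chain — the source of the speedup described above.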

  11. Asymmetric Variate Generation via a Parameterless Dual Neural Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Simone Fiori

    2008-01-01

    Full Text Available In a previous work (S. Fiori, 2006), we proposed a random number generator based on a tunable non-linear neural system, whose learning rule is designed on the basis of a cardinal equation from statistics and whose implementation is based on look-up tables (LUTs). The aim of the present manuscript is to improve the above-mentioned random number generation method by changing the learning principle, while retaining the efficient LUT-based implementation. The new method proposed here proves easier to implement and relaxes some previous limitations.

  12. Learning speaker-specific characteristics with a deep neural architecture.

    Science.gov (United States)

    Chen, Ke; Salman, Ahmad

    2011-11-01

    Speech signals convey various yet mixed information ranging from linguistic to speaker-specific information. However, most acoustic representations characterize all these kinds of information as a whole, which could hinder either a speech or a speaker recognition (SR) system from producing better performance. In this paper, we propose a novel deep neural architecture (DNA) especially for learning speaker-specific characteristics from mel-frequency cepstral coefficients, an acoustic representation commonly used in both speech recognition and SR, which results in a speaker-specific overcomplete representation. In order to learn intrinsic speaker-specific characteristics, we formulate an objective function consisting of contrastive losses, in terms of speaker similarity/dissimilarity, and data reconstruction losses used as regularization to reduce the interference of non-speaker-related information. Moreover, we employ a hybrid learning strategy for learning the parameters of the deep neural networks: local yet greedy layerwise unsupervised pretraining for initialization and global supervised learning for the ultimate discriminative goal. With four Linguistic Data Consortium (LDC) benchmarks and two non-English corpora, we demonstrate that our overcomplete representation is robust in characterizing various speakers, no matter whether their utterances have been used in training our DNA, and highly insensitive to text and languages spoken. Extensive comparative studies suggest that our approach yields favorable results in speaker verification and segmentation. Finally, we discuss several issues concerning our proposed approach.
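The contrastive part of such an objective is commonly written in the margin-based form below: pairs from the same speaker are pulled together, pairs from different speakers are pushed at least a margin apart. This is a generic textbook form for illustration, not necessarily the exact losses used in the paper:

```python
def contrastive_loss(d, same_speaker, margin=1.0):
    """Margin-based contrastive loss on a representation distance d:
    minimize d^2 for same-speaker pairs; for different-speaker pairs,
    penalize only when d falls inside the margin."""
    if same_speaker:
        return d * d
    gap = margin - d
    return gap * gap if gap > 0 else 0.0
```

Summing this over pairs, plus a reconstruction term as regularizer, gives the general shape of objective the abstract describes.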

  13. Neural Basis of Reinforcement Learning and Decision Making

    Science.gov (United States)

    Lee, Daeyeol; Seo, Hyojung; Jung, Min Whan

    2012-01-01

    Reinforcement learning is an adaptive process in which an animal utilizes its previous experience to improve the outcomes of future choices. Computational theories of reinforcement learning play a central role in the newly emerging areas of neuroeconomics and decision neuroscience. In this framework, actions are chosen according to their value functions, which describe how much future reward is expected from each action. Value functions can be adjusted not only through reward and penalty, but also by the animal’s knowledge of its current environment. Studies have revealed that a large proportion of the brain is involved in representing and updating value functions and using them to choose an action. However, how the nature of a behavioral task affects the neural mechanisms of reinforcement learning remains incompletely understood. Future studies should uncover the principles by which different computational elements of reinforcement learning are dynamically coordinated across the entire brain. PMID:22462543
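The value-function update at the heart of this framework can be sketched with tabular Q-learning on a hypothetical two-state, two-action task (the task and all parameter values are illustrative, not from the review):

```python
import random

# Toy MDP: taking action 1 in state 0 moves to state 1 with reward +1;
# every other transition returns to state 0 with reward 0.
def step(s, a):
    if s == 0 and a == 1:
        return 1, 1.0
    return 0, 0.0

def q_learning(steps=3000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]  # Q[state][action]: value functions
    s = 0
    for _ in range(steps):
        # epsilon-greedy choice between exploring and exploiting values
        a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # move Q(s,a) toward reward plus discounted best next-state value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
    return Q
```

After training, the action values in state 0 reflect the expected discounted future reward, so the rewarded action dominates — the "choose according to value functions" rule described above.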

  14. Outsmarting neural networks: an alternative paradigm for machine learning

    Energy Technology Data Exchange (ETDEWEB)

    Protopopescu, V.; Rao, N.S.V.

    1996-10-01

    We address three problems in machine learning, namely: (i) function learning, (ii) regression estimation, and (iii) sensor fusion, in the Probably and Approximately Correct (PAC) framework. We show that, under certain conditions, the three problems above can be reduced to regression estimation. The latter is usually tackled with artificial neural networks (ANNs) that satisfy the PAC criteria but have high computational complexity. We propose several computationally efficient PAC alternatives to ANNs for solving the regression estimation problem, thereby also providing efficient PAC solutions to the function learning and sensor fusion problems. The approach is based on cross-fertilizing concepts and methods from statistical estimation, nonlinear algorithms, and the theory of computational complexity, and is designed as part of a new, coherent paradigm for machine learning.

  15. Stochastic sensitivity analysis and Langevin simulation for neural network learning

    International Nuclear Information System (INIS)

    Koda, Masato

    1997-01-01

    A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method
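The key idea — extracting learning information from the correlation between injected noise and the performance functional, with no adjoint/back-propagation pass — can be sketched as a weight-perturbation gradient estimate. This is a simplified stand-in for the paper's Langevin formulation; the parameter values are illustrative:

```python
import random

def noise_correlation_grad(loss, w, sigma=0.05, samples=1000, seed=0):
    """Estimate the gradient of loss at w from forward evaluations only:
    g_i ~ E[(loss(w + xi) - loss(w)) * xi_i] / sigma^2 for Gaussian
    noise xi. No adjoint equations are solved."""
    rng = random.Random(seed)
    base = loss(w)
    g = [0.0] * len(w)
    for _ in range(samples):
        xi = [rng.gauss(0.0, sigma) for _ in w]
        dj = loss([wi + x for wi, x in zip(w, xi)]) - base
        # correlate the performance change with each noise component
        for i, x in enumerate(xi):
            g[i] += dj * x / (sigma * sigma * samples)
    return g
```

On a quadratic loss the estimate converges to the true gradient, illustrating why such noise-correlation schemes can replace back-propagation at the cost of extra forward passes.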

  16. Image Classification, Deep Learning and Convolutional Neural Networks : A Comparative Study of Machine Learning Frameworks

    OpenAIRE

    Airola, Rasmus; Hager, Kristoffer

    2017-01-01

    The use of machine learning and specifically neural networks is a growing trend in software development, and has grown immensely in the last couple of years in the light of an increasing need to handle big data and large information flows. Machine learning has a broad area of application, such as human-computer interaction, predicting stock prices, real-time translation, and self driving vehicles. Large companies such as Microsoft and Google have already implemented machine learning in some o...

  17. managing tertiary institutions for the promotion of lifelong learning

    African Journals Online (AJOL)

    Global Journal

    KEYWORDS: Managing, tertiary institutions, promotion, lifelong learning. INTRODUCTION ... science, medicine and technology towards the ... different environments, whether formal, informal ... schools considering that each day gives birth to.

  18. Investigation of Using Analytics in Promoting Mobile Learning Support

    Science.gov (United States)

    Visali, Videhi; Swami, Niraj

    2013-01-01

    Learning analytics can promote pedagogically informed use of learner data, which can steer the progress of technology mediated learning across several learning contexts. This paper presents the application of analytics to a mobile learning solution and demonstrates how a pedagogical sense was inferred from the data. Further, this inference was…

  19. Dopamine prediction errors in reward learning and addiction: from theory to neural circuitry

    Science.gov (United States)

    Keiflin, Ronald; Janak, Patricia H.

    2015-01-01

    Midbrain dopamine (DA) neurons are proposed to signal reward prediction error (RPE), a fundamental parameter in associative learning models. This RPE hypothesis provides a compelling theoretical framework for understanding DA function in reward learning and addiction. New studies support a causal role for DA-mediated RPE activity in promoting learning about natural reward; however, this question has not been explicitly tested in the context of drug addiction. In this review, we integrate theoretical models with experimental findings on the activity of DA systems, and on the causal role of specific neuronal projections and cell types, to provide a circuit-based framework for probing DA-RPE function in addiction. By examining error-encoding DA neurons in the neural network in which they are embedded, hypotheses regarding circuit-level adaptations that possibly contribute to pathological error-signaling and addiction can be formulated and tested. PMID:26494275

  20. Dopamine Prediction Errors in Reward Learning and Addiction: From Theory to Neural Circuitry.

    Science.gov (United States)

    Keiflin, Ronald; Janak, Patricia H

    2015-10-21

    Midbrain dopamine (DA) neurons are proposed to signal reward prediction error (RPE), a fundamental parameter in associative learning models. This RPE hypothesis provides a compelling theoretical framework for understanding DA function in reward learning and addiction. New studies support a causal role for DA-mediated RPE activity in promoting learning about natural reward; however, this question has not been explicitly tested in the context of drug addiction. In this review, we integrate theoretical models with experimental findings on the activity of DA systems, and on the causal role of specific neuronal projections and cell types, to provide a circuit-based framework for probing DA-RPE function in addiction. By examining error-encoding DA neurons in the neural network in which they are embedded, hypotheses regarding circuit-level adaptations that possibly contribute to pathological error signaling and addiction can be formulated and tested. Copyright © 2015 Elsevier Inc. All rights reserved.
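The RPE at the core of these associative learning models is the temporal-difference error; a minimal tabular TD(0) form (learning rate and discount values here are illustrative) is:

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """Reward-prediction-error update in the TD(0) form often used to
    model phasic dopamine: delta = r + gamma*V(s') - V(s)."""
    delta = r + gamma * V[s_next] - V[s]   # the prediction error signal
    V[s] += alpha * delta                  # nudge the value toward the target
    return delta
```

An unexpected reward yields a large positive delta; once the cue fully predicts the reward, delta at reward delivery shrinks toward zero — the signature behavior attributed to DA neurons above.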

  1. Using Cooperative Structures to Promote Deep Learning

    Science.gov (United States)

    Millis, Barbara J.

    2014-01-01

    The author explores concrete ways to help students learn more and have fun doing it while they support each other's learning. The article specifically shows the relationships between cooperative learning and deep learning. Readers will become familiar with the tenets of cooperative learning and its power to enhance learning--even more so when…

  2. Do Collaborative Exams Really Promote Learning?

    Science.gov (United States)

    Miller, Scott; James, C. Renee

    2018-01-01

    Collaborative, two-stage exams are becoming more popular in physics and astronomy courses, and their supposed benefits in terms of collaborative learning have been reported in the field of physics. In a collaborative, two-stage exam, students first complete an exam individually. Once that portion of the exam is over, students then retake all or part of the exam within a group, where they are able to discuss the questions with their peers and arrive at a common answer. While there are a number of papers that discuss the purported benefits of this method from a collaborative point of view, few, if any, discuss the actual benefits in terms of student learning. One paper found that when students were presented with previous exam questions a few weeks later, they performed better on questions covered previously in the group portion of the exam compared to similar questions which were tested but not part of the group portion. But, when students were retested on exam questions which were administered earlier, roughly six to seven weeks beforehand, no difference was found in their performance on the two sets of questions. We present preliminary findings comparing student performance levels on multiple sets of exam questions administered in an introductory astronomy course that uses two-stage exams. Questions were administered first in an exam during the course of the semester, then again during a final exam. During the semester exams, one set of questions was also contained within the group portion of the exam, while questions similar in concept and difficulty were not. Student performance on these two sets of questions is compared to evaluate the usefulness of collaborative exams in promoting learning.

  3. A learning algorithm for oscillatory cellular neural networks.

    Science.gov (United States)

    Ho, C Y.; Kurokawa, H

    1999-07-01

    We present a cellular type oscillatory neural network for temporal segregation of stationary input patterns. The model comprises an array of locally connected neural oscillators with connections limited to a 4-connected neighborhood. The architecture is reminiscent of the well-known cellular neural network that consists of local connection for feature extraction. By means of a novel learning rule and an initialization scheme, global synchronization can be accomplished without incurring any erroneous synchrony among uncorrelated objects. Each oscillator comprises two mutually coupled neurons, and neurons share a piecewise-linear activation function characteristic. The dynamics of traditional oscillatory models is simplified by using only one plastic synapse, and the overall complexity for hardware implementation is reduced. Based on the connectedness of image segments, it is shown that global synchronization and desynchronization can be achieved by means of locally connected synapses, and this opens up a tremendous application potential for the proposed architecture. Furthermore, by using special grouping synapses it is demonstrated that temporal segregation of overlapping gray-level and color segments can also be achieved. Finally, simulation results show that the learning rule proposed circumvents the problem of component mismatches, and hence facilitates a large-scale integration.

  4. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.

    Science.gov (United States)

    Walter, Florian; Röhrbein, Florian; Knoll, Alois

    2015-12-01

    The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike Von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview over selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.
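A representative neurobiological rule that such on-die implementations target is pair-based spike-timing-dependent plasticity (STDP). The exponential window below is the standard textbook form; the amplitude and time-constant values are illustrative, not taken from any specific chip:

```python
import math

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a spike pair with timing
    dt = t_post - t_pre (milliseconds): potentiate when the presynaptic
    spike precedes the postsynaptic one (dt > 0), depress otherwise."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Causal pairings strengthen the synapse and anti-causal pairings weaken it, with the effect decaying as the spikes move apart in time — the locality in both space and time that makes such rules attractive for neuromorphic hardware.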

  5. Neural Correlates of Threat Perception: Neural Equivalence of Conspecific and Heterospecific Mobbing Calls Is Learned

    Science.gov (United States)

    Avey, Marc T.; Hoeschele, Marisa; Moscicki, Michele K.; Bloomfield, Laurie L.; Sturdy, Christopher B.

    2011-01-01

    Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise [1]–[2]. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators [3]. Mobbing calls produced in response to smaller, higher-threat predators contain more “D” notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators [4]. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls and heterospecific predator calls of the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than to corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas and that threat can be conveyed by different species signals and that these signals must be learned. PMID:21909363

  6. Neural correlates of threat perception: neural equivalence of conspecific and heterospecific mobbing calls is learned.

    Science.gov (United States)

    Avey, Marc T; Hoeschele, Marisa; Moscicki, Michele K; Bloomfield, Laurie L; Sturdy, Christopher B

    2011-01-01

    Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators. Mobbing calls produced in response to smaller, higher-threat predators contain more "D" notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls and heterospecific predator calls of the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than to corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas and that threat can be conveyed by different species signals and that these signals must be learned.

  7. Neural correlates of threat perception: neural equivalence of conspecific and heterospecific mobbing calls is learned.

    Directory of Open Access Journals (Sweden)

    Marc T Avey

    Full Text Available Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators. Mobbing calls produced in response to smaller, higher-threat predators contain more "D" notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls and heterospecific predator calls of the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than to corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas and that threat can be conveyed by different species signals and that these signals must be learned.

  8. Deep learning for steganalysis via convolutional neural networks

    Science.gov (United States)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis to learn features automatically via deep learning models. We propose a novel customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.
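The first convolutional layer of such a model learns filters that play the role of the high-pass residual kernels in handcrafted steganalysis features. The pure-Python "valid" convolution below (written in the correlation form used by most deep-learning frameworks) illustrates the operation itself, not the paper's actual architecture:

```python
def conv2d(img, kernel):
    """'Valid' 2-D convolution (correlation form): slide the kernel over
    the image and sum elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(img[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out
```

A high-pass kernel such as the Laplacian suppresses smooth image content and leaves the noise-like residual in which embedding artifacts live, which is why learned first-layer filters in steganalysis CNNs tend toward this shape.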

  9. A stochastic learning algorithm for layered neural networks

    International Nuclear Information System (INIS)

    Bartlett, E.B.; Uhrig, R.E.

    1992-01-01

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architectures before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.
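    The fixed-Gaussian baseline that the OPDF method generalizes can be sketched in a few lines: draw a Gaussian step, keep it only if the training error drops. The network size, step scale, and XOR task below are illustrative choices, not from the paper, and the sketch omits the paper's adaptive PDF and dynamic node architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR with a 2-2-1 network, 9 parameters in total.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    W1 = w[:4].reshape(2, 2)   # input -> hidden weights
    b1 = w[4:6]                # hidden biases
    W2 = w[6:8]                # hidden -> output weights
    b2 = w[8]                  # output bias
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((forward(w, X) - y) ** 2)

# Plain random search: sample a Gaussian perturbation of the weights and
# accept it only when the training error improves. (The OPDF variant
# additionally adapts the sampling distribution during the search.)
w = rng.normal(size=9)
e0 = mse(w)           # error of the initial random weights
best = e0
sigma = 0.5           # fixed step scale, in place of an adapted PDF
for _ in range(5000):
    cand = w + rng.normal(scale=sigma, size=9)
    err = mse(cand)
    if err < best:
        w, best = cand, err

print(best)  # final training error, never worse than e0 by construction
```

Because only improving steps are accepted, the error is monotonically non-increasing, which is the property the adaptive-PDF machinery then tries to speed up.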

  10. Relabeling exchange method (REM) for learning in neural networks

    Science.gov (United States)

    Wu, Wen; Mammone, Richard J.

    1994-02-01

    The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore, we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation of the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.

  11. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.

    Science.gov (United States)

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, rules that have a theoretical basis and yet can be considered biologically relevant are still lacking. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.
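    The distinction between an instantaneous and a filtered error signal can be illustrated schematically. The exact INST/FILT rules are derived from a spike-response neuron model; the snippet below only shows the error-filtering idea itself, with made-up spike times and an assumed exponential filter time constant.

```python
import numpy as np

# Schematic comparison: an instantaneous spike-train error (target minus
# actual spikes, INST-like) versus the same signal passed through an
# exponential low-pass filter (FILT-like). Spike times and tau are
# illustrative, not taken from the paper.
dt = 1.0               # time step (ms)
tau = 10.0             # assumed filter time constant (ms)
alpha = np.exp(-dt / tau)

T = 100
target = np.zeros(T); target[[20, 60]] = 1.0   # desired spike times
actual = np.zeros(T); actual[[25, 60]] = 1.0   # observed spike times

err_inst = target - actual         # instantaneous error signal
err_filt = np.zeros(T)             # exponentially filtered error signal
trace = 0.0
for t in range(T):
    trace = alpha * trace + (1 - alpha) * err_inst[t]
    err_filt[t] = trace

# The filtered signal changes far more smoothly from step to step,
# which is what yields smoother weight updates in a FILT-style rule.
print(np.max(np.abs(np.diff(err_inst))), np.max(np.abs(np.diff(err_filt))))
```

A weight update proportional to `err_filt` spreads the correction over time around each spike-timing mismatch, rather than applying it all at the instant of the mismatch.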

  12. Comparison between extreme learning machine and wavelet neural networks in data classification

    Science.gov (United States)

    Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2017-03-01

    Extreme Learning Machine is a well-known learning algorithm in the field of machine learning. It is a feed-forward neural network with a single hidden layer, and an extremely fast learning algorithm with good generalization performance. In this paper, we aim to compare the Extreme Learning Machine with wavelet neural networks, a widely used approach. We used six benchmark data sets to evaluate each technique: Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition, and Iris Plant. Experimental results have shown that both the Extreme Learning Machine and wavelet neural networks reach good results.
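    The speed of the Extreme Learning Machine comes from its training procedure: the hidden layer's weights are random and fixed, and only the output weights are fit, in closed form, by least squares. A minimal sketch (toy circle-classification data and all sizes are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal ELM: random, untrained hidden layer; output weights solved by
# least squares in one shot, with no iterative backpropagation.
def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy binary problem: points inside vs. outside a circle.
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float)

model = elm_fit(X[:300], y[:300])
pred = (elm_predict(model, X[300:]) > 0.5).astype(float)
acc = np.mean(pred == y[300:])
print(acc)  # held-out accuracy on the toy task
```

Because training reduces to one `lstsq` call, fitting is orders of magnitude faster than gradient-based training of the same architecture, at the cost of leaving the hidden representation unoptimized.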

  13. Neural regeneration protein is a novel chemoattractive and neuronal survival-promoting factor

    International Nuclear Information System (INIS)

    Gorba, Thorsten; Bradoo, Privahini; Antonic, Ana; Marvin, Keith; Liu, Dong-Xu; Lobie, Peter E.; Reymann, Klaus G.; Gluckman, Peter D.; Sieg, Frank

    2006-01-01

    Neurogenesis and neuronal migration are prerequisites for the development of the central nervous system. We have identified a novel rodent gene encoding a neural regeneration protein (NRP) with an activity spectrum similar to the chemokine stromal-derived factor (SDF)-1, but with much greater potency. The Nrp gene is encoded as a forward frameshift to the hypothetical alkylated DNA repair protein AlkB. The predicted protein sequence of NRP contains domains with homology to survival-promoting peptide (SPP) and the trefoil protein TFF-1. The Nrp gene is first expressed in neural stem cells and expression continues in glial lineages. Recombinant NRP and NRP-derived peptides possess biological activities including induction of neural migration and proliferation, promotion of neuronal survival, enhancement of neurite outgrowth and promotion of neuronal differentiation from neural stem cells. NRP exerts its effect on neuronal survival by phosphorylation of the ERK1/2 and Akt kinases, whereas NRP stimulation of neural migration depends solely on p44/42 MAP kinase activity. Taken together, the expression profile of Nrp, the existence in its predicted protein structure of domains with similarities to known neuroprotective and migration-inducing factors, and the high potency of NRP-derived synthetic peptides acting at femtomolar concentrations suggest it to be a novel gene of relevance in cellular and developmental neurobiology.

  14. Natural lecithin promotes neural network complexity and activity

    Science.gov (United States)

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-01-01

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called “essential” fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications. PMID:27228907

  15. Natural lecithin promotes neural network complexity and activity.

    Science.gov (United States)

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-05-27

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called "essential" fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications.

  16. Assessing Preschool Teachers' Practices to Promote Self-Regulated Learning

    Science.gov (United States)

    Adagideli, Fahretdin Hasan; Saraç, Seda; Ader, Engin

    2015-01-01

    Recent research reveals that in preschool years, through pedagogical interventions, preschool teachers can and should promote self-regulated learning. The main aim of this study is to develop a self-report instrument to assess preschool teachers' practices to promote self-regulated learning. A pool of 50 items was recruited through literature…

  17. Neural Correlates of Success and Failure Signals During Neurofeedback Learning.

    Science.gov (United States)

    Radua, Joaquim; Stoica, Teodora; Scheinost, Dustin; Pittenger, Christopher; Hampson, Michelle

    2018-05-15

    Feedback-driven learning, observed across phylogeny and of clear adaptive value, is frequently operationalized in simple operant conditioning paradigms, but it can be much more complex, driven by abstract representations of success and failure. This study investigates the neural processes involved in processing success and failure during feedback learning, which are not well understood. Data analyzed were acquired during a multisession neurofeedback experiment in which ten participants were presented with, and instructed to modulate, the activity of their orbitofrontal cortex with the aim of decreasing their anxiety. We assessed the regional blood-oxygenation-level-dependent response to the individualized neurofeedback signals of success and failure across twelve functional runs acquired in two different magnetic resonance sessions in each of ten individuals. Neurofeedback signals of failure correlated early during learning with deactivation in the precuneus/posterior cingulate and neurofeedback signals of success correlated later during learning with deactivation in the medial prefrontal/anterior cingulate cortex. The intensity of the latter deactivations predicted the efficacy of the neurofeedback intervention in the reduction of anxiety. These findings indicate a role for regulation of the default mode network during feedback learning, and suggest a higher sensitivity to signals of failure during the early feedback learning and to signals of success subsequently. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    Science.gov (United States)

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component into their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, performed often without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back propagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.

  19. Noise-driven manifestation of learning in mature neural networks

    International Nuclear Information System (INIS)

    Monterola, Christopher; Saloma, Caesar

    2002-01-01

    We show that the generalization capability of a mature thresholding neural network to process above-threshold disturbances in a noise-free environment is extended to subthreshold disturbances by ambient noise without retraining. The ability to benefit from noise is intrinsic and does not have to be learned separately. Nonlinear dependence of sensitivity with noise strength is significantly narrower than in individual threshold systems. Noise has a minimal effect on network performance for above-threshold signals. We resolve two seemingly contradictory responses of trained networks to noise--their ability to benefit from its presence and their robustness against noisy strong disturbances

  20. Supervised learning of probability distributions by neural networks

    Science.gov (United States)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
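    The key modification described above, reading a sigmoid output as a probability and following the gradient of the log-likelihood rather than the squared error, has a notably simple consequence: for a single sigmoid unit the gradient of the negative log-likelihood with respect to the weights is just (p - t)·x, with no sigmoid-derivative factor. A small numerical check of that identity (the specific input, weights, and target are arbitrary illustrations):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
x = rng.normal(size=3)   # one input pattern
w = rng.normal(size=3)   # weights of a single sigmoid unit
t = 1.0                  # target, interpreted as a probability

# Analytic gradient of the negative log-likelihood: (p - t) * x.
p = sigmoid(w @ x)
grad_analytic = (p - t) * x

# Numerical gradient of the same loss, by central differences.
def nll(w):
    p = sigmoid(w @ x)
    return -(t * np.log(p) + (1 - t) * np.log(1 - p))

eps = 1e-6
grad_numeric = np.array([
    (nll(w + eps * e) - nll(w - eps * e)) / (2 * eps)
    for e in np.eye(3)
])
print(np.max(np.abs(grad_analytic - grad_numeric)))  # should be tiny
```

The absence of the sigmoid-derivative factor is one reason likelihood-based training avoids the vanishing updates that squared error produces for saturated units.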

  1. Towards Ways to Promote Interaction in Digital Learning Spaces

    OpenAIRE

    Olsson , Hanna ,

    2012-01-01

    Part 7: Doctoral Student Papers; International audience; Social learning is dependent on social interactions. I am exploring ways to promote interaction in Digital Learning Spaces. As theoretical framework I use the types of interaction between learner, instructor and content. That learners feel isolated and lonely in DLSs is a problem which comes at high cost for social learning. My aim is to promote social interaction by offering the edentity: a system for making participants visible to eac...

  2. A novel Bayesian learning method for information aggregation in modular neural networks

    DEFF Research Database (Denmark)

    Wang, Pan; Xu, Lida; Zhou, Shang-Ming

    2010-01-01

    Modular neural network is a popular neural network model which has many successful applications. In this paper, a sequential Bayesian learning (SBL) is proposed for modular neural networks aiming at efficiently aggregating the outputs of members of the ensemble. The experimental results on eight benchmark problems have demonstrated that the proposed method can perform information aggregation efficiently in data modeling.

  3. Post-learning hippocampal dynamics promote preferential retention of rewarding events

    Science.gov (United States)

    Gruber, Matthias J.; Ritchey, Maureen; Wang, Shao-Fang; Doss, Manoj K.; Ranganath, Charan

    2016-01-01

    Reward motivation is known to modulate memory encoding, and this effect depends on interactions between the substantia nigra/ventral tegmental area complex (SN/VTA) and the hippocampus. It is unknown, however, whether these interactions influence offline neural activity in the human brain that is thought to promote memory consolidation. Here, we used functional magnetic resonance imaging (fMRI) to test the effect of reward motivation on post-learning neural dynamics and subsequent memory for objects that were learned in high- or low-reward motivation contexts. We found that post-learning increases in resting-state functional connectivity between the SN/VTA and hippocampus predicted preferential retention of objects that were learned in high-reward contexts. In addition, multivariate pattern classification revealed that hippocampal representations of high-reward contexts were preferentially reactivated during post-learning rest, and the number of hippocampal reactivations was predictive of preferential retention of items learned in high-reward contexts. These findings indicate that reward motivation alters offline post-learning dynamics between the SN/VTA and hippocampus, providing novel evidence for a potential mechanism by which reward could influence memory consolidation. PMID:26875624

  4. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model’s frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop

  5. Learning Orthographic Structure With Sequential Generative Neural Networks.

    Science.gov (United States)

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.
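    One of the non-connectionist baselines mentioned above, the n-gram model, is easy to sketch: a bigram model of letter sequences can both score a continuation and autonomously generate pseudowords. The tiny training word list below is an illustrative stand-in for the monosyllable corpus used in the study.

```python
import random
from collections import defaultdict

random.seed(0)

# Bigram letter model: count letter-to-letter transitions (with word
# boundary symbols), then sample from them to generate pseudowords.
corpus = ["cat", "can", "cap", "bat", "ban", "bad", "mat", "man", "map"]

counts = defaultdict(lambda: defaultdict(int))
for word in corpus:
    seq = "^" + word + "$"              # ^ = word start, $ = word end
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def next_letter(ctx):
    # Sample the next symbol in proportion to its bigram count.
    options = counts[ctx]
    letters = list(options)
    weights = [options[l] for l in letters]
    return random.choices(letters, weights=weights)[0]

def generate():
    out, ctx = "", "^"
    while True:
        ctx = next_letter(ctx)
        if ctx == "$":
            return out
        out += ctx

pseudoword = generate()
print(pseudoword)  # e.g. a novel CVC string respecting trained bigrams
```

A sequential RBM plays the same generative role but with distributed internal context rather than a single preceding symbol, which is what lets it capture higher-order graphotactic constraints a bigram model misses.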

  6. Construction of Neural Networks for Realization of Localized Deep Learning

    Directory of Open Access Journals (Sweden)

    Charles K. Chui

    2018-05-01

    Full Text Available The subject of deep learning has recently attracted users of machine learning from various disciplines, including: medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines. However, the theoretical development of deep learning is still in its infancy. The objective of this paper is to introduce a deep neural network (also called deep-net) approach to localized manifold learning, with each hidden layer endowed with a specific learning task. For the purpose of illustration, we focus only on deep-nets with three hidden layers, with the first layer for dimensionality reduction, the second layer for bias reduction, and the third layer for variance reduction. A feedback component is also designed to deal with outliers. The main theoretical result in this paper is the order O(m^(-2s/(2s+d))) of approximation of the regression function with regularity s, in terms of the number m of sample points, where the (unknown) manifold dimension d replaces the dimension D of the sampling (Euclidean) space for shallow nets.

  7. Transfer Learning with Convolutional Neural Networks for SAR Ship Recognition

    Science.gov (United States)

    Zhang, Di; Liu, Jia; Heng, Wang; Ren, Kaijun; Song, Junqiang

    2018-03-01

    Ship recognition is the backbone of marine surveillance systems. Recent deep learning methods, e.g. Convolutional Neural Networks (CNNs), have shown high performance for optical images. Learning CNNs, however, requires a large number of annotated samples to estimate their numerous model parameters, which prevents their application to Synthetic Aperture Radar (SAR) images due to the limited annotated training samples. Transfer learning has been a promising technique for applications with limited data. To this end, a novel SAR ship recognition method based on CNNs with transfer learning has been developed. In this work, we first start with a CNN model that has been trained in advance on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database. Next, based on the knowledge gained from this image recognition task, we fine-tune the CNN on a new task to recognize three types of ships in the OpenSARShip database. The experimental results show that our proposed approach can significantly increase the recognition rate compared with the result of merely applying CNNs. In addition, compared to existing methods, the proposed method proves to be very competitive and can learn discriminative features directly from training data instead of requiring pre-specification or pre-selection manually.
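    The freeze-then-fine-tune recipe can be reduced to its essence on a tiny dense network: keep the feature layer fixed, as if pretrained on a source task, and retrain only the output layer on the small target set. Here the "pretrained" features are simply random and frozen, and the two-dimensional toy task stands in for the limited-sample target problem; none of the sizes or names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen feature layer, standing in for a network pretrained elsewhere.
W_feat = rng.normal(size=(2, 16))
b_feat = rng.normal(size=16)

def features(X):
    return np.tanh(X @ W_feat + b_feat)

# Small labeled target set, as in the limited-annotation SAR setting.
X = rng.uniform(-1, 1, size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fine-tuning step: only the output weights are trained, by gradient
# descent on the logistic loss; the feature weights never change.
H = features(X)
w_out = np.zeros(16)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H @ w_out)))
    w_out -= lr * H.T @ (p - y) / len(y)   # logistic-loss gradient step

acc = np.mean(((1.0 / (1.0 + np.exp(-(H @ w_out)))) > 0.5) == y)
print(acc)  # training accuracy of the retrained head
```

Training only the head shrinks the number of fitted parameters from the whole network to one layer, which is exactly why the approach tolerates small annotated sets.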

  8. A neural fuzzy controller learning by fuzzy error propagation

    Science.gov (United States)

    Nauck, Detlef; Kruse, Rudolf

    1992-01-01

    In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension to our work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its amount of error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. By this we get an unsupervised learning technique that enables a fuzzy controller to adapt to a control task by knowing just the global state and the fuzzy error.

  9. Neural dynamics of learning sound-action associations.

    Directory of Open Access Journals (Sweden)

    Adam McNamara

    Full Text Available A motor component is a pre-requisite to any communicative act, as one must inherently move to communicate. To learn to make a communicative act, the brain must be able to dynamically associate arbitrary percepts to the neural substrate underlying the pre-requisite motor activity. We aimed to investigate whether brain regions involved in complex gestures (ventral pre-motor cortex, Brodmann Area 44) were involved in mediating association between novel abstract auditory stimuli and novel gestural movements. In a functional magnetic resonance imaging (fMRI) study we asked participants to learn associations between previously unrelated novel sounds and meaningless gestures inside the scanner. We use functional connectivity analysis to eliminate the often present confound of 'strategic covert naming' when dealing with BA44 and to rule out effects of non-specific reductions in signal. Brodmann Area 44, a region incorporating Broca's region, showed strong, bilateral, negative correlation of the BOLD (blood-oxygen-level-dependent) response with learning of sound-action associations during data acquisition. The left inferior parietal lobule (l-IPL) and bilateral loci in and around visual area V5, the right orbital frontal gyrus, right hippocampus, left parahippocampus, right head of caudate, right insula and left lingual gyrus also showed decreases in BOLD response with learning. Concurrent with these decreases in BOLD response, increasing connectivity between areas of the imaged network as well as the right middle frontal gyrus with rising learning performance was revealed by a psychophysiological interaction (PPI) analysis. The increasing connectivity therefore occurs within an increasingly energy-efficient network as learning proceeds. The strongest learning-related connectivity between regions was found when analysing BA44 and l-IPL seeds.
The results clearly show that BA44 and the l-IPL are dynamically involved in linking gesture and sound and therefore provide evidence that one of

  10. Learning sequential control in a Neural Blackboard Architecture for in situ concept reasoning

    NARCIS (Netherlands)

    van der Velde, Frank; van der Velde, Frank; Besold, Tarek R.; Lamb, Luis; Serafini, Luciano; Tabor, Whitney

    2016-01-01

    Simulations are presented and discussed of learning sequential control in a Neural Blackboard Architecture (NBA) for in situ concept-based reasoning. Sequential control is learned in a reservoir network, consisting of columns with neural circuits. This allows the reservoir to control the dynamics of

  11. Spaced Learning Enhances Subsequent Recognition Memory by Reducing Neural Repetition Suppression

    Science.gov (United States)

    Xue, Gui; Mei, Leilei; Chen, Chuansheng; Lu, Zhong-Lin; Poldrack, Russell; Dong, Qi

    2011-01-01

    Spaced learning usually leads to better recognition memory as compared with massed learning, yet the underlying neural mechanisms remain elusive. One open question is whether the spacing effect is achieved by reducing neural repetition suppression. In this fMRI study, participants were scanned while intentionally memorizing 120 novel faces, half…

  12. Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies

    Science.gov (United States)

    Peng, Yefei

    2010-01-01

    An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the…

  13. Collegewide Promotion of E-Learning/Active Learning and Faculty Development

    Science.gov (United States)

    Ogawa, Nobuyuki; Shimizu, Akira

    2016-01-01

    Japanese National Institutes of Technology have revealed a plan to strongly promote e-Learning and active learning under a common schematization of education in over 50 campuses nationwide. Our e-Learning and ICT-driven education, practiced for more than fifteen years, has been highly evaluated and is playing a leading role in promoting e-Learning…

  14. Genetic deletion of Rnd3 in neural stem cells promotes proliferation via upregulation of Notch signaling.

    Science.gov (United States)

    Dong, Huimin; Lin, Xi; Li, Yuntao; Hu, Ronghua; Xu, Yang; Guo, Xiaojie; La, Qiong; Wang, Shun; Fang, Congcong; Guo, Junli; Li, Qi; Mao, Shanping; Liu, Baohui

    2017-10-31

    Rnd3, a Rho GTPase, is involved in the inhibition of actin cytoskeleton dynamics through the Rho kinase-dependent signaling pathway. We previously demonstrated that mice with genetic deletion of Rnd3 developed a markedly larger brain compared with wild-type mice. Here, we demonstrate that Rnd3 knockout mice developed an enlarged subventricular zone, and we identify a novel role for Rnd3 as an inhibitor of Notch signaling in neural stem cells. Rnd3 deficiency, both in vivo and in vitro, resulted in increased levels of the Notch intracellular domain protein. This led to enhanced Notch signaling and promotion of aberrant neural stem cell growth, thereby resulting in a larger subventricular zone and a markedly larger brain. Inhibition of Notch activity abrogated this aberrant neural stem cell growth.

  15. Neural modularity helps organisms evolve to learn new skills without forgetting old skills.

    Science.gov (United States)

    Ellefsen, Kai Olav; Mouret, Jean-Baptiste; Clune, Jeff

    2015-04-01

    A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to

  16. Neural modularity helps organisms evolve to learn new skills without forgetting old skills.

    Directory of Open Access Journals (Sweden)

    Kai Olav Ellefsen

    2015-04-01

    Full Text Available A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals

  17. Neural Modularity Helps Organisms Evolve to Learn New Skills without Forgetting Old Skills

    Science.gov (United States)

    Ellefsen, Kai Olav; Mouret, Jean-Baptiste; Clune, Jeff

    2015-01-01

    A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to
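
    The neuromodulation mechanism this record describes can be sketched in a few lines: a modulatory signal scales each connection's learning rate, so plasticity is active in one module and gated off in another. This is an illustrative toy, not the authors' implementation; the rule's exact form, rates, and sizes below are assumptions.

```python
import numpy as np

# Toy neuromodulated Hebbian rule: the modulatory signal m scales the
# per-connection learning rate, so learning can be switched on or off
# per module depending on the task (illustrative sketch only).
def neuromodulated_update(w, pre, post, m, eta=0.1):
    # dw_ij = eta * m * post_i * pre_j
    return w + eta * m * np.outer(post, pre)

w0 = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])    # presynaptic activity
post = np.array([1.0, 0.5])        # postsynaptic activity

w_on = neuromodulated_update(w0, pre, post, m=1.0)   # plasticity enabled
w_off = neuromodulated_update(w0, pre, post, m=0.0)  # plasticity gated off
```

    Setting m per module reproduces the selective switching of learning that the abstract attributes to neuromodulation.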

  18. Three-Dimensional-Bioprinted Dopamine-Based Matrix for Promoting Neural Regeneration.

    Science.gov (United States)

    Zhou, Xuan; Cui, Haitao; Nowicki, Margaret; Miao, Shida; Lee, Se-Jun; Masood, Fahed; Harris, Brent T; Zhang, Lijie Grace

    2018-03-14

    Central nerve repair and regeneration remain challenging problems worldwide, largely because of the extremely weak inherent regenerative capacity and accompanying fibrosis of native nerves. Inadequate solutions to the unmet needs for clinical therapeutics encourage the development of novel strategies to promote nerve regeneration. Recently, 3D bioprinting techniques, as one of a set of valuable tissue engineering technologies, have shown great promise toward fabricating complex and customizable artificial tissue scaffolds. Gelatin methacrylate (GelMA) possesses excellent biocompatible and biodegradable properties because it contains many arginine-glycine-aspartic acid (RGD) and matrix metalloproteinase sequences. Dopamine (DA), as an essential neurotransmitter, has proven effective in regulating neuronal development and enhancing neurite outgrowth. In this study, GelMA-DA neural scaffolds with hierarchical structures were 3D-fabricated using our custom-designed stereolithography-based printer. DA was functionalized on GelMA to synthesize a biocompatible printable ink (GelMA-DA) for improving neural differentiation. Additionally, neural stem cells (NSCs) were employed as the primary cell source for these scaffolds because of their ability to terminally differentiate into a variety of cell types including neurons, astrocytes, and oligodendrocytes. The resultant GelMA-DA scaffolds exhibited a highly porous and interconnected 3D environment, which is favorable for supporting NSC growth. Confocal microscopy analysis of neural differentiation demonstrated that a distinct neural network was formed on the GelMA-DA scaffolds. In particular, the most significant improvements were the enhanced neuron gene expression of TUJ1 and MAP2. Overall, our results demonstrated that 3D-printed customizable GelMA-DA scaffolds have a positive role in promoting neural differentiation, which is promising for advancing nerve repair and regeneration in the future.

  19. Learning, memory, and the role of neural network architecture.

    Directory of Open Access Journals (Sweden)

    Ann M Hermundstad

    2011-06-01

    Full Text Available The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.

  20. Using USB Keys to Promote Mobile Learning

    Directory of Open Access Journals (Sweden)

    Marilyne Rosselle

    2009-07-01

    Full Text Available M-learning (i.e., mobile learning) is a field of e-learning that provides learners with learning environments based on mobile technology. In this context, learning can take place anywhere and anytime, in open and distance learning. Depending on the type of technology, it may be done through software called nomadic (i.e., prepared for mobility). Among these technologies are those with digital interfaces and autonomous processing: smartphones, PDAs, calculators and even MP3 players. In this article we propose to take storage devices into account as mobile technologies. Our focus was on the USB key. We present a procedure to test whether a learning environment carried on a USB key can be described as nomadic or not. This procedure has been tested on a sample of three ILEs (Interactive Learning Environments). This approach has allowed us to define criteria of nomadism, which were then included in the design of a synchronous weblog on a USB key.

  1. Hyperexpressed netrin-1 promoted neural stem cells migration in mice after focal cerebral ischemia

    OpenAIRE

    Haiyan Lu; Xiaoyan Song; Feng Wang; Guodong Wang; Yuncheng Wu; Qiaoshu Wang; Yongting Wang; Guoyuan Yang; Zhijun Zhang

    2016-01-01

    Endogenous Netrin-1 (NT-1) protein was significantly increased after cerebral ischemia, which may participate in the repair of transient cerebral ischemic injury. In this work, we explored whether NT-1 can be stably overexpressed by adeno-associated virus (AAV) and whether exogenous NT-1 can promote neural stem cell migration from the subventricular zone (SVZ) region after cerebral ischemia. Adult CD-1 mice were injected stereotactically with AAV carrying the NT-1 gene (AAV-NT-1). Mice underwent ...

  2. Oscillations, neural computations and learning during wake and sleep.

    Science.gov (United States)

    Penagos, Hector; Varela, Carmen; Wilson, Matthew A

    2017-06-01

    Learning and memory theories consider sleep and the reactivation of waking hippocampal neural patterns to be crucial for the long-term consolidation of memories. Here we propose that precisely coordinated representations across brain regions allow the inference and evaluation of causal relationships to train an internal generative model of the world. This training starts during wakefulness and strongly benefits from sleep because its recurring nested oscillations may reflect compositional operations that facilitate a hierarchical processing of information, potentially including behavioral policy evaluations. This suggests that an important function of sleep activity is to provide conditions conducive to general inference, prediction and insight, which contribute to a more robust internal model that underlies generalization and adaptive behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Neural architecture design based on extreme learning machine.

    Science.gov (United States)

    Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis

    2013-12-01

    Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons and their corresponding interconnection weights. This problem has been widely studied, but existing solutions usually involve excessive computational cost and do not provide a unique answer. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides a high generalization capability and a unique solution for the architecture design. Moreover, the selected final network only retains those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.
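
    The ELM training rule this method builds on admits a compact sketch: hidden weights are drawn at random and left untrained, and only the output layer is solved in closed form by least squares. The toy task, sizes, and activation below are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: an XOR-quadrant labeling, a classic nonlinearly separable task.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sign(X[:, 0] * X[:, 1])          # targets in {-1, +1}

# 1) Random hidden layer: weights and biases drawn once, never trained.
n_hidden = 50
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                  # hidden-layer activations

# 2) Output weights solved in closed form via the Moore-Penrose pseudoinverse.
beta = np.linalg.pinv(H) @ y

acc = (np.sign(H @ beta) == y).mean()   # training accuracy
```

    The absence of iterative hidden-layer training is what makes ELM-based architecture search cheap enough for the design procedure the abstract describes.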

  4. Supervised learning in spiking neural networks with FORCE training.

    Science.gov (United States)

    Nicola, Wilten; Clopath, Claudia

    2017-12-20

    Populations of neurons display an extraordinary diversity in the behaviors they affect and display. Machine learning techniques have recently emerged that allow us to create networks of model neurons that display behaviors of similar complexity. Here we demonstrate the direct applicability of one such technique, the FORCE method, to spiking neural networks. We train these networks to mimic dynamical systems, classify inputs, and store discrete sequences that correspond to the notes of a song. Finally, we use FORCE training to create two biologically motivated model circuits. One is inspired by the zebra finch and successfully reproduces songbird singing. The second network is motivated by the hippocampus and is trained to store and replay a movie scene. FORCE trained networks reproduce behaviors comparable in complexity to their inspired circuits and yield information not easily obtainable with other techniques, such as behavioral responses to pharmacological manipulations and spike timing statistics.

  5. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    Science.gov (United States)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a powerful methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov Chain Monte Carlo method.
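
    The Metropolis machinery the abstract describes can be sketched on a one-parameter "network": sample the posterior over a single weight under a Gaussian prior and likelihood. The data, prior width, and proposal scale are illustrative assumptions; the paper's modified Jeffreys prior is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data from y = w*x with w_true = 2 plus Gaussian noise.
x = np.linspace(-1, 1, 20)
y = 2.0 * x + rng.normal(scale=0.1, size=20)

def log_post(w, sigma=0.1, tau=10.0):
    log_lik = -0.5 * np.sum((y - w * x) ** 2) / sigma**2   # Gaussian likelihood
    log_prior = -0.5 * w**2 / tau**2                       # broad Gaussian prior
    return log_lik + log_prior

# Metropolis random walk over the weight.
samples, w, accepted = [], 0.0, 0
for _ in range(5000):
    prop = w + rng.normal(scale=0.2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(w):
        w, accepted = prop, accepted + 1
    samples.append(w)

w_mean = np.mean(samples[1000:])   # posterior mean after burn-in
```

    The retained chain approximates the full posterior over the weight, so predictive uncertainty comes for free, which is the point the abstract makes against single-point training.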

  6. Promotion and the Scholarship of Teaching and Learning

    Science.gov (United States)

    Vardi, Iris; Quin, Robyn

    2011-01-01

    The move toward recognizing teaching academics has resulted in the Scholarship of Teaching and Learning (SoTL) gaining a greater prominence within the academy, particularly through the academic promotions system. With several Australian universities now providing opportunities for teaching staff who do not engage in research to be promoted, it is…

  7. Arctigenin protects against neuronal hearing loss by promoting neural stem cell survival and differentiation.

    Science.gov (United States)

    Huang, Xinghua; Chen, Mo; Ding, Yan; Wang, Qin

    2017-03-01

    Neuronal hearing loss has become a prevalent health problem. This study focused on the function of arctigenin (ARC) in promoting survival and neuronal differentiation of mouse cochlear neural stem cells (NSCs), and its protection against gentamicin (GMC)-induced neuronal hearing loss. Mouse cochlea was used to isolate NSCs, which were subsequently cultured in vitro. The effects of ARC on NSC survival, neurosphere formation, differentiation of NSCs, neurite outgrowth, and neural excitability in neuronal networks in vitro were examined. Mechanotransduction ability demonstrated by intact cochlea, auditory brainstem response (ABR), and distortion product otoacoustic emission (DPOAE) amplitude in mice were measured to evaluate the effects of ARC on GMC-induced neuronal hearing loss. ARC increased survival, neurosphere formation, and neuronal differentiation of NSCs from mouse cochlea in vitro. ARC also promoted the outgrowth of neurites, as well as neural excitability, in the NSC-differentiated neuron culture. Additionally, ARC rescued mechanotransduction capacity and restored the threshold shifts of ABR and DPOAE in our GMC ototoxicity murine model. This study supports the potential therapeutic role of ARC in promoting both NSC proliferation and differentiation in vitro to functional neurons, and thus its protective function in the treatment of neuropathic hearing loss in vivo. © 2017 Wiley Periodicals, Inc.

  8. Sensorimotor learning biases choice behavior: a learning neural field model for decision making.

    Directory of Open Access Journals (Sweden)

    Christian Klaes

    Full Text Available According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action
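
    The reward-driven Hebbian association learning at the core of such a model can be sketched with one-hot stimuli and a weight matrix updated only on the chosen action, scaled by reward. The stimulus-action mapping, rates, and epsilon-greedy selection below are assumptions, not the published field model.

```python
import numpy as np

rng = np.random.default_rng(3)

n_stim, n_act = 3, 3
W = np.zeros((n_act, n_stim))          # sensorimotor association weights
correct = {0: 2, 1: 0, 2: 1}           # hypothetical reward contingencies

for trial in range(300):
    s = int(rng.integers(n_stim))
    x = np.eye(n_stim)[s]              # one-hot stimulus
    # epsilon-greedy selection over the association activations W @ x
    if rng.uniform() < 0.1:
        a = int(rng.integers(n_act))
    else:
        a = int(np.argmax(W @ x + rng.normal(scale=1e-3, size=n_act)))
    r = 1.0 if a == correct[s] else -0.2
    W[a] += 0.1 * r * x                # reward-modulated Hebbian update

policy = {s: int(np.argmax(W[:, s])) for s in range(n_stim)}
```

    Changing the `correct` dictionary mid-run would let the weights re-adapt, mirroring the reward-contingency reversals studied in the paper.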

  9. Random neural Q-learning for obstacle avoidance of a mobile robot in unknown environments

    Directory of Open Access Journals (Sweden)

    Jing Yang

    2016-07-01

    Full Text Available The article presents a random neural Q-learning strategy for the obstacle avoidance problem of an autonomous mobile robot in unknown environments. In the proposed strategy, two independent modules, namely, avoidance without considering the target and goal-seeking without considering obstacles, are first trained using the proposed random neural Q-learning algorithm to obtain their best control policies. Then, the two trained modules are combined based on a switching function to realize obstacle avoidance in unknown environments. For the proposed random neural Q-learning algorithm, a single-hidden-layer feedforward network is used to approximate the Q-function and estimate the Q-value. The parameters of the single-hidden-layer feedforward network are modified using the recently proposed neural algorithm named the online sequential version of the extreme learning machine, where the parameters of the hidden nodes are assigned randomly and the training samples can arrive one by one. However, different from the original online sequential extreme learning machine algorithm, the initial output weights are estimated subject to a quadratic inequality constraint to improve the convergence speed. Finally, the simulation results demonstrate that the proposed random neural Q-learning strategy can successfully solve the obstacle avoidance problem. Higher learning efficiency and better generalization ability are also achieved by the proposed algorithm compared with Q-learning based on the back-propagation method.
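
    The strategy's key ingredient, Q-learning on top of a randomly weighted single-hidden-layer network, can be sketched on a toy chain world. Plain semi-gradient updates stand in for the paper's constrained online sequential ELM; the environment, sizes, and rewards are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1D chain of 5 states; action 1 moves right, action 0 moves left.
# Reaching the rightmost state yields reward 1 and ends the episode.
n_states, n_hidden = 5, 30
W_in = rng.normal(size=(n_states, n_hidden))   # random hidden layer, never trained
theta = np.zeros((n_hidden, 2))                # trained output weights

def features(s):
    v = np.tanh(W_in[s])
    return v / np.linalg.norm(v)               # normalized random features

alpha, gamma, eps = 0.3, 0.9, 0.2
for episode in range(300):
    s = 0
    for _ in range(20):
        phi = features(s)
        q = phi @ theta
        a = int(rng.integers(2)) if rng.uniform() < eps else int(np.argmax(q))
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        done = s2 == n_states - 1
        r = 1.0 if done else 0.0
        target = r if done else r + gamma * np.max(features(s2) @ theta)
        theta[:, a] += alpha * (target - q[a]) * phi   # semi-gradient Q update
        s = s2
        if done:
            break

greedy = [int(np.argmax(features(s) @ theta)) for s in range(n_states - 1)]
```

    After training, the greedy policy moves right from every non-terminal state, the analogue of the goal-seeking module learning its control policy.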

  10. PROMOTING MEANINGFUL LEARNING THROUGH CREATE-SHARE-COLLABORATE

    OpenAIRE

    Sailin, Siti Nazuar; Mahmor, Noor Aida

    2017-01-01

    Students in this 21st century are required to acquire the 4C skills: critical thinking, communication, collaboration and creativity. These skills can be integrated into teaching and learning through innovative teaching that promotes active and meaningful learning. One way of integrating these skills is through collaborative knowledge creation and sharing. This paper provides an example of meaningful teaching and learning activities designed within the Create-Share-Collaborate instructional...

  11. MANF Promotes Differentiation and Migration of Neural Progenitor Cells with Potential Neural Regenerative Effects in Stroke

    DEFF Research Database (Denmark)

    Tseng, Kuan-Yin; Anttila, Jenni E; Khodosevich, Konstantin

    2018-01-01

    die shortly after injury or are unable to arrive at the infarct boundary. In this study, we demonstrate for the first time that endogenous mesencephalic astrocyte-derived neurotrophic factor (MANF) protects NSCs against oxygen-glucose-deprivation-induced injury and has a crucial role in regulating NPC...... migration. In NSC cultures, MANF protein administration did not affect growth of cells but triggered neuronal and glial differentiation, followed by activation of STAT3. In SVZ explants, MANF overexpression facilitated cell migration and activated the STAT3 and ERK1/2 pathway. Using a rat model of cortical...... stroke, intracerebroventricular injections of MANF did not affect cell proliferation in the SVZ, but promoted migration of doublecortin (DCX)+ cells toward the corpus callosum and infarct boundary on day 14 post-stroke. Long-term infusion of MANF into the peri-infarct zone increased the recruitment...

  12. Biomimetic Hybrid Feedback Feedforward Neural-Network Learning Control.

    Science.gov (United States)

    Pan, Yongping; Yu, Haoyong

    2017-06-01

    This brief presents a biomimetic hybrid feedback feedforward neural-network learning control (NNLC) strategy inspired by the human motor learning control mechanism for a class of uncertain nonlinear systems. The control structure includes a proportional-derivative controller acting as a feedback servo machine and a radial-basis-function (RBF) NN acting as a feedforward predictive machine. Under the sufficient constraints on control parameters, the closed-loop system achieves semiglobal practical exponential stability, such that an accurate NN approximation is guaranteed in a local region along recurrent reference trajectories. Compared with the existing NNLC methods, the novelties of the proposed method include: 1) the implementation of an adaptive NN control to guarantee plant states being recurrent is not needed, since recurrent reference signals rather than plant states are utilized as NN inputs, which greatly simplifies the analysis and synthesis of the NNLC and 2) the domain of NN approximation can be determined a priori by the given reference signals, which leads to an easy construction of the RBF-NNs. Simulation results have verified the effectiveness of this approach.
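
    The feedback-feedforward split described above can be illustrated with a toy loop: a PD term provides servo feedback while a small RBF network adapts a feedforward term along a repeated periodic reference. The plant, gains, basis, and adaptation law are illustrative assumptions, not the paper's NNLC design.

```python
import numpy as np

# Gaussian RBF basis spread over one period of the reference trajectory.
centers = np.linspace(0.0, 1.0, 11)

def rbf(t):
    return np.exp(-((t % 1.0) - centers) ** 2 / 0.01)

w = np.zeros(11)                 # feedforward weights, adapted online
kp, kd, dt = 5.0, 0.02, 0.01
errs = []
for _pass in range(15):          # repeat the same periodic tracking task
    x, prev_e, total = 0.0, 0.0, 0.0
    for k in range(100):
        t = k * dt
        r = np.sin(2.0 * np.pi * t)              # periodic reference
        phi = rbf(t)
        e = r - x
        u = kp * e + kd * (e - prev_e) / dt + w @ phi   # PD feedback + NN feedforward
        w += 0.2 * e * phi                       # adapt the feedforward term
        x += dt * (-x + u)                       # simple first-order plant
        prev_e = e
        total += abs(e)
    errs.append(total / 100.0)
```

    Because the reference is recurrent, the RBF inputs revisit the same region every pass, which is the property the paper exploits to guarantee local NN approximation.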

  13. A Telescopic Binary Learning Machine for Training Neural Networks.

    Science.gov (United States)

    Brunato, Mauro; Battiti, Roberto

    2017-03-01

    This paper proposes a new algorithm based on multiscale stochastic local search with binary representation for training neural networks [binary learning machine (BLM)]. We study the effects of neighborhood evaluation strategies, the effect of the number of bits per weight and that of the maximum weight range used for mapping binary strings to real values. Following this preliminary investigation, we propose a telescopic multiscale version of local search, where the number of bits is increased in an adaptive manner, leading to a faster search and to local minima of better quality. An analysis related to adapting the number of bits in a dynamic way is presented. The control on the number of bits, which happens in a natural manner in the proposed method, is effective to increase the generalization performance. The learning dynamics are discussed and validated on a highly nonlinear artificial problem and on real-world tasks in many application domains; BLM is finally applied to a problem requiring either feedforward or recurrent architectures for feedback control.
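
    The bit-mapping the paper studies can be illustrated directly: with b bits per weight and range [-W, W], a binary string decodes to one of 2^b evenly spaced weight values, and increasing b refines the search grid, which is the idea behind the telescopic scheme. The decoder below is a hypothetical sketch.

```python
# Decode a binary string of b bits to a real weight in [-W, W].
def decode(bits, W):
    b = len(bits)
    value = int("".join(map(str, bits)), 2)      # binary string -> integer
    return -W + 2.0 * W * value / (2**b - 1)     # map onto the weight range

w_lo = decode([0, 0, 0, 0], 1.0)    # all zeros  -> -W
w_hi = decode([1, 1, 1, 1], 1.0)    # all ones   -> +W
w_mid = decode([1, 0, 0, 0], 1.0)   # interior grid point
```

    Adding a bit doubles the number of representable weight values, so the local search can move from coarse to fine minima as the abstract describes.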

  14. Uncovering the neural mechanisms underlying learning from tests.

    Directory of Open Access Journals (Sweden)

    Xiaonan L Liu

    Full Text Available People learn better when re-study opportunities are replaced with tests. While researchers have begun to speculate on why testing is superior to study, few studies have directly examined the neural underpinnings of this effect. In this fMRI study, participants engaged in a study phase to learn arbitrary word pairs, followed by a cued recall test (recall the second half of a pair when cued with its first word), re-study of each pair, and finally another cycle of cued recall tests. Brain activation patterns during the first test (recall of the studied pairs) predict performance on the second test. Importantly, while subsequent memory analyses of encoding trials also predict later accuracy, the brain regions involved in predicting later memory success are more extensive for activity during retrieval (testing) than during encoding (study). Those additional regions that predict subsequent memory based on their activation at test but not at encoding may be key to understanding the basis of the testing effect.

  15. Memory and learning in a class of neural network models

    International Nuclear Information System (INIS)

    Wallace, D.J.

    1986-01-01

    The author discusses memory and learning properties of the neural network model now identified with Hopfield's work. The model, how it attempts to abstract some key features of the nervous system, and the sense in which learning and memory are identified in the model are described. A brief report is presented on the important role of phase transitions in the model and their implications for memory capacity. The results of numerical simulations obtained using the ICL Distributed Array Processors at Edinburgh are presented. A summary is presented of how the fraction of images that are perfectly stored depends on the number of nodes and the number of nominal images one attempts to store using the prescription in Hopfield's paper. Results are presented on the second phase transition in the model, which corresponds to almost total loss of storage capacity as the number of nominal images is increased. Results are also given on the performance of a new iterative algorithm for exact storage of up to N images in an N-node model.
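
    The storage prescription from Hopfield's paper mentioned above can be sketched as Hebbian outer-product storage followed by iterative recall from a corrupted cue; the pattern count and corruption level below are illustrative and sit well under the capacity limit the record discusses.

```python
import numpy as np

rng = np.random.default_rng(5)

# Store P random binary patterns in an N-node Hopfield network.
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))

# Hebb rule: W = (1/N) * sum_mu xi^mu (xi^mu)^T, zero self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# Corrupt 10% of the first pattern's bits and recall iteratively.
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1

state = cue
for _ in range(10):                  # synchronous updates toward a fixed point
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = (state @ patterns[0]) / N  # 1.0 means perfect recall
```

    With P/N = 0.05, well below the roughly 0.14 capacity at which the second phase transition destroys storage, recall of the nominal image is essentially perfect.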

  16. Convolutional neural network with transfer learning for rice type classification

    Science.gov (United States)

    Patel, Vaibhav Amit; Joshi, Manjunath V.

    2018-04-01

    Presently, rice type is identified manually by humans, which is time-consuming and error prone. There is therefore a need to perform this task by machine, making it faster and more accurate. This paper proposes a deep learning based method for classification of rice types. We propose two methods to classify the rice types. In the first method, we train a deep convolutional neural network (CNN) using the given segmented rice images. In the second method, we train a combination of a pretrained VGG16 network and the proposed method, using transfer learning in which the weights of a pretrained network are used to achieve better accuracy. Our approach can also be used for classification of rice grains as broken or fine. We train a 5-class model for classifying rice types using 4000 training images and another 2-class model for the classification of broken and normal rice using 1600 training images. We observe that, despite having distinct rice images, our architecture pretrained on ImageNet data boosts classification accuracy significantly.
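
    The transfer-learning recipe the abstract describes, frozen pretrained features plus a newly trained classifier head, can be sketched without any deep-learning framework by letting a fixed random ReLU projection stand in for VGG16 convolutional features. The synthetic data, sizes, and optimizer below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Frozen "pretrained" extractor: a fixed random ReLU projection (stand-in
# for VGG16 features); only the classifier head below is trained.
n_classes, n_feat = 5, 64
extractor = rng.normal(size=(32, n_feat))

# Synthetic "images": 5 class prototypes plus noise, 40 samples each.
means = rng.normal(scale=3.0, size=(n_classes, 32))
X = np.vstack([m + rng.normal(size=(40, 32)) for m in means])
y = np.repeat(np.arange(n_classes), 40)

F = np.maximum(X @ extractor, 0.0)                   # frozen features
F /= np.linalg.norm(F, axis=1, keepdims=True) + 1e-8

# Train only the softmax head with gradient descent on cross-entropy.
W = np.zeros((n_feat, n_classes))
for _ in range(300):
    logits = F @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0
    W -= 0.5 * F.T @ grad / len(y)

acc = (np.argmax(F @ W, axis=1) == y).mean()
```

    Freezing the extractor and fitting only the head is the essence of the accuracy boost the abstract attributes to ImageNet pretraining.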

  17. Biosignals learning and synthesis using deep neural networks.

    Science.gov (United States)

    Belo, David; Rodrigues, João; Vaz, João R; Pezarat-Correia, Pedro; Gamboa, Hugo

    2017-09-25

    Modeling physiological signals is a complex task, both for understanding and for synthesizing biomedical signals. We propose a deep neural network model that learns and synthesizes biosignals, validated by morphological equivalence with the original ones. This research could lead to the creation of novel algorithms for signal reconstruction in heavily noisy data and for source detection in the biomedical engineering field. The present work explores the gated recurrent units (GRU) employed in the training of respiration (RESP), electromyogram (EMG) and electrocardiogram (ECG) signals. Each signal is pre-processed, segmented and quantized into a specific number of classes, corresponding to the amplitude of each sample, and fed to the model, which is composed of an embedding matrix, three GRU blocks and a softmax function. This network is trained by adjusting its internal parameters, acquiring the representation of the abstract notion of the next value based on the previous ones. The simulated signal was generated by forecasting a random value and re-feeding itself. The resulting generated signals are similar to the morphological expression of the originals. During the learning process, after a set of iterations, the model starts to grasp the basic morphological characteristics of the signal and later their cyclic characteristics. After training, these models' predictions are closer to the signals that trained them, especially the RESP and ECG. This synthesis mechanism has shown relevant results that inspire its use to characterize signals from other physiological sources.
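
    The amplitude-quantization step the abstract describes, mapping each sample to one of K classes before feeding the GRU, can be sketched as uniform binning; K and the stand-in sine signal are assumptions (the GRU itself is omitted).

```python
import numpy as np

# Map each biosignal sample to one of K amplitude classes (as done before
# feeding the GRU model), then decode classes back to amplitudes.
K = 64
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t)       # stand-in for an ECG/RESP trace

lo, hi = signal.min(), signal.max()
classes = np.clip(((signal - lo) / (hi - lo) * (K - 1)).round().astype(int), 0, K - 1)

# Dequantize: class index back to its amplitude level.
recon = lo + classes / (K - 1) * (hi - lo)
max_err = np.abs(signal - recon).max()   # bounded by half a quantization step
```

    During synthesis, the model's softmax output selects one of these K classes per step, and the decoded amplitudes are re-fed as the next input.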

  18. Promoting Learning in Libraries through Information Literacy.

    Science.gov (United States)

    Breivik, Patricia Senn; Ford, Barbara J.

    1993-01-01

    Discusses information literacy and describes activities under the sponsorship of the National Forum on Information Literacy (NFIL) that promotes information literacy in schools and libraries. Activities of member organizations of the NFIL are described, including policy formation, publications, and programs; and the role of the American Library…

  19. MyT1 Counteracts the Neural Progenitor Program to Promote Vertebrate Neurogenesis

    Directory of Open Access Journals (Sweden)

    Francisca F. Vasconcelos

    2016-10-01

    Full Text Available The generation of neurons from neural stem cells requires large-scale changes in gene expression that are controlled to a large extent by proneural transcription factors, such as Ascl1. While recent studies have characterized the differentiation genes activated by proneural factors, less is known on the mechanisms that suppress progenitor cell identity. Here, we show that Ascl1 induces the transcription factor MyT1 while promoting neuronal differentiation. We combined functional studies of MyT1 during neurogenesis with the characterization of its transcriptional program. MyT1 binding is associated with repression of gene transcription in neural progenitor cells. It promotes neuronal differentiation by counteracting the inhibitory activity of Notch signaling at multiple levels, targeting the Notch1 receptor and many of its downstream targets. These include regulators of the neural progenitor program, such as Hes1, Sox2, Id3, and Olig1. Thus, Ascl1 suppresses Notch signaling cell-autonomously via MyT1, coupling neuronal differentiation with repression of the progenitor fate.

  20. Formative assessment promotes learning in undergraduate clinical ...

    African Journals Online (AJOL)

    Introduction. Clinical clerkships, typically situated in environments lacking educational structure, form the backbone of undergraduate medical training. The imperative to develop strategies that enhance learning in this context is apparent. This study explored the impact of longitudinal bedside formative assessment on ...

  1. Robotic Cooperative Learning Promotes Student STEM Interest

    Science.gov (United States)

    Mosley, Pauline; Ardito, Gerald; Scollins, Lauren

    2016-01-01

    The principal purpose of this investigation is to study the effect of robotic cooperative learning methodologies on middle school students' critical thinking and STEM interest. The quasi-experimental inquiry consisted of ninety-four sixth-grade students (forty-nine in the experimental group, forty-five in the control group), chosen…

  2. Smartphones Promote Autonomous Learning in ESL Classrooms

    Science.gov (United States)

    Ramamuruthy, Viji; Rao, Srinivasa

    2015-01-01

    The rapid development of high technology has led to new gadgets for all walks of life, regardless of age. In this rapidly advancing technological era, many individuals possess hi-tech gadgets such as laptops, tablets, iPads, Android phones and smartphones. Adult learners in higher learning institutions especially are fond of using smart…

  3. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles that distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity remains an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of a neural network shape its computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, neuronal excitability is regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that an LSM with STDP+IP performs better than an LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to better-reflected competition among neurons in the developed SNN model, as well as more effectively encoded and processed dynamic information through its learning and self-organizing mechanism. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
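The two plasticity rules named above can be sketched as simple scalar update rules. A minimal sketch, with illustrative time constants, learning rates and target rate (not the paper's values):

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP with dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        w = w + a_plus * np.exp(-dt / tau)
    else:
        w = w - a_minus * np.exp(dt / tau)
    return float(np.clip(w, w_min, w_max))

def ip_update(threshold, rate, target_rate=5.0, eta=0.01):
    """Intrinsic plasticity: nudge the firing threshold so that average
    activity moves toward a moderate target level."""
    return threshold + eta * (rate - target_rate)

w_pot = stdp_update(0.5, dt=+5.0)   # causal pairing -> potentiation
w_dep = stdp_update(0.5, dt=-5.0)   # anti-causal pairing -> depression
th = ip_update(1.0, rate=8.0)       # neuron too active -> threshold rises
```

In the paper's setting, STDP shapes the excitatory connectivity of the LSM reservoir while IP homeostatically regulates each neuron's excitability; the two rules interact rather than act in isolation.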

  4. Algebraic and adaptive learning in neural control systems

    Science.gov (United States)

    Ferrari, Silvia

    A systematic approach is developed for designing adaptive and reconfigurable nonlinear control systems that are applicable to plants modeled by ordinary differential equations. The nonlinear controller comprising a network of neural networks is taught using a two-phase learning procedure realized through novel techniques for initialization, on-line training, and adaptive critic design. A critical observation is that the gradients of the functions defined by the neural networks must equal corresponding linear gain matrices at chosen operating points. On-line training is based on a dual heuristic adaptive critic architecture that improves control for large, coupled motions by accounting for actual plant dynamics and nonlinear effects. An action network computes the optimal control law; a critic network predicts the derivative of the cost-to-go with respect to the state. Both networks are algebraically initialized based on prior knowledge of satisfactory pointwise linear controllers and continue to adapt on line during full-scale simulations of the plant. On-line training takes place sequentially over discrete periods of time and involves several numerical procedures. A backpropagating algorithm called Resilient Backpropagation is modified and successfully implemented to meet these objectives, without excessive computational expense. This adaptive controller is as conservative as the linear designs and as effective as a global nonlinear controller. The method is successfully implemented for the full-envelope control of a six-degree-of-freedom aircraft simulation. The results show that the on-line adaptation brings about improved performance with respect to the initialization phase during aircraft maneuvers that involve large-angle and coupled dynamics, and parameter variations.
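The modified Resilient Backpropagation mentioned above builds on the standard Rprop idea of sign-based, per-weight step sizes. A minimal sketch of one generic Rprop step follows; the constants are the conventional defaults, not necessarily those of this work:

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Resilient Backpropagation update per weight: the step size grows
    while the gradient sign is stable and shrinks when it flips; only the
    sign of the gradient (not its magnitude) moves the weight."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max),
           np.where(sign_change < 0, np.maximum(step * eta_minus, step_min),
                    step))
    w_new = np.where(sign_change < 0, w,          # hold the weight on a flip
                     w - np.sign(grad) * step)
    return w_new, step

w = np.array([1.0, -2.0])
g_prev = np.array([0.3, -0.1])
g = np.array([0.2, 0.4])          # first sign stable, second sign flipped
w_new, step_new = rprop_step(w, g, g_prev, step=np.array([0.1, 0.1]))
```

Because the update depends only on gradient signs, Rprop avoids the scaling problems of raw gradient descent, which is one reason it suits sequential on-line training without excessive computational expense.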

  5. Neural mechanisms of reinforcement learning in unmedicated patients with major depressive disorder.

    Science.gov (United States)

    Rothkirch, Marcus; Tonn, Jonas; Köhler, Stephan; Sterzer, Philipp

    2017-04-01

    According to current concepts, major depressive disorder is strongly related to dysfunctional neural processing of motivational information, entailing impairments in reinforcement learning. While computational modelling can reveal the precise nature of neural learning signals, it has not so far been used to study learning-related neural dysfunctions in unmedicated patients with major depressive disorder. We thus aimed at comparing the neural coding of reward and punishment prediction errors, representing indicators of neural learning-related processes, between unmedicated patients with major depressive disorder and healthy participants. To this end, a group of unmedicated patients with major depressive disorder (n = 28) and a group of age- and sex-matched healthy control participants (n = 30) completed an instrumental learning task involving monetary gains and losses during functional magnetic resonance imaging. The two groups did not differ in their learning performance. Patients and control participants showed the same level of prediction error-related activity in the ventral striatum and the anterior insula. In contrast, neural coding of reward prediction errors in the medial orbitofrontal cortex was reduced in patients. Moreover, neural reward prediction error signals in the medial orbitofrontal cortex and ventral striatum showed negative correlations with anhedonia severity. Using a standard instrumental learning paradigm, we found no evidence for an overall impairment of reinforcement learning in medication-free patients with major depressive disorder. Importantly, however, the attenuated neural coding of reward in the medial orbitofrontal cortex, and the relation between anhedonia and reduced reward prediction error signalling in the medial orbitofrontal cortex and ventral striatum, likely reflect an impairment in experiencing pleasure from rewarding events as a key mechanism of anhedonia in major depressive disorder.
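A prediction error of the kind regressed against BOLD activity in studies like this one can be illustrated with a minimal Rescorla-Wagner update. The learning rate and payoffs below are illustrative assumptions, not the study's fitted parameters:

```python
ALPHA = 0.1                       # learning rate (assumed)
V = {"left": 0.0, "right": 0.0}   # learned action values

def update(action, reward):
    """Rescorla-Wagner update: delta is the (reward) prediction error."""
    delta = reward - V[action]
    V[action] += ALPHA * delta
    return delta

d1 = update("left", 1.0)    # unexpected gain -> large positive error
d2 = update("left", 1.0)    # now partly expected -> smaller error
d3 = update("right", -1.0)  # unexpected loss -> negative error
```

Trial-by-trial sequences of such deltas are the model-derived regressors whose coding in the ventral striatum and medial orbitofrontal cortex the study compares across groups.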

  6. Adaptive Learning Rule for Hardware-based Deep Neural Networks Using Electronic Synapse Devices

    OpenAIRE

    Lim, Suhwan; Bae, Jong-Ho; Eum, Jai-Ho; Lee, Sungtae; Kim, Chul-Heung; Kwon, Dongseok; Park, Byung-Gook; Lee, Jong-Ho

    2017-01-01

    In this paper, we propose a learning rule based on the back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network (HW-DNN) using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward propagation, backward propagation and weight updates in hardware, is helpful for implementing power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron net...
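The discrete, limited-conductance constraint can be sketched as a quantized weight update: the ideal BP update is rounded to whole conductance steps and clipped to the device range. The device parameters below (range, number of levels, effective learning rate) are hypothetical:

```python
import numpy as np

# Hypothetical synapse device: conductance limited to [G_MIN, G_MAX] with
# N_LEVELS discrete states; an update moves the device +/- a few levels.
G_MIN, G_MAX, N_LEVELS = 0.1, 1.0, 32
LEVELS = np.linspace(G_MIN, G_MAX, N_LEVELS)

def apply_update(level_idx, grad, lr=4.0):
    """Quantize the ideal BP update into whole conductance steps and clip
    to the device's limited range."""
    steps = int(np.round(-lr * grad))         # ideal update, in levels
    return int(np.clip(level_idx + steps, 0, N_LEVELS - 1))

idx = apply_update(10, grad=-0.6)   # negative gradient -> conductance rises
g = LEVELS[idx]                     # conductance actually programmed
```

An adaptive rule of the kind the paper proposes must keep training stable despite this rounding and saturation, which an ideal floating-point BP update ignores.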

  7. Using Recreational Drones to Promote STEM Learning

    Science.gov (United States)

    Olds, S. E.; Dahlman, L. E.; Mooney, M. E.; Russell, R. M.

    2017-12-01

    The popularity of unmanned aerial vehicles (UAVs, or drones) as a fun, inexpensive … (website: SciEd.ucar.edu/engineering-activities); the activities encompass skills to measure drone payload, flight height, and velocity. Investigations also encourage the use of repeat photography, comparing images from drones and satellites, and creating 3D structure-from-motion (SfM) models from overlapping photographs. The site also offers general guidance for developing science projects or science fair investigations using Next Generation Science Standards science and engineering practices. To encourage the use of drones in STEM, UNAVCO and NOAA staff, sponsored by ESIP, led two hands-on workshops this summer: a workshop spanning three half-days at the Earth Educator Rendezvous (EER) and a half-day session during the ESIP Educator Workshop. Participants practiced UAV flying skills, experimented with lightweight sensors, and learned about current drone-enhanced research projects. In small groups, they tested existing activities and designed student-focused investigations. Examples of projects include measuring aeromagnetics, developing 3D topographic models, creating vertical profiles over various land surfaces at different temporal intervals, and developing a multi-semester drone-focused curriculum. This presentation will elaborate upon the workshops, learning materials, and insights.

  8. Learning free energy landscapes using artificial neural networks.

    Science.gov (United States)

    Sidky, Hythem; Whitmer, Jonathan K

    2018-03-14

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.

  9. Learning free energy landscapes using artificial neural networks

    Science.gov (United States)

    Sidky, Hythem; Whitmer, Jonathan K.

    2018-03-01

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.

  10. An Innovative Teaching Method To Promote Active Learning: Team-Based Learning

    Science.gov (United States)

    Balasubramanian, R.

    2007-12-01

    Traditional teaching practice based on the textbook-whiteboard-lecture-homework-test paradigm is not very effective in helping students with diverse academic backgrounds achieve higher-order critical thinking skills such as analysis, synthesis, and evaluation. Consequently, there is a critical need for a new pedagogical approach that creates a collaborative and interactive learning environment in which students with complementary academic backgrounds and learning skills can work together to enhance their learning outcomes. In this presentation, I will discuss an innovative teaching method ("Team-Based Learning" (TBL)) which I recently developed at the National University of Singapore to promote active learning among students with varied learning abilities in the environmental engineering program. I implemented this new educational activity in a graduate course. Student feedback indicates that this pedagogical approach appeals to most students and promotes active and interactive learning in class. Data will be presented to show that this teaching method has contributed to improved student learning and achievement.

  11. Using evaluation strategically to promote active learning

    DEFF Research Database (Denmark)

    Münster, Marie

    …as to grade them. For this purpose it was decided to change one report into a poster with a 15-minute group oral presentation. The oral examination allows for individual assessment of the students, for assessment of conceptual understanding, and for learning during the examination. This type of evaluation … is, however, very time consuming, and a written examination will facilitate a better evaluation of whether the core elements of the course (including the tools used for the two projects) have been achieved at an individual level, so it was decided to have a 4-hour written examination instead. Evaluation of conceptual … understanding was undertaken through more open-ended questions. Results: Using a poster instead of a report for one of the projects was found to be very successful. The students used most of their time on discussing and using the tool, and less on reporting, which was the purpose. When asked, they claimed…

  12. Writing Assignments that Promote Active Learning

    Science.gov (United States)

    Narayanan, M.

    2014-12-01

    Encourage students to write a detailed, analytical report correlating classroom discussions to an important historical or current event. Motivate students to interview an expert from industry on a topic that was discussed in class, and ask them to submit a report with supporting sketches, drawings, circuit diagrams and graphs. Propose that students generate a complete set of reading responses on an assigned topic. Require each student to bring in one comment or one question about an assigned reading; the assignment should be a recent publication in an appropriate journal. Have students conduct a web search on an assigned topic and ask them to generate a set of ideas that relate to classroom discussions. Provide students with a study guide covering about 10 or 15 short topics, and quiz them on one or two of these. Encourage students to design or develop creative real-world examples based on a chapter discussed or a topic of interest. Require that students originate, develop, support and defend a viewpoint using specifically assigned material. Have students practice using a set of new technical terms they have encountered in an assigned chapter and develop original examples explaining the different terms. Ask students to select one important term from previous classroom discussions, to explain why they selected that particular word, and to discuss its importance from the point of view of their educational objectives and future career. Angelo, T. A. (1991). Ten easy pieces: Assessing higher learning in four dimensions. In T. A. Angelo (Ed.), Classroom research: Early lessons from success (pp. 17-31). New Directions for Teaching and Learning, No. 46. San Francisco: Jossey-Bass.

  13. Parameter diagnostics of phases and phase transition learning by neural networks

    Science.gov (United States)

    Suchsland, Philippe; Wessel, Stefan

    2018-05-01

    We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and for identifying any underlying physical quantities, it is feasible to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusing scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully-connected and convolutional neural networks for the two-dimensional Ising model with extended domain wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusing scheme and convolutional neural networks trained on bare spin configurations to the case of preprocessed samples with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.

  14. Promoting Social Change through Service-Learning in the Curriculum

    Science.gov (United States)

    Bowen, Glenn A.

    2014-01-01

    Service-learning is a high-impact pedagogical strategy embraced by higher education institutions. Direct service based on a charity paradigm tends to be the norm, while little attention is paid to social change-oriented service. This article offers suggestions for incorporating social justice education into courses designed to promote social…

  15. Using teacher action research to promote constructivist learning ...

    African Journals Online (AJOL)

    Erna Kinsey

    2. To describe the learning environment of typical classrooms in South African ... a more teacher-centred approach to more constructivist teaching approaches and ... control over their lives within a framework promoted through action research ... cycles of questioning, planning, implementing, collecting data and reflecting ...

  16. Evaluating a Gender Diversity Workshop to Promote Positive Learning Environments

    Science.gov (United States)

    Burford, James; Lucassen, Mathijs F. G.; Hamilton, Thomas

    2017-01-01

    Drawing on data from an Aotearoa/New Zealand study of more than 230 secondary students, this article evaluates the potential of a 60-min gender diversity workshop to address bullying and promote positive environments for learning. Students completed pre- and postworkshop questionnaires. The authors used descriptive statistics to summarize results…

  17. Using Facebook to Promote Learning: A Case Study

    Science.gov (United States)

    Schoper, Sarah E.; Hill, Aaron R.

    2017-01-01

    A growing body of research is examining the use of social media on college campuses. This study explores the use of one social media outlet, specifically Facebook's closed group feature, in two graduate courses. Findings show that using Facebook can promote student learning. Students used the groups for sharing ideas and support, asking questions,…

  18. Conceptual Tutoring Software for Promoting Deep Learning: A Case Study

    Science.gov (United States)

    Stott, Angela; Hattingh, Annemarie

    2015-01-01

    The paper presents a case study of the use of conceptual tutoring software to promote deep learning of the scientific concept of density among 50 final year pre-service student teachers in a natural sciences course in a South African university. Individually-paced electronic tutoring is potentially an effective way of meeting the students' varied…

  19. Assessing the Potential of Mathematics Textbooks to Promote Deep Learning

    Science.gov (United States)

    Shield, Malcolm; Dole, Shelley

    2013-01-01

    Curriculum documents for mathematics emphasise the importance of promoting depth of knowledge rather than shallow coverage of the curriculum. In this paper, we report on a study that explored the analysis of junior secondary mathematics textbooks to assess their potential to assist in teaching and learning aimed at building and applying deep…

  20. Learning second language vocabulary: neural dissociation of situation-based learning and text-based learning.

    Science.gov (United States)

    Jeong, Hyeonjeong; Sugiura, Motoaki; Sassa, Yuko; Wakusawa, Keisuke; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta

    2010-04-01

    Second language (L2) acquisition necessitates learning and retrieving new words in different modes. In this study, we attempted to investigate the cortical representation of an L2 vocabulary acquired in different learning modes and in cross-modal transfer between learning and retrieval. Healthy participants learned new L2 words either by written translations (text-based learning) or in real-life situations (situation-based learning). Brain activity was then measured during subsequent retrieval of these words. The right supramarginal gyrus and left middle frontal gyrus were involved in situation-based learning and text-based learning, respectively, whereas the left inferior frontal gyrus was activated when learners used L2 knowledge in a mode different from the learning mode. Our findings indicate that the brain regions that mediate L2 memory differ according to how L2 words are learned and used. Copyright 2009 Elsevier Inc. All rights reserved.

  1. Improved Discriminability of Spatiotemporal Neural Patterns in Rat Motor Cortical Areas as Directional Choice Learning Progresses

    Directory of Open Access Journals (Sweden)

    Hongwei Mao

    2015-03-01

    Animals learn to choose a proper action among alternatives to improve their odds of success in food foraging and other activities critical for survival. Through trial and error, they learn correct associations between their choices and external stimuli. While a neural network that underlies such a learning process has been identified at a high level, it is still unclear how individual neurons and a neural ensemble adapt as learning progresses. In this study, we monitored the activity of single units in the rat medial and lateral agranular (AGm and AGl, respectively) areas as rats learned to make a left or right side lever press in response to a left or right side light cue. We noticed that rat movement parameters during the performance of the directional choice task quickly became stereotyped during the first 2-3 days or sessions, but learning the directional choice problem took weeks to occur. Accompanying the rats' behavioral adaptation, we observed neural modulation by directional choice in recorded single units. Our analysis shows that ensemble mean firing rates in the cue-on period did not change significantly as learning progressed, and the ensemble mean rate difference between left and right side choices did not show a clear trend of change either. However, the spatiotemporal firing patterns of the neural ensemble exhibited improved discriminability between the two directional choices through learning. These results suggest a spatiotemporal neural coding scheme in a motor cortical neural ensemble that may be responsible for and contribute to learning the directional choice task.

  2. Neural prediction errors reveal a risk-sensitive reinforcement-learning process in the human brain.

    Science.gov (United States)

    Niv, Yael; Edlund, Jeffrey A; Dayan, Peter; O'Doherty, John P

    2012-01-11

    Humans and animals are exquisitely, though idiosyncratically, sensitive to risk or variance in the outcomes of their actions. Economic, psychological, and neural aspects of this are well studied when information about risk is provided explicitly. However, we must normally learn about outcomes from experience, through trial and error. Traditional models of such reinforcement learning focus on learning about the mean reward value of cues and ignore higher order moments such as variance. We used fMRI to test whether the neural correlates of human reinforcement learning are sensitive to experienced risk. Our analysis focused on anatomically delineated regions of a priori interest in the nucleus accumbens, where blood oxygenation level-dependent (BOLD) signals have been suggested as correlating with quantities derived from reinforcement learning. We first provide unbiased evidence that the raw BOLD signal in these regions corresponds closely to a reward prediction error. We then derive from this signal the learned values of cues that predict rewards of equal mean but different variance and show that these values are indeed modulated by experienced risk. Moreover, a close neurometric-psychometric coupling exists between the fluctuations of the experience-based evaluations of risky options that we measured neurally and the fluctuations in behavioral risk aversion. This suggests that risk sensitivity is integral to human learning, illuminating economic models of choice, neuroscientific models of affective learning, and the workings of the underlying neural mechanisms.
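One common way to obtain risk sensitivity in a reinforcement-learning model, in the spirit of the mechanism studied here, is to use asymmetric learning rates for positive and negative prediction errors. This toy sketch (rates and payoffs are assumptions) shows how a risk-averse learner comes to undervalue a variable reward relative to a sure reward of equal mean:

```python
import random
random.seed(1)

# Asymmetric learning rates: a risk-averse learner weights negative
# prediction errors more heavily than positive ones. Values are illustrative.
ALPHA_PLUS, ALPHA_MINUS = 0.1, 0.2

def update(v, reward):
    """One value update; the sign of the prediction error picks the rate."""
    delta = reward - v
    rate = ALPHA_PLUS if delta > 0 else ALPHA_MINUS
    return v + rate * delta

v_risky, v_safe, hist = 0.0, 0.0, []
for _ in range(2000):
    v_risky = update(v_risky, random.choice([0.0, 1.0]))  # mean 0.5, risky
    v_safe = update(v_safe, 0.5)                          # certain 0.5
    hist.append(v_risky)
avg_risky = sum(hist[1000:]) / 1000.0    # settles near 1/3, below 0.5
```

The risky option's learned value settles near ALPHA_PLUS / (ALPHA_PLUS + ALPHA_MINUS) rather than at the mean payoff, which is the kind of risk-modulated valuation the study reports in nucleus accumbens BOLD signals.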

  3. Learning characteristics of a space-time neural network as a tether skiprope observer

    Science.gov (United States)

    Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles

    1993-01-01

    The Software Technology Laboratory at the Johnson Space Center is testing a Space Time Neural Network (STNN) for observing tether oscillations present during retrieval of a tethered satellite. Proper identification of tether oscillations, known as 'skiprope' motion, is vital to safe retrieval of the tethered satellite. Our studies indicate that STNN has certain learning characteristics that must be understood properly to utilize this type of neural network for the tethered satellite problem. We present our findings on the learning characteristics including a learning rate versus momentum performance table.

  4. Impaired neurogenesis, learning and memory and low seizure threshold associated with loss of neural precursor cell survivin

    Directory of Open Access Journals (Sweden)

    Eisch Amelia

    2010-01-01

    Background: Survivin is a unique member of the inhibitor of apoptosis protein (IAP) family in that it exhibits antiapoptotic properties and also promotes the cell cycle and mediates mitosis as a chromosome passenger protein. Survivin is highly expressed in neural precursor cells in the brain, yet its function there has not been elucidated. Results: To examine the role of neural precursor cell survivin, we first showed that survivin is normally expressed in periventricular neurogenic regions in the embryo, becoming restricted postnatally to proliferating and migrating NPCs in the key neurogenic sites, the subventricular zone (SVZ) and the subgranular zone (SGZ). We then used a conditional gene inactivation strategy to delete the survivin gene prenatally in those neurogenic regions. Lack of embryonic NPC survivin results in viable, fertile mice (SurvivinCamcre) with reduced numbers of SVZ NPCs, an absent rostral migratory stream, and olfactory bulb hypoplasia. The phenotype can be partially rescued, as intracerebroventricular gene delivery of survivin during embryonic development increases olfactory bulb neurogenesis, detected postnatally. SurvivinCamcre brains have fewer cortical inhibitory interneurons, contributing to enhanced sensitivity to seizures, and profound deficits in memory and learning. Conclusions: The findings highlight the critical role that survivin plays during neural development, deficiencies of which dramatically impact postnatal neural function.

  5. All-trans retinoic acid promotes neural lineage entry by pluripotent embryonic stem cells via multiple pathways

    Directory of Open Access Journals (Sweden)

    Fang Bo

    2009-07-01

    Background: All-trans retinoic acid (RA) is one of the most important morphogens, with pleiotropic actions. Its embryonic distribution correlates with neural differentiation in the developing central nervous system. To explore the precise effects of RA on neural differentiation of mouse embryonic stem cells (ESCs), we detected expression of RA nuclear receptors and RA-metabolizing enzymes in mouse ESCs and investigated the roles of RA in adherent monolayer culture. Results: Upon addition of RA, cell differentiation was directed rapidly and exclusively into the neural lineage. Conversely, pharmacological interference with RA signaling suppressed this neural differentiation. Inhibition of fibroblast growth factor (FGF) signaling did not significantly suppress neural differentiation in RA-treated cultures. Pharmacological interference with the extracellular signal-regulated kinase (ERK) pathway or activation of the Wnt pathway effectively blocked the RA-promoted neural specification. ERK phosphorylation was enhanced in RA-treated cultures at the early stage of differentiation. Conclusion: RA can promote neural lineage entry by ESCs in adherent monolayer culture systems. This effect depends on RA signaling and its crosstalk with the ERK and Wnt pathways.

  6. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    Science.gov (United States)

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to yield general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network: using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions, without a separate reward system.

  7. Maximum entropy methods for extracting the learned features of deep neural networks.

    Science.gov (United States)

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.
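The sampling idea can be sketched with a toy stand-in for the learned network: here GC content plays the role of the empirical function f, and single-site Metropolis moves sample sequences anchored near the input's f-value. The constraint weight `lam` and all other settings are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
ALPHABET = np.array(list("ACGT"))

def f(seq):
    # Toy "learned function": GC content of the sequence
    return sum(c in "GC" for c in seq) / len(seq)

def energy(seq, anchor_val, lam=50.0):
    # Penalize deviation of f from its value on the anchor sequence
    return lam * (f(seq) - anchor_val) ** 2

anchor = list("ACGTACGTACGT")
target = f(anchor)
seq, e = anchor[:], energy(anchor, target)
samples = []
for _ in range(2000):
    prop = seq[:]
    i = rng.integers(len(prop))
    prop[i] = str(rng.choice(ALPHABET))       # single-site proposal
    e_prop = energy(prop, target)
    if rng.random() < np.exp(e - e_prop):     # Metropolis acceptance
        seq, e = prop, e_prop
    samples.append(f(seq))
```

The chain explores sequence space freely (maximum entropy) while the constraint keeps the sampled sequences' f-values close to the anchor's.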

  8. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performance than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
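The two-phase scheme can be sketched on toy data. Below, a winner-take-all competitive-learning pass positions the RBF centres (phase 1) and a linear least-squares fit sets the output weights (phase 2); the data, number of centres, and kernel width are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)   # toy regression target

# Phase 1: simple competitive learning (winner-take-all centre update)
centres = X[rng.choice(len(X), size=8, replace=False)].copy()
lr = 0.1
for epoch in range(20):
    for x in X:
        k = np.argmin(np.linalg.norm(centres - x, axis=1))  # winning unit
        centres[k] += lr * (x - centres[k])                  # move winner toward input

# Phase 2: RBF network with those centres, linear output layer by least squares
def design(X, centres, width=1.0):
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

Phi = design(X, centres)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

The competitive pass spreads the centres over the data distribution, which is the property the study credits for the improved RBF predictions.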

  9. ELeaRNT: Evolutionary Learning of Rich Neural Network Topologies

    National Research Council Canada - National Science Library

    Matteucci, Matteo

    2006-01-01

    In this paper we present ELeaRNT, an evolutionary strategy which evolves rich neural network topologies in order to find an optimal domain-specific non-linear function approximator with good generalization performance...

  10. Doubly stochastic Poisson processes in artificial neural learning.

    Science.gov (United States)

    Card, H C

    1998-01-01

    This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.
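The stochastic-arithmetic setting the paper analyses can be illustrated with a toy bit-stream multiplication (values and stream length are assumptions): a value in [0, 1] is encoded as the firing probability of a Bernoulli bit-stream, and a single AND gate multiplies two independent streams.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000                     # stream length (illustrative)
p, q = 0.6, 0.3                 # values to multiply, encoded as probabilities
a = rng.random(n) < p           # Bernoulli bit-stream encoding p
b = rng.random(n) < q           # independent bit-stream encoding q
prod = np.mean(a & b)           # AND gate: mean firing rate approximates p * q
```

The estimate converges to p·q as the stream lengthens; the paper's doubly stochastic Poisson model describes the statistics of such signals when the underlying rates themselves fluctuate.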

  11. Differences between Neural Activity in Prefrontal Cortex and Striatum during Learning of Novel Abstract Categories

    OpenAIRE

    Antzoulatos, Evan G.; Miller, Earl K.

    2011-01-01

    Learning to classify diverse experiences into meaningful groups, like categories, is fundamental to normal cognition. To understand its neural basis, we simultaneously recorded from multiple electrodes in the lateral prefrontal cortex and dorsal striatum, two interconnected brain structures critical for learning. Each day, monkeys learned to associate novel, abstract dot-based categories with a right vs. left saccade. Early on, when they could acquire specific stimulus-response associations, ...

  12. Breast Cancer Diagnosis using Artificial Neural Networks with Extreme Learning Techniques

    OpenAIRE

    Chandra Prasetyo Utomo; Aan Kardiana; Rika Yuliwulandari

    2014-01-01

    Breast cancer is the second leading cause of death among women. Early detection followed by appropriate cancer treatment can reduce the risk of death. Medical professionals can make mistakes while identifying a disease. The help of technology such as data mining and machine learning can substantially improve diagnosis accuracy. Artificial Neural Networks (ANN) have been widely used in intelligent breast cancer diagnosis. However, the standard Gradient-Based Back Propagation Artificial Neural Networks...

  13. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung

    2018-02-01

    Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides sufficient computational power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  14. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning.

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Kwan Chan, Pak; Tin, Chung

    2018-02-01

    Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. This integrated system provides sufficient computational power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  15. Shaping Early Reorganization of Neural Networks Promotes Motor Function after Stroke

    Science.gov (United States)

    Volz, L. J.; Rehme, A. K.; Michely, J.; Nettekoven, C.; Eickhoff, S. B.; Fink, G. R.; Grefkes, C.

    2016-01-01

    Neural plasticity is a major factor driving cortical reorganization after stroke. We here tested whether repetitively enhancing motor cortex plasticity by means of intermittent theta-burst stimulation (iTBS) prior to physiotherapy might promote recovery of function early after stroke. Functional magnetic resonance imaging (fMRI) was used to elucidate underlying neural mechanisms. Twenty-six hospitalized, first-ever stroke patients (time since stroke: 1–16 days) with hand motor deficits were enrolled in a sham-controlled design and pseudo-randomized into 2 groups. iTBS was administered prior to physiotherapy on 5 consecutive days either over ipsilesional primary motor cortex (M1-stimulation group) or parieto-occipital vertex (control-stimulation group). Hand motor function, cortical excitability, and resting-state fMRI were assessed 1 day prior to the first stimulation and 1 day after the last stimulation. Recovery of grip strength was significantly stronger in the M1-stimulation compared to the control-stimulation group. Higher levels of motor network connectivity were associated with better motor outcome. Consistently, control-stimulated patients featured a decrease in intra- and interhemispheric connectivity of the motor network, which was absent in the M1-stimulation group. Hence, adding iTBS to prime physiotherapy in recovering stroke patients seems to interfere with motor network degradation, possibly reflecting alleviation of post-stroke diaschisis. PMID:26980614

  16. Psychological theory and pedagogical effectiveness: the learning promotion potential framework.

    Science.gov (United States)

    Tomlinson, Peter

    2008-12-01

    After a century of educational psychology, eminent commentators are still lamenting problems besetting the appropriate relating of psychological insights to teaching design, a situation not helped by the persistence of crude assumptions concerning the nature of pedagogical effectiveness. To propose an analytical or meta-theoretical framework based on the concept of learning promotion potential (LPP) as a basis for understanding the basic relationship between psychological insights and teaching strategies, and to draw out implications for psychology-based pedagogical design, development and research. This is a theoretical and meta-theoretical paper relying mainly on conceptual analysis, though also calling on psychological theory and research. Since teaching consists essentially in activity designed to promote learning, it follows that a teaching strategy has the potential in principle to achieve particular kinds of learning gains (LPP) to the extent that it embodies or stimulates the relevant learning processes on the part of learners and enables the teacher's functions of on-line monitoring and assistance for such learning processes. Whether a teaching strategy actually does realize its LPP by way of achieving its intended learning goals depends also on the quality of its implementation, in conjunction with other factors in the situated interaction that teaching always involves. The core role of psychology is to provide well-grounded indication of the nature of such learning processes and the teaching functions that support them, rather than to directly generate particular ways of teaching. A critically eclectic stance towards potential sources of psychological insight is argued for. Applying this framework, the paper proposes five kinds of issue to be attended to in the design and evaluation of psychology-based pedagogy. 
Other work proposing comparable ideas is briefly reviewed, with particular attention to similarities and a key difference with the ideas of Oser

  17. On the relationships between generative encodings, regularity, and learning abilities when evolving plastic artificial neural networks.

    Directory of Open Access Journals (Sweden)

    Paul Tonelli

    Full Text Available A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, the most regular networks are statistically those with the best learning abilities.

  18. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  19. Integrating New Technologies and Existing Tools to Promote Programming Learning

    Directory of Open Access Journals (Sweden)

    Álvaro Santos

    2010-04-01

    Full Text Available In recent years, many tools have been proposed to reduce programming learning difficulties felt by many students. Our group has contributed to this effort through the development of several tools, such as VIP, SICAS, OOP-Anim, SICAS-COL and H-SICAS. Even though we had some positive results, the utilization of these tools doesn’t seem to significantly reduce weaker students’ difficulties. These students need stronger support to motivate them to get engaged in learning activities, inside and outside the classroom. Nowadays, many technologies are available to create contexts that may help to accomplish this goal. We consider that a promising path goes through the integration of solutions. In this paper we analyze the features, strengths and weaknesses of the tools developed by our group. Based on these considerations we present a new environment, integrating different types of pedagogical approaches, resources, tools and technologies for programming learning support. With this environment, currently under development, it will be possible to review contents and lessons, based on video and screen captures. The support for collaborative tasks is another key point to improve and stimulate different models of teamwork. The platform will also allow the creation of various alternative models (learning objects) for the same subject, enabling personalized learning paths adapted to each student’s knowledge level, needs and preferential learning styles. The learning sequences will work as a study organizer, following a suitable taxonomy, according to students’ cognitive skills. Although the main goal of this environment is to support students with more difficulties, it will provide a set of resources supporting the learning of more advanced topics. Software engineering techniques and representations, object orientation and event programming are features that will be available in order to promote the learning progress of students.

  20. Promoting learning transfer in post registration education: a collaborative approach.

    Science.gov (United States)

    Finn, Frances L; Fensom, Sue A; Chesser-Smyth, Patricia

    2010-01-01

    Pre-registration nurse education in Ireland became a four year undergraduate honors degree programme in 2002 (Government of Ireland, 2000. The Nursing Education Forum Report. Dublin, Dublin Stationary Office.). Consequently, the Irish Government invested significant resources in post registration nursing education in order to align certificate and diploma trained nurses with the qualification levels of new graduates. However, a general concern amongst academic and clinical staff in the South East of Ireland was that there was limited impact of this initiative on practice. These concerns were addressed through a collaborative approach to the development and implementation of a new part-time post registration degree that incorporated an enquiry and practice based learning philosophy. The principles of learning transfer (Ford, K., 1994. Defining transfer of learning the meaning is in the answers. Adult Learning 5 (4), p. 2214.) underpinned the curriculum development and implementation process with the goal of reducing the theory practice gap. This paper reports on all four stages of the curriculum development process: exploration, design, implementation and evaluation (Quinn, F.M., 2002. Principles and Practices of Nurse Education, fourth ed. Nelson Thornes, Cheltenham), and the subsequent impact of learning transfer on practice development. Eclectic approaches of quantitative and qualitative data collection techniques were utilised in the evaluation. The evaluation of this project to date supports our view that this practice based enquiry curriculum promotes the transfer of learning in the application of knowledge to practice, impacting both student and service development.

  1. Strategies influence neural activity for feedback learning across child and adolescent development.

    Science.gov (United States)

    Peters, Sabine; Koolschijn, P Cédric M P; Crone, Eveline A; Van Duijvenvoorde, Anna C K; Raijmakers, Maartje E J

    2014-09-01

    Learning from feedback is an important aspect of executive functioning that shows profound improvements during childhood and adolescence. This is accompanied by neural changes in the feedback-learning network, which includes the pre-supplementary motor area (pre-SMA)/anterior cingulate cortex (ACC), dorsolateral prefrontal cortex (DLPFC), superior parietal cortex (SPC), and the basal ganglia. However, there can be considerable differences within age ranges in performance that are ascribed to differences in strategy use. This is problematic for traditional approaches of analyzing developmental data, in which age groups are assumed to be homogeneous in strategy use. In this study, we used latent variable models to investigate whether underlying strategy groups could be detected for a feedback-learning task and whether there were differences in neural activation patterns between strategies. In a sample of 268 participants between ages 8 to 25 years, we observed four underlying strategy groups, which cut across age groups and varied in the optimality of executive functioning. These strategy groups also differed in neural activity during learning; especially the optimally performing group showed more activity in DLPFC, SPC and pre-SMA/ACC compared to the other groups. However, age differences remained an important contributor to neural activation, even when correcting for strategy. These findings contribute to the debate of age versus performance predictors of neural development, and highlight the importance of studying individual differences in strategy use when studying development. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Hypothetical Pattern Recognition Design Using Multi-Layer Perceptron Neural Network For Supervised Learning

    Directory of Open Access Journals (Sweden)

    Md. Abdullah-al-mamun

    2015-08-01

    Full Text Available Abstract Humans can identify diverse shapes in different patterns in the real world almost effortlessly, because their intelligence has grown since birth through many learning processes. In the same way, we can prepare a machine with a human-like brain, called an Artificial Neural Network, that can recognize different patterns from real-world objects. Although various techniques exist for implementing pattern recognition, artificial neural network approaches have recently received significant attention, because an artificial neural network, like a human brain, learns from different observations and makes decisions based on previously learned rules. Over 50 years of research, pattern recognition for machine learning using artificial neural networks has achieved significant results, and for this reason many real-world problems can be solved by modeling the pattern recognition process. The objective of this paper is to present the theoretical concept for pattern recognition design using a Multi-Layer Perceptron neural network in artificial intelligence algorithms, as the best possible way of utilizing available resources to make decisions with human-like performance.
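As a concrete illustration of supervised learning in a multi-layer perceptron (not the paper's design; layer sizes, learning rate, and the XOR task are assumptions), the sketch below trains a one-hidden-layer network with plain backpropagation:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
t = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.standard_normal((2, 8)); b1 = np.zeros(8)   # hidden layer (8 units)
W2 = rng.standard_normal((8, 1)); b2 = np.zeros(1)   # output layer
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sig(X @ W1 + b1)             # forward pass: hidden activations
    yhat = sig(h @ W2 + b2)          # forward pass: network output
    # Backpropagate the squared error through both layers
    d2 = (yhat - t) * yhat * (1 - yhat)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d2;  b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1;  b1 -= 0.5 * d1.sum(0)

preds = (sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
```

After training, the network reproduces the XOR pattern, a task a single-layer perceptron cannot learn.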

  3. Supervised neural network modeling: an empirical investigation into learning from imbalanced data with labeling errors.

    Science.gov (United States)

    Khoshgoftaar, Taghi M; Van Hulse, Jason; Napolitano, Amri

    2010-05-01

    Neural network algorithms such as multilayer perceptrons (MLPs) and radial basis function networks (RBFNets) have been used to construct learners which exhibit strong predictive performance. Two data-related issues that can have a detrimental impact on supervised learning initiatives are class imbalance and labeling errors (or class noise). Imbalanced data can make it more difficult for the neural network learning algorithms to distinguish between examples of the various classes, and class noise can lead to the formulation of incorrect hypotheses. Both class imbalance and labeling errors are pervasive problems encountered in a wide variety of application domains. Many studies have been performed to investigate these problems in isolation, but few have focused on their combined effects. This study presents a comprehensive empirical investigation using neural network algorithms to learn from imbalanced data with labeling errors. In particular, the first component of our study investigates the impact of class noise and class imbalance on two common neural network learning algorithms, while the second component considers the ability of data sampling (which is commonly used to address the issue of class imbalance) to improve their performance. Our results, for which over two million models were trained and evaluated, show that conclusions drawn using the more commonly studied C4.5 classifier may not apply when using neural networks.
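One of the data-sampling remedies examined, random oversampling, can be sketched in a few lines; the class sizes and features below are made up for illustration. Minority-class examples are resampled with replacement until the class counts match:

```python
import numpy as np

rng = np.random.default_rng(5)
y = np.array([0] * 95 + [1] * 5)            # 95:5 class imbalance
X = rng.standard_normal((100, 3))           # toy feature matrix

minority = np.flatnonzero(y == 1)
need = (y == 0).sum() - (y == 1).sum()      # how many extra minority samples
extra = rng.choice(minority, size=need, replace=True)  # resample with replacement
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])
```

The balanced set gives the learner equal exposure to both classes, though duplicated noisy labels are duplicated too, which is exactly the interaction the study probes.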

  4. Lifelong learning of human actions with deep neural network self-organization.

    Science.gov (United States)

    Parisi, German I; Tani, Jun; Weber, Cornelius; Wermter, Stefan

    2017-12-01

    Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning but rather learn a batch of training data with a predefined number of action classes and samples. Thus, there is the need to develop learning systems with the ability to incrementally process available perceptual cues and to adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and the adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art methods for batch learning, even when a significant number of sample labels are missing or corrupted during training sessions. Additional experiments show the ability of our model to adapt to non-stationary input, avoiding catastrophic interference. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  5. Minimal-Learning-Parameter Technique Based Adaptive Neural Sliding Mode Control of MEMS Gyroscope

    Directory of Open Access Journals (Sweden)

    Bin Xu

    2017-01-01

    Full Text Available This paper investigates an adaptive neural sliding mode controller for MEMS gyroscopes with a minimal-learning-parameter technique. Considering the system uncertainty in dynamics, a neural network is employed for approximation. The minimal-learning-parameter technique is constructed to decrease the number of update parameters, and in this way the computation burden is greatly reduced. Sliding mode control is designed to cancel the effect of the time-varying disturbance. The closed-loop stability analysis is established via a Lyapunov approach. Simulation results are presented to demonstrate the effectiveness of the method.
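The sliding-mode component alone can be sketched on a double integrator standing in for one gyroscope axis; the gains, setpoint, and sinusoidal disturbance below are assumptions, and the neural approximation and minimal-learning-parameter adaptation are omitted:

```python
import numpy as np

dt, T = 1e-3, 5.0
lam, k = 2.0, 5.0          # sliding-surface slope and switching gain (assumed)
x, v = 0.0, 0.0            # state: position, velocity
x_ref = 1.0                # setpoint
t = 0.0
while t < T:
    e, edot = x - x_ref, v
    s = edot + lam * e                   # sliding variable s = e_dot + lam * e
    d = 0.5 * np.sin(10 * t)             # bounded, unmodelled disturbance
    u = -lam * edot - k * np.sign(s)     # drives s -> 0 despite d, since |d| < k
    v += (u + d) * dt                    # double-integrator dynamics, Euler step
    x += v * dt
    t += dt
```

Once on the surface s = 0, the error decays as e(t) ∝ exp(-lam·t), so the state settles at the setpoint despite the persistent disturbance.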

  6. Automated sleep stage detection with a classical and a neural learning algorithm--methodological aspects.

    Science.gov (United States)

    Schwaibold, M; Schöchlin, J; Bolz, A

    2002-01-01

    For classification tasks in biosignal processing, several strategies and algorithms can be used. Knowledge-based systems allow prior knowledge about the decision process to be integrated, both by the developer and by self-learning capabilities. For the classification stages in a sleep stage detection framework, three inference strategies were compared regarding their specific strengths: a classical signal processing approach, artificial neural networks and neuro-fuzzy systems. Methodological aspects were assessed to attain optimum performance and maximum transparency for the user. Due to their effective and robust learning behavior, artificial neural networks could be recommended for pattern recognition, while neuro-fuzzy systems performed best for the processing of contextual information.

  7. Promoting system-level learning from project-level lessons

    International Nuclear Information System (INIS)

    Jong, Amos A. de; Runhaar, Hens A.C.; Runhaar, Piety R.; Kolhoff, Arend J.; Driessen, Peter P.J.

    2012-01-01

    A growing number of low and middle income nations (LMCs) have adopted some sort of system for environmental impact assessment (EIA). However, generally many of these EIA systems are characterised by a low performance in terms of timely information dissemination, monitoring and enforcement after licencing. Donor actors (such as the World Bank) have attempted to contribute to a higher performance of EIA systems in LMCs by intervening at two levels: the project level (e.g. by providing scoping advice or EIS quality review) and the system level (e.g. by advising on EIA legislation or by capacity building). The aims of these interventions are environmental protection in concrete cases and enforcing the institutionalisation of environmental protection, respectively. Learning by actors involved is an important condition for realising these aims. A relatively underexplored form of learning concerns learning at EIA system-level via project level donor interventions. This ‘indirect’ learning potentially results in system changes that better fit the specific context(s) and hence contribute to higher performances. Our exploratory research in Ghana and the Maldives shows that thus far, ‘indirect’ learning only occurs incidentally and that donors play a modest role in promoting it. Barriers to indirect learning are related to the institutional context rather than to individual characteristics. Moreover, ‘indirect’ learning seems to flourish best in large projects where donors achieved a position of influence that they can use to evoke reflection upon system malfunctions. In order to enhance learning at all levels donors should thereby present the outcomes of the intervention elaborately (i.e. discuss the outcomes with a large audience), include practical suggestions about post-EIS activities such as monitoring procedures and enforcement options and stimulate the use of their advisory reports to generate organisational memory and ensure a better information

  8. Promoting system-level learning from project-level lessons

    Energy Technology Data Exchange (ETDEWEB)

    Jong, Amos A. de, E-mail: amosdejong@gmail.com [Innovation Management, Utrecht (Netherlands); Runhaar, Hens A.C., E-mail: h.a.c.runhaar@uu.nl [Section of Environmental Governance, Utrecht University, Utrecht (Netherlands); Runhaar, Piety R., E-mail: piety.runhaar@wur.nl [Organisational Psychology and Human Resource Development, University of Twente, Enschede (Netherlands); Kolhoff, Arend J., E-mail: Akolhoff@eia.nl [The Netherlands Commission for Environmental Assessment, Utrecht (Netherlands); Driessen, Peter P.J., E-mail: p.driessen@geo.uu.nl [Department of Innovation and Environment Sciences, Utrecht University, Utrecht (Netherlands)

    2012-02-15

    A growing number of low and middle income countries (LMCs) have adopted some sort of system for environmental impact assessment (EIA). However, many of these EIA systems are characterised by low performance in terms of timely information dissemination, monitoring and enforcement after licensing. Donor actors (such as the World Bank) have attempted to contribute to a higher performance of EIA systems in LMCs by intervening at two levels: the project level (e.g. by providing scoping advice or EIS quality review) and the system level (e.g. by advising on EIA legislation or by capacity building). The aims of these interventions are environmental protection in concrete cases and fostering the institutionalisation of environmental protection, respectively. Learning by the actors involved is an important condition for realising these aims. A relatively underexplored form of learning concerns learning at the EIA system level via project-level donor interventions. This 'indirect' learning potentially results in system changes that better fit the specific context(s) and hence contribute to higher performance. Our exploratory research in Ghana and the Maldives shows that thus far, 'indirect' learning occurs only incidentally and that donors play a modest role in promoting it. Barriers to indirect learning are related to the institutional context rather than to individual characteristics. Moreover, 'indirect' learning seems to flourish best in large projects where donors have achieved a position of influence that they can use to evoke reflection upon system malfunctions. In order to enhance learning at all levels, donors should present the outcomes of the intervention elaborately (i.e. discuss the outcomes with a large audience), include practical suggestions about post-EIS activities such as monitoring procedures and enforcement options, and stimulate the use of their advisory reports to generate organisational memory and ensure a better

  9. A new avenue to the synthesis of GAG-mimicking polymers highly promoting neural differentiation of embryonic stem cells.

    Science.gov (United States)

    Wang, Mengmeng; Lyu, Zhonglin; Chen, Gaojian; Wang, Hongwei; Yuan, Yuqi; Ding, Kaiguo; Yu, Qian; Yuan, Lin; Chen, Hong

    2015-10-28

    A new strategy for the fabrication of glycosaminoglycan (GAG) analogs was proposed by copolymerizing the sulfonated unit and the glyco unit, 'split' from the sulfated saccharide building blocks of GAGs. The synthetic polymers can promote cell proliferation and neural differentiation of embryonic stem cells, with effects even better than those of heparin.

  10. Motor sequence learning-induced neural efficiency in functional brain connectivity.

    Science.gov (United States)

    Karim, Helmet T; Huppert, Theodore J; Erickson, Kirk I; Wollam, Mariegold E; Sparto, Patrick J; Sejdić, Ervin; VanSwearingen, Jessie M

    2017-02-15

    Previous studies have shown functional neural circuitry differences before and after an explicitly learned motor sequence task, but have not assessed these changes during the process of motor skill learning. Functional magnetic resonance imaging activity was measured while participants (n=13) were asked to tap their fingers to visually presented sequences in blocks that were either the same sequence repeated (learning block) or random sequences (control block). Motor learning was associated with a decrease in brain activity during learning compared to control. Lower brain activation was noted in the posterior parietal association area and bilateral thalamus during the later periods of learning (not during the control). Compared to the control condition, we found that task-related motor learning was associated with decreased connectivity between the putamen and the left inferior frontal gyrus and left middle cingulate brain regions. Motor learning was associated with changes in network activity, spatial extent, and connectivity. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Promotion of self-regulated learning in classrooms : investigating frequency, quality, and consequences for student performance

    NARCIS (Netherlands)

    Kistner, Saskia; Rakoczy, Katrin; Otto, Barbara; Dignath -van Ewijk, Charlotte; Buettner, Gerhard; Klieme, Eckhard

    An implication of the current research on self-regulation is to implement the promotion of self-regulated learning in schools. Teachers can promote self-regulated learning either directly by teaching learning strategies or indirectly by arranging a learning environment that enables students to

  12. Employing Wikibook Project in a Linguistics Course to Promote Peer Teaching and Learning

    Science.gov (United States)

    Wang, Lixun

    2016-01-01

    Peer teaching and learning are learner-centred approaches with great potential for promoting effective learning, and the fast development of Web 2.0 technology has opened new doors for promoting peer teaching and learning. In this study, we aim to establish peer teaching and learning among students by employing a Wikibook project in the course…

  13. Biphasic electrical currents stimulation promotes both proliferation and differentiation of fetal neural stem cells.

    Directory of Open Access Journals (Sweden)

    Keun-A Chang

    2011-04-01

    Full Text Available The use of non-chemical methods to differentiate stem cells has attracted researchers from multiple disciplines, including the engineering and the biomedical fields. No doubt, growth factor based methods are still the most dominant way of achieving some level of proliferation and differentiation control; however, chemical based methods are still limited by the quality, source, and amount of the utilized reagents. Well-defined non-chemical methods to differentiate stem cells allow stem cell scientists to control stem cell biology by precisely administering the pre-defined parameters, whether they are structural cues, substrate stiffness, or in the form of current flow. We have developed a culture system that allows normal stem cell growth and the option of applying continuous and defined levels of electric current to alter the cell biology of growing cells. This biphasic current stimulator chip employing ITO electrodes generates both positive and negative currents in the same culture chamber without affecting surface chemistry. We found that biphasic electrical currents (BECs) significantly increased the proliferation of fetal neural stem cells (NSCs). Furthermore, BECs also promoted the differentiation of fetal NSCs into neuronal cells, as assessed using immunocytochemistry. Our results clearly show that BECs promote both the proliferation and neuronal differentiation of fetal NSCs. This may apply to the development of strategies that employ NSCs in the treatment of various neurodegenerative diseases, such as Alzheimer's and Parkinson's diseases.

  14. Fibronectin promotes differentiation of neural crest progenitors endowed with smooth muscle cell potential

    International Nuclear Information System (INIS)

    Costa-Silva, Bruno; Coelho da Costa, Meline; Melo, Fernanda Rosene; Neves, Cynara Mendes; Alvarez-Silva, Marcio; Calloni, Giordano Wosgrau; Trentin, Andrea Goncalves

    2009-01-01

    The neural crest (NC) is a model system used to investigate multipotency during vertebrate development. Environmental factors control NC cell fate decisions. Despite the well-known influence of extracellular matrix molecules on NC cell migration, the issue of whether they also influence NC cell differentiation has not been addressed at the single cell level. By analyzing mass and clonal cultures of mouse cephalic and quail trunk NC cells, we show for the first time that fibronectin (FN) promotes differentiation into the smooth muscle cell phenotype without affecting differentiation into glia, neurons, and melanocytes. Time course analysis indicated that the FN-induced effect was not related to massive cell death or proliferation of smooth muscle cells. Finally, by comparing clonal cultures of quail trunk NC cells grown on FN and collagen type IV (CLIV), we found that FN strongly increased both NC cell survival and the proportion of unipotent and oligopotent NC progenitors endowed with smooth muscle potential. In contrast, melanocytic progenitors were prominent in clonogenic NC cells grown on CLIV. Taken together, these results show that FN promotes NC cell differentiation along the smooth muscle lineage, and therefore plays an important role in fate decisions of NC progenitor cells.

  15. DGCR8 Promotes Neural Progenitor Expansion and Represses Neurogenesis in the Mouse Embryonic Neocortex

    Directory of Open Access Journals (Sweden)

    Nadin Hoffmann

    2018-04-01

    Full Text Available DGCR8 and DROSHA are the minimal functional core of the Microprocessor complex, essential for the biogenesis of canonical microRNAs and for the processing of other RNAs. Conditional deletion of Dgcr8 and Drosha in the murine telencephalon indicated that these proteins exert crucial functions in corticogenesis. The identification of mechanisms of DGCR8- or DROSHA-dependent regulation of gene expression in conditional knockout mice is often complicated by massive apoptosis. Here, to investigate DGCR8 functions in the amplification/differentiation of neural progenitor cells (NPCs) in corticogenesis, we overexpress Dgcr8 in the mouse telencephalon by in utero electroporation (IUEp). We find that DGCR8 promotes the expansion of NPC pools and represses neurogenesis, in the absence of apoptosis, thus overcoming the usual limitations of the Dgcr8 knockout-based approach. Interestingly, DGCR8 selectively promotes basal progenitor amplification at later developmental stages, entailing intriguing implications for neocortical expansion in evolution. Finally, despite a 3- to 5-fold increase of DGCR8 level in the mouse telencephalon, the composition, target preference and function of the DROSHA-dependent Microprocessor complex remain unaltered. Thus, we propose that DGCR8-dependent modulation of gene expression in corticogenesis is more complex than previously known, and possibly DROSHA-independent.

  16. Neural stem cells promote nerve regeneration through IL12-induced Schwann cell differentiation.

    Science.gov (United States)

    Lee, Don-Ching; Chen, Jong-Hang; Hsu, Tai-Yu; Chang, Li-Hsun; Chang, Hsu; Chi, Ya-Hui; Chiu, Ing-Ming

    2017-03-01

    Regeneration of injured peripheral nerves is a slow, complicated process that could be improved by implantation of neural stem cells (NSCs) or a nerve conduit. Implantation of NSCs along with conduits promotes the regeneration of damaged nerve, likely because (i) the conduit supports and guides axonal growth from one nerve stump to the other, while preventing fibrous tissue ingrowth and retaining neurotrophic factors; and (ii) implanted NSCs differentiate into Schwann cells and maintain a growth factor enriched microenvironment, which promotes nerve regeneration. In this study, we identified IL12p80 (homodimer of IL12p40) in the cell extracts of implanted nerve conduit combined with NSCs by using protein antibody array and Western blotting. Levels of IL12p80 in these conduits are 1.6-fold higher than those in conduits without NSCs. In the sciatic nerve injury mouse model, implantation of NSCs combined with nerve conduit and IL12p80 improves motor recovery and increases the diameter up to 4.5-fold at the medial site of the regenerated nerve. In vitro study further revealed that IL12p80 stimulates the Schwann cell differentiation of mouse NSCs through the phosphorylation of signal transducer and activator of transcription 3 (Stat3). These results suggest that IL12p80 can trigger Schwann cell differentiation of mouse NSCs through Stat3 phosphorylation and enhance the functional recovery and the diameter of regenerated nerves in a mouse sciatic nerve injury model. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Application of artificial neural network with extreme learning machine for economic growth estimation

    Science.gov (United States)

    Milačić, Ljubiša; Jović, Srđan; Vujović, Tanja; Miljković, Jovica

    2017-01-01

    The purpose of this research is to develop and apply the artificial neural network (ANN) with extreme learning machine (ELM) to forecast gross domestic product (GDP) growth rate. The economic growth forecasting was analyzed based on agriculture, manufacturing, industry and services value added in GDP. The results were compared with an ANN with the back propagation (BP) learning approach, since BP can be considered a conventional learning methodology. The reliability of the computational models was assessed based on simulation results and several statistical indicators. Based on the results, it was shown that the ANN with ELM learning methodology can be applied effectively in GDP forecasting applications.
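The defining trait of an extreme learning machine is that the hidden-layer weights stay random and untrained; only the output weights are solved in closed form by least squares. A minimal NumPy sketch of that idea (the toy regression data, layer size, and sector weights below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "GDP growth" regression: 4 sector inputs -> growth rate (synthetic data)
X = rng.random((120, 4))
y = X @ np.array([0.5, 1.0, -0.3, 0.8]) + 0.05 * rng.standard_normal(120)

n_hidden = 50
W = rng.standard_normal((4, n_hidden))   # random input weights, never trained
b = rng.standard_normal(n_hidden)        # random hidden biases, never trained
H = np.tanh(X @ W + b)                   # random nonlinear feature map

# The only "learning" step: solve the output weights by least squares
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = np.tanh(X @ W + b) @ beta         # in-sample predictions
```

Because the single training step is one linear solve, ELM fitting is typically orders of magnitude faster than iterative backpropagation, which is the comparison the abstract draws.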

  18. PROMOTING INCIDENTAL VOCABULARY LEARNING THROUGH VERBAL DRAMATIZATION OF WORDS

    Directory of Open Access Journals (Sweden)

    Looi-Chin Ch’ng

    2014-12-01

    Full Text Available Despite the fact that explicit teaching of vocabulary is often practised in English as a Second Language (ESL) classrooms, it has been proven to be rather ineffective, largely because words are not taught in context. This has prompted the increasing use of the incidental vocabulary learning approach, which emphasises repeated readings as a source for vocabulary learning. By adopting this approach, this study aims to investigate students' ability to learn vocabulary incidentally via verbal dramatization of written texts. In this case, readers' theatre (RT) is used as a way to allow learners to engage in active reading so as to promote vocabulary learning. A total of 160 diploma students participated in this case study and they were divided equally into two groups, namely the classroom reading (CR) and RT groups. A proficiency test was first conducted to determine their vocabulary levels. Based on the test results, a story was selected as the reading material for the two groups. The CR group read the story through a normal reading lesson in class while the RT group was required to verbally dramatize the text through a readers' theatre activity. Then, a post-test based on vocabulary levels was carried out and the results were compared. The findings revealed that incidental learning was more apparent in the RT group, and their ability to learn words from the higher levels was noticeable through higher accuracy scores. Although not conclusive, this study has demonstrated the potential of using readers' theatre as a form of incidental vocabulary learning activity in ESL settings.

  19. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    Science.gov (United States)

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles of quantum theory. By combining the quantum mechanism with the feed-forward neural network, we defined quantum hidden neurons and connected quantum weights, and used them as the fundamental information processing units in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover searching algorithm finds the optimal parameter setting iteratively, thus making very efficient neural network learning possible. The quantum neurons and weights, along with the Grover-searching-based learning, result in a novel and efficient neural network characterized by reduced network size, highly efficient training and promising future applications. Simulations were carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
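The paper's parameter search runs on quantum hardware, but the amplitude-amplification dynamics Grover's algorithm relies on can be simulated classically for intuition. This sketch is only that classical simulation, not the paper's learning algorithm; the search-space size and the index of the "optimal setting" are arbitrary:

```python
import numpy as np

N, target = 16, 3                        # search space and marked item (arbitrary)
amp = np.full(N, 1 / np.sqrt(N))         # uniform superposition over N states

# ~optimal number of Grover iterations is floor(pi/4 * sqrt(N))
iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iters):
    amp[target] *= -1                    # oracle: phase-flip the marked item
    amp = 2 * amp.mean() - amp           # diffusion: inversion about the mean

prob = amp ** 2                          # measurement probabilities
```

After only three iterations for N = 16, the marked item's measurement probability exceeds 0.95, versus 1/16 for uniform guessing; this quadratic speed-up over exhaustive search is what motivates using Grover iterations inside a learning loop.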

  20. Forecasting crude oil price with an EMD-based neural network ensemble learning paradigm

    International Nuclear Information System (INIS)

    Yu, Lean; Wang, Shouyang; Lai, Kin Keung

    2008-01-01

    In this study, an empirical mode decomposition (EMD) based neural network ensemble learning paradigm is proposed for world crude oil spot price forecasting. For this purpose, the original crude oil spot price series were first decomposed into a finite, and often small, number of intrinsic mode functions (IMFs). Then a three-layer feed-forward neural network (FNN) model was used to model each of the extracted IMFs, so that the tendencies of these IMFs could be accurately predicted. Finally, the prediction results of all IMFs are combined using an adaptive linear neural network (ALNN) to formulate an ensemble output for the original crude oil price series. For verification and testing, two main crude oil price series, West Texas Intermediate (WTI) crude oil spot price and Brent crude oil spot price, are used to test the effectiveness of the proposed EMD-based neural network ensemble learning methodology. Empirical results obtained demonstrate the attractiveness of the proposed EMD-based neural network ensemble learning paradigm. (author)
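The decompose-model-ensemble pipeline can be illustrated compactly, with heavy simplifications: real EMD sifting is replaced here by a crude two-component split (smooth trend plus residual), the per-IMF feed-forward networks by linear autoregressive fits, and the ALNN combiner by plain summation. Everything below is an invented stand-in to show the shape of the pipeline, not the paper's method:

```python
import numpy as np

def decompose(series, win=5):
    # Stand-in for EMD: split into a smooth component and a residual
    # oscillation (real EMD would extract several IMFs by iterative sifting)
    padded = np.pad(series, win // 2, mode="edge")
    trend = np.convolve(padded, np.ones(win) / win, mode="valid")
    return [series - trend, trend]

def fit_ar(comp, lag=3):
    # Stand-in for the per-IMF FNN: a linear AR(lag) model via least squares
    X = np.column_stack([comp[i:len(comp) - lag + i] for i in range(lag)])
    w, *_ = np.linalg.lstsq(X, comp[lag:], rcond=None)
    return w

def one_step_forecast(series, lag=3):
    # Decompose, forecast each component, then combine the forecasts
    # (summed here; the paper uses an adaptive linear NN as the combiner)
    comps = decompose(series)
    return sum(float(c[-lag:] @ fit_ar(c, lag)) for c in comps)

price = 60 + 5 * np.sin(np.arange(200) / 10)   # synthetic "oil price" series
forecast = one_step_forecast(price)
```

The key property preserved from the paper is additivity: the components sum exactly back to the original series, so component-wise forecasts can legitimately be recombined into a forecast of the whole.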

  1. Promotion of self-directed learning using virtual patient cases.

    Science.gov (United States)

    Benedict, Neal; Schonder, Kristine; McGee, James

    2013-09-12

    To assess the effectiveness of virtual patient cases to promote self-directed learning (SDL) in a required advanced therapeutics course. Virtual patient software based on a branched-narrative decision-making model was used to create complex patient case simulations to replace lecture-based instruction. Within each simulation, students used SDL principles to learn course objectives, apply their knowledge through clinical recommendations, and assess their progress through patient outcomes and faculty feedback linked to their individual decisions. Group discussions followed each virtual patient case to provide further interpretation, clarification, and clinical perspective. Students found the simulated patient cases to be organized (90%), enjoyable (82%), intellectually challenging (97%), and valuable to their understanding of course content (91%). Students further indicated that completion of the virtual patient cases prior to class permitted better use of class time (78%) and promoted SDL (84%). When assessment questions regarding material on postoperative nausea and vomiting were compared, no difference in scores was found between the students who attended the lecture on the material in 2011 (control group) and those who completed the virtual patient case on the material in 2012 (intervention group). Completion of virtual patient cases, designed to replace lectures and promote SDL, was overwhelmingly supported by students and proved to be as effective as traditional teaching methods.
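A branched-narrative case is essentially a decision graph whose edges are the student's clinical choices and whose nodes carry outcomes and feedback. A toy sketch of that structure (all case content, node names, and choices here are invented, not taken from the course software):

```python
# Hypothetical branched-narrative case: each node shows a prompt and maps
# every available decision to the node it leads to.
case = {
    "start":    {"prompt": "Post-op patient reports nausea. First action?",
                 "choices": {"give_antiemetic": "improved",
                             "observe_only": "persists"}},
    "improved": {"prompt": "Symptoms resolve. Case complete.",
                 "choices": {}},
    "persists": {"prompt": "Nausea persists; reassess and escalate.",
                 "choices": {"give_antiemetic": "improved"}},
}

def play(case, decisions):
    """Walk the decision graph and return the sequence of visited nodes."""
    node, path = "start", ["start"]
    for choice in decisions:
        node = case[node]["choices"][choice]
        path.append(node)
    return path
```

Because every decision deterministically selects the next node, the software can attach outcome-specific feedback to each node, which is what lets students "assess their progress through patient outcomes linked to their individual decisions."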

  2. A neural learning classifier system with self-adaptive constructivism for mobile robot control.

    Science.gov (United States)

    Hurst, Jacob; Bull, Larry

    2006-01-01

    For artificial entities to achieve true autonomy and display complex lifelike behavior, they will need to exploit appropriate adaptable learning algorithms. In this context adaptability implies flexibility guided by the environment at any given time and an open-ended ability to learn appropriate behaviors. This article examines the use of constructivism-inspired mechanisms within a neural learning classifier system architecture that exploits parameter self-adaptation as an approach to realize such behavior. The system uses a rule structure in which each rule is represented by an artificial neural network. It is shown that appropriate internal rule complexity emerges during learning at a rate controlled by the learner and that the structure indicates underlying features of the task. Results are presented in simulated mazes before moving to a mobile robot platform.

  3. Continual and One-Shot Learning Through Neural Networks with Dynamic External Memory

    DEFF Research Database (Denmark)

    Lüders, Benno; Schläger, Mikkel; Korach, Aleksandra

    2017-01-01

    … a new task is learned. This paper takes a step in overcoming this limitation by building on the recently proposed Evolving Neural Turing Machine (ENTM) approach. In the ENTM, neural networks are augmented with an external memory component that they can write to and read from, which allows them to store associations quickly and over long periods of time. The results in this paper demonstrate that the ENTM is able to perform one-shot learning in reinforcement learning tasks without catastrophic forgetting of previously stored associations. Additionally, we introduce a new ENTM default jump mechanism that makes it easier to find unused memory locations and therefore facilitates the evolution of continual learning networks. Our results suggest that augmenting evolving networks with an external memory component is not only a viable mechanism for adaptive behaviors in neuroevolution but also allows these networks …
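The two ENTM ingredients the abstract highlights, an external tape the network reads and writes, and a "default jump" that steers the head to unused memory, can be sketched independently of the evolved controller. This toy class is purely illustrative and is not the actual ENTM implementation:

```python
import numpy as np

class ExternalMemory:
    """Toy write/read tape with a 'default jump' to the first unused cell."""

    def __init__(self, width):
        self.tape = [np.zeros(width)]
        self.head = 0

    def write(self, vec):
        self.tape[self.head] = np.asarray(vec, dtype=float)

    def read(self):
        return self.tape[self.head]

    def default_jump(self):
        # Move the head to the first all-zero (unused) cell, growing the
        # tape if every cell is occupied. Steering new writes away from
        # stored associations is what protects against overwriting, i.e.
        # catastrophic forgetting, when a new task arrives.
        for i, cell in enumerate(self.tape):
            if not cell.any():
                self.head = i
                return
        self.tape.append(np.zeros(len(self.tape[0])))
        self.head = len(self.tape) - 1

mem = ExternalMemory(width=4)
mem.write([1, 0, 0, 1])    # store an association for task A
mem.default_jump()         # jump to fresh memory before task B
mem.write([0, 1, 1, 0])    # task B does not overwrite task A
```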

  4. Chaos Synchronization Using Adaptive Dynamic Neural Network Controller with Variable Learning Rates

    Directory of Open Access Journals (Sweden)

    Chih-Hong Kao

    2011-01-01

    Full Text Available This paper addresses the synchronization of chaotic gyros with unknown parameters and external disturbance via an adaptive dynamic neural network control (ADNNC) system. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic RBF (DRBF) network to approximate an ideal controller online. The DRBF network can create new hidden neurons online if the input data fall outside the hidden layer, and prune insignificant hidden neurons online if they become inappropriate. The smooth compensator is designed to compensate for the approximation error between the neural controller and the ideal controller. Moreover, the variable learning rates of the parameter adaptation laws are derived based on a discrete-type Lyapunov function to speed up the convergence rate of the tracking error. Finally, the simulation results verified that two identical nonlinear chaotic gyros can be synchronized using the proposed ADNNC scheme.

  5. The structure of observed learning outcome (SOLO) taxonomy: a model to promote dental students' learning.

    Science.gov (United States)

    Lucander, H; Bondemark, L; Brown, G; Knutsson, K

    2010-08-01

    Selective memorising of isolated facts or reproducing what is thought to be required - the surface approach to learning - is not the desired outcome for a dental student or a dentist in practice. The preferred outcome is a deep approach, as defined by an intention to seek understanding, develop expertise and relate information and knowledge into a coherent whole. The aim of this study was to investigate whether the structure of observed learning outcome (SOLO) taxonomy could be used as a model to assist and encourage dental students to develop a deep approach to learning, assessed as learning outcomes in a summative assessment. Thirty-two students, participating in course eight in 2007 at the Faculty of Odontology at Malmö University, were introduced to the SOLO taxonomy and constituted the test group. The control group consisted of 35 students participating in course eight in 2006. The effect of the introduction was measured by evaluating responses to a question in the summative assessment by using the SOLO taxonomy. The evaluators consisted of two teachers who performed the assessment of learning outcomes independently and separately on the coded material. The SOLO taxonomy as a model for learning was found to improve the quality of learning. Compared to the control group, significantly more strings and structured relations between these strings were present in the test group after the SOLO taxonomy had been introduced. The SOLO taxonomy is recommended as a model for promoting and developing a deeper approach to learning in dentistry.

  6. Neural Pattern Similarity in the Left IFG and Fusiform Is Associated with Novel Word Learning

    Science.gov (United States)

    Qu, Jing; Qian, Liu; Chen, Chuansheng; Xue, Gui; Li, Huiling; Xie, Peng; Mei, Leilei

    2017-01-01

    Previous studies have revealed that greater neural pattern similarity across repetitions is associated with better subsequent memory. In this study, we used an artificial language training paradigm and representational similarity analysis to examine whether neural pattern similarity across repetitions before training was associated with post-training behavioral performance. Twenty-four native Chinese speakers were trained to learn a logographic artificial language for 12 days and behavioral performance was recorded using the word naming and picture naming tasks. Participants were scanned while performing a passive viewing task before training, after 4-day training and after 12-day training. Results showed that pattern similarity in the left pars opercularis (PO) and fusiform gyrus (FG) before training was negatively associated with reaction time (RT) in both word naming and picture naming tasks after training. These results suggest that neural pattern similarity is an effective neurofunctional predictor of novel word learning in addition to word memory. PMID:28878640

  7. Neural Pattern Similarity in the Left IFG and Fusiform Is Associated with Novel Word Learning

    Directory of Open Access Journals (Sweden)

    Jing Qu

    2017-08-01

    Full Text Available Previous studies have revealed that greater neural pattern similarity across repetitions is associated with better subsequent memory. In this study, we used an artificial language training paradigm and representational similarity analysis to examine whether neural pattern similarity across repetitions before training was associated with post-training behavioral performance. Twenty-four native Chinese speakers were trained to learn a logographic artificial language for 12 days and behavioral performance was recorded using the word naming and picture naming tasks. Participants were scanned while performing a passive viewing task before training, after 4-day training and after 12-day training. Results showed that pattern similarity in the left pars opercularis (PO) and fusiform gyrus (FG) before training was negatively associated with reaction time (RT) in both word naming and picture naming tasks after training. These results suggest that neural pattern similarity is an effective neurofunctional predictor of novel word learning in addition to word memory.
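In representational similarity analyses of this kind, "pattern similarity across repetitions" is commonly computed as the mean pairwise correlation between a region's voxel patterns over repeated presentations of the same item. A minimal sketch of that computation (the voxel data below are simulated, not from the study):

```python
import numpy as np

def pattern_similarity(patterns):
    """Mean pairwise Pearson r across repetitions.

    patterns: array of shape (n_repetitions, n_voxels), one row per
    presentation of the same item (e.g. a novel word).
    """
    r = np.corrcoef(patterns)                  # repetition-by-repetition r
    upper = r[np.triu_indices_from(r, k=1)]    # unique pairs only
    return float(upper.mean())

rng = np.random.default_rng(0)
base = rng.standard_normal(100)                        # a stable voxel pattern
stable = base + 0.1 * rng.standard_normal((4, 100))    # 4 similar repetitions
noisy = rng.standard_normal((4, 100))                  # 4 unrelated patterns
```

Here `pattern_similarity(stable)` is close to 1 while `pattern_similarity(noisy)` hovers near 0, mirroring the contrast the study exploits: items whose pre-training patterns repeat more consistently go on to be named faster after training.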

  8. LEARNING ALGORITHM EFFECT ON MULTILAYER FEED FORWARD ARTIFICIAL NEURAL NETWORK PERFORMANCE IN IMAGE CODING

    Directory of Open Access Journals (Sweden)

    OMER MAHMOUD

    2007-08-01

    Full Text Available One of the essential factors that affect the performance of Artificial Neural Networks is the learning algorithm. This paper examines the performance of the Multilayer Feed Forward Artificial Neural Network in image compression using different learning algorithms. Based on Gradient Descent, Conjugate Gradient and Quasi-Newton techniques, three different error back propagation algorithms have been developed for use in training two types of neural networks: a single-hidden-layer network and a three-hidden-layer network. The essence of this study is to investigate the most efficient and effective training methods for use in image compression and its subsequent applications. The obtained results show that the Quasi-Newton based algorithm has better performance compared to the other two algorithms.
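The simplest of the three trainers, plain gradient-descent backpropagation on a single-hidden-layer compression network, can be sketched as follows; the conjugate-gradient and quasi-Newton variants differ only in how the weight update uses these same gradients. The data, layer sizes, and learning rate are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((64, 16))        # 64 "image blocks" of 4x4 = 16 pixels
n_hidden = 4                    # bottleneck: 4 units -> 4:1 compression

W1 = rng.normal(0.0, 0.1, (16, n_hidden))
W2 = rng.normal(0.0, 0.1, (n_hidden, 16))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(500):
    H = sigmoid(X @ W1)                # encode through the hidden layer
    Y = H @ W2                         # decode (linear output layer)
    E = Y - X                          # reconstruction error
    losses.append(float((E ** 2).mean()))
    gW2 = H.T @ E / len(X)             # backprop: output-layer gradient
    gH = (E @ W2.T) * H * (1.0 - H)    # backprop through the sigmoid
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1                     # plain gradient-descent step
    W2 -= lr * gW2
```

Gradient descent takes many such small steps; quasi-Newton methods instead rescale the step with curvature information, which is why they typically reach a given reconstruction error in far fewer iterations, consistent with the paper's finding.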

  9. Incremental learning of perceptual and conceptual representations and the puzzle of neural repetition suppression.

    Science.gov (United States)

    Gotts, Stephen J

    2016-08-01

    Incremental learning models of long-term perceptual and conceptual knowledge hold that neural representations are gradually acquired over many individual experiences via Hebbian-like activity-dependent synaptic plasticity across cortical connections of the brain. In such models, variation in task relevance of information, anatomic constraints, and the statistics of sensory inputs and motor outputs lead to qualitative alterations in the nature of representations that are acquired. Here, the proposal that behavioral repetition priming and neural repetition suppression effects are empirical markers of incremental learning in the cortex is discussed, and research results that both support and challenge this position are reviewed. Discussion is focused on a recent fMRI-adaptation study from our laboratory that shows decoupling of experience-dependent changes in neural tuning, priming, and repetition suppression, with representational changes that appear to work counter to the explicit task demands. Finally, critical experiments that may help to clarify and resolve current challenges are outlined.

  10. Deciphering the Role of Sulfonated Unit in Heparin-Mimicking Polymer to Promote Neural Differentiation of Embryonic Stem Cells.

    Science.gov (United States)

    Lei, Jiehua; Yuan, Yuqi; Lyu, Zhonglin; Wang, Mengmeng; Liu, Qi; Wang, Hongwei; Yuan, Lin; Chen, Hong

    2017-08-30

    Glycosaminoglycans (GAGs), especially heparin and heparan sulfate (HS), hold great potential for inducing the neural differentiation of embryonic stem cells (ESCs) and have brought new hope for the treatment of neurological diseases. However, the disadvantages of natural heparin/HS, such as the difficulty of isolating them in sufficient amounts, their highly heterogeneous structure, and the risk of immune responses, have limited their further therapeutic applications. Thus, there is a great demand for stable, controllable, and well-defined synthetic alternatives of heparin/HS with more effective biological functions. In this study, based upon a previously proposed unit-recombination strategy, several heparin-mimicking polymers were synthesized by integrating glucosamine-like 2-methacrylamido glucopyranose monomers (MAG) with three sulfonated units in different structural forms, and their effects on cell proliferation, the pluripotency, and the differentiation of ESCs were carefully studied. The results showed that all the copolymers had good cytocompatibility and displayed much better bioactivity in promoting the neural differentiation of ESCs as compared to natural heparin; copolymers with different sulfonated units exhibited different levels of promoting ability; among them, the copolymer with 3-sulfopropyl acrylate (SPA) as the sulfonated unit was the most potent in promoting the neural differentiation of ESCs; the promoting effect is dependent on the molecular weight and concentration of P(MAG-co-SPA), with the highest levels occurring at the intermediate molecular weight and concentration. These results clearly demonstrated that the sulfonated unit in the copolymers played an important role in determining the promoting effect on ESCs' neural differentiation; SPA was identified as the most potent sulfonated unit for the copolymer with the strongest promoting ability. The possible reason for sulfonated unit structure as a vital factor influencing the ability of the copolymers

  11. Ventral Tegmental Area and Substantia Nigra Neural Correlates of Spatial Learning

    Science.gov (United States)

    Martig, Adria K.; Mizumori, Sheri J. Y.

    2011-01-01

    The ventral tegmental area (VTA) and substantia nigra pars compacta (SNc) may provide modulatory signals that, respectively, influence hippocampal (HPC)- and striatal-dependent memory. Electrophysiological studies investigating neural correlates of learning and memory of dopamine (DA) neurons during classical conditioning tasks have found DA…

  12. A Closer Look at Deep Learning Neural Networks with Low-level Spectral Periodicity Features

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Kereliuk, Corey; Pikrakis, Aggelos

    2014-01-01

    Systems built using deep learning neural networks trained on low-level spectral periodicity features (DeSPerF) reproduced the most “ground truth” of the systems submitted to the MIREX 2013 task, “Audio Latin Genre Classification.” To answer why this was the case, we take a closer look...

  13. Identifying beneficial task relations for multi-task learning in deep neural networks

    DEFF Research Database (Denmark)

    Bingel, Joachim; Søgaard, Anders

    2017-01-01

    Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP...

  14. Hyperresponsiveness of the Neural Fear Network During Fear Conditioning and Extinction Learning in Male Cocaine Users

    NARCIS (Netherlands)

    Kaag, A.M.; Levar, N.; Woutersen, K.; Homberg, J.R.; Brink, W. van den; Reneman, L.; Wingen, G. van

    2016-01-01

    OBJECTIVE: The authors investigated whether cocaine use disorder is associated with abnormalities in the neural underpinnings of aversive conditioning and extinction learning, as these processes may play an important role in the development and persistence of drug abuse. METHOD: Forty male regular

  15. Learning behavior and temporary minima of two-layer neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the occurrence of temporary minima during training of a single-output, two-layer neural network, with learning according to the back-propagation algorithm. A new vector decomposition method is introduced, which simplifies the mathematical analysis of

  16. The interchangeability of learning rate and gain in backpropagation neural networks

    NARCIS (Netherlands)

    Thimm, G.; Moerland, P.; Fiesler, E.

    1996-01-01

    The backpropagation algorithm is widely used for training multilayer neural networks. In this publication the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights.
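    The stated equivalence is easy to check numerically. In the sketch below (illustrative only; the paper treats full multilayer networks), a single sigmoid unit trained with activation gain g and learning rate η follows the same trajectory, up to the factor g, as a gain-1 unit whose initial weight is scaled by g and whose learning rate is η·g².

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(w, lr, gain, steps=50):
    # One sigmoid unit, squared-error loss on a single sample (x, t).
    x, t = 0.7, 0.2
    for _ in range(steps):
        y = sigmoid(gain * w * x)
        # dL/dw = (y - t) * y * (1 - y) * gain * x
        w -= lr * (y - t) * y * (1 - y) * gain * x
    return w

g, eta, w0 = 3.0, 0.5, 0.4
w_gain = train(w0, eta, g)                # gain-g network
w_unit = train(g * w0, eta * g * g, 1.0)  # gain-1 network, scaled weight and rate

# The gain-1 trajectory tracks g times the gain-g weights:
print(abs(g * w_gain - w_unit))  # ~0
```

    The bookkeeping behind the check: in the gain-g network the effective weight is g·w, and its per-step change is exactly the gain-1 update with learning rate η·g².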

  17. The neural coding of feedback learning across child and adolescent development

    NARCIS (Netherlands)

    Peters, S.; Braams, B.R.; Raijmakers, M.E.J.; Koolschijn, P.C.M.P.; Crone, E.A.

    2014-01-01

    The ability to learn from environmental cues is an important contributor to successful performance in a variety of settings, including school. Despite the progress in unraveling the neural correlates of cognitive control in childhood and adolescence, relatively little is known about how these brain

  18. Learning Errors by Radial Basis Function Neural Networks and Regularization Networks

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman; Vidnerová, Petra

    2009-01-01

    Roč. 1, č. 2 (2009), s. 49-57 ISSN 2005-4262 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : neural network * RBF networks * regularization * learning Subject RIV: IN - Informatics, Computer Science http://www.sersc.org/journals/IJGDC/vol2_no1/5.pdf

  19. Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them

    Science.gov (United States)

    Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.

    2011-01-01

    Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…

  20. Consensus-based distributed cooperative learning from closed-loop neural control systems.

    Science.gov (United States)

    Chen, Weisheng; Hua, Shaoyong; Zhang, Huaguang

    2015-02-01

    In this paper, the neural tracking problem is addressed for a group of uncertain nonlinear systems where the system structures are identical but the reference signals are different. This paper focuses on studying the learning capability of neural networks (NNs) during the control process. First, we propose a novel control scheme, the distributed cooperative learning (DCL) control scheme, which establishes a communication topology among the adaptive laws of NN weights to share their learned knowledge online. It is further proved that if the communication topology is undirected and connected, all estimated weights of the NNs converge to small neighborhoods around their optimal values over a domain consisting of the union of all state orbits. Second, as a corollary, it is shown that the conclusion on deterministic learning still holds in the decentralized adaptive neural control scheme where, however, the estimated weights of the NNs converge to small neighborhoods of the optimal values only along their own state orbits. Thus, the learned controllers obtained by the DCL scheme have better generalization capability than those obtained by the decentralized learning method. A simulation example is provided to verify the effectiveness and advantages of the proposed control schemes.
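    The role of the communication topology can be illustrated with a plain consensus iteration on the nodes' weight estimates: over an undirected, connected graph, repeatedly averaging with neighbors drives every estimate toward a common value. This is a toy sketch of the consensus mechanism only, not the paper's adaptive NN laws.

```python
import numpy as np

# Undirected, connected topology over 4 nodes (a line graph).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Each node starts with its own locally-learned weight vector.
W = np.array([[1.0, 0.0],
              [0.2, 0.5],
              [0.9, 0.3],
              [0.1, 0.8]])

eps = 0.2  # small consensus gain (stable for this graph)
for _ in range(200):
    # W_i <- W_i + eps * sum_j a_ij (W_j - W_i)
    W = W + eps * (A @ W - A.sum(1, keepdims=True) * W)

print(W.round(3))  # every row converges to the average of the initial rows
```

    Connectivity is what makes this work: the iteration is W ← W − eps·L·W with L the graph Laplacian, and a connected graph has a single zero Laplacian eigenvalue, so only the shared average survives.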

  1. Does Curriculum 2005 promote successful learning of elementary algebra?

    Directory of Open Access Journals (Sweden)

    Nelis Vermeulen

    2007-10-01

    This article reviews literature, predating the development of Curriculum 2005, describing possible causes of and solutions for learners’ poor performance in algebra. It then analyses the Revised National Curriculum Statement for Mathematics in an attempt to determine whether it addresses these causes and suggested solutions. This analysis finds that the curriculum to a large extent does address them, but that some are either not addressed, or addressed only implicitly. Consequently, Curriculum 2005 may only partly promote successful learning of elementary algebra.

  2. A Professionalism Curricular Model to Promote Transformative Learning Among Residents.

    Science.gov (United States)

    Foshee, Cecile M; Mehdi, Ali; Bierer, S Beth; Traboulsi, Elias I; Isaacson, J Harry; Spencer, Abby; Calabrese, Cassandra; Burkey, Brian B

    2017-06-01

    Using the frameworks of transformational learning and situated learning theory, we developed a technology-enhanced professionalism curricular model to build a learning community aimed at promoting residents' self-reflection and self-awareness. The RAPR model had 4 components: (1) Recognize: elicit awareness; (2) Appreciate: question assumptions and take multiple perspectives; (3) Practice: try new/changed perspectives; and (4) Reflect: articulate implications of transformed views on future actions. The authors explored the acceptability and practicality of the RAPR model in teaching professionalism in a residency setting, including how residents and faculty perceive the model, how well residents carry out the curricular activities, and whether these activities support transformational learning. A convenience sample of 52 postgraduate years 1 through 3 internal medicine residents participated in the 10-hour curriculum over 4 weeks. A constructivist approach guided the thematic analysis of residents' written reflections, which were a required curricular task. A total of 94% (49 of 52) of residents participated in 2 implementation periods (January and March 2015). Findings suggested that RAPR has the potential to foster professionalism transformation in 3 domains: (1) attitudinal, with participants reporting they viewed professionalism in a more positive light and felt more empathetic toward patients; (2) behavioral, with residents indicating their ability to listen to patients increased; and (3) cognitive, with residents indicating the discussions improved their ability to reflect, and this helped them create meaning from experiences. Our findings suggest that RAPR offers an acceptable and practical strategy to teach professionalism to residents.

  3. DAILY RUNNING PROMOTES SPATIAL LEARNING AND MEMORY IN RATS

    Directory of Open Access Journals (Sweden)

    HojjatAllah Alaei

    2007-12-01

    Previous studies have shown that physical activity improves learning and memory. The present study was performed to determine the effects of acute, chronic, and continuous exercise of different durations on spatial learning and memory, recorded as the latency and length of the swim path in Morris water maze testing over the subsequent 8 days. Four rat groups were included as follows: 1- Group C (controls), which did not exercise; 2- Group A (30 days of treadmill running before and 8 days during the Morris water maze testing period); 3- Group B (30 days of exercise before the Morris water maze testing period only); and 4- Group D (8 days of exercise only during the Morris water maze testing period). The results showed that chronic (30 days) and continuous (during the 8 Morris water maze testing days) treadmill training produced a significant enhancement in spatial learning and memory, indicated by decreases in path length and latency to reach the platform in the Morris water maze test (p < 0.05). The benefits in these tests were lost within three days if the daily running sessions were abandoned. In Group D, with acute treadmill running (8 days of exercise only), the difference from Group A disappeared within one week, and a benefit seemed to be obtained in comparison with the controls without a running program. In conclusion, chronic and daily running exercise promoted learning and memory in the Morris water maze, but the benefits were lost within a few days without daily running sessions in adult rats.

  4. A personal connection: Promoting positive attitudes towards teaching and learning.

    Science.gov (United States)

    Lujan, Heidi L; DiCarlo, Stephen E

    2017-09-01

    Students' attitudes towards teaching and learning must be addressed with the same seriousness and effort as we address content. Establishing a personal connection and addressing our students' basic psychological needs will produce positive attitudes towards teaching and learning and develop life-long learners. It will also promote constructive student-teacher relationships that have a profound influence on our students' approach towards school. To begin this process, consider the major tenets of the Self-Determination Theory. The Self-Determination Theory of human motivation focuses on our students' innate psychological needs and the degree to which an individual's behavior is self-motivated and self-determined. Faculty can satisfy these innate psychological needs by addressing our students' desire for relatedness, competence, and autonomy. Relatedness refers to our students' need to feel connected to others, to be a member of a group, to have a sense of communion, and to develop close relationships with others. Competence is believing our students can succeed, challenging them to do so, and imparting that belief in them. Autonomy involves considering the perspectives of the student and providing relevant information and opportunities for student choice and for initiating and regulating their own behaviors. Establishing a personal connection and addressing our students' basic psychological needs will improve our teaching, inspire and engage our students, and promote positive attitudes towards teaching and learning while reducing competition and increasing compassion. These are important goals because unless students are inspired and motivated and have positive attitudes towards teaching and learning, our efforts will fail to meet their full potential. Anat Sci Educ 10: 503-507. © 2017 American Association of Anatomists.

  5. Learning Microbiology Through Cooperation: Designing Cooperative Learning Activities that Promote Interdependence, Interaction, and Accountability

    Directory of Open Access Journals (Sweden)

    Janine E. Trempy

    2009-12-01

    A microbiology course and its corresponding learning activities have been structured according to the Cooperative Learning Model. This course, The World According to Microbes, integrates science, math, engineering, and technology (SMET) majors and non-SMET majors into teams of students charged with problem-solving activities that are microbial in origin. In this study we describe the development of learning activities that utilize key components of Cooperative Learning: positive interdependence, promotive interaction, individual accountability, teamwork skills, and group processing. Assessments and evaluations over an 8-year period demonstrate high retention of key concepts in microbiology and high student satisfaction with the course.

  6. Vicarious Neural Processing of Outcomes during Observational Learning

    NARCIS (Netherlands)

    Monfardini, Elisabetta; Gazzola, Valeria; Boussaoud, Driss; Brovelli, Andrea; Keysers, Christian; Wicker, Bruno

    2013-01-01

    Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on

  7. Learning Orthographic Structure with Sequential Generative Neural Networks

    Science.gov (United States)

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-01-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in…

  8. A Newton-type neural network learning algorithm

    International Nuclear Information System (INIS)

    Ivanov, V.V.; Puzynin, I.V.; Purehvdorzh, B.

    1993-01-01

    First- and second-order learning methods for feed-forward multilayer networks are considered. A Newton-type algorithm is proposed and compared with the common back-propagation algorithm. It is shown that the proposed algorithm provides better learning quality. Some recommendations for their usage are given. 11 refs.; 1 fig.; 1 tab
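    The appeal of a second-order method is easiest to see on a quadratic loss, where a single Newton step w ← w − H⁻¹∇L lands exactly on the minimum, while gradient descent needs many iterations. A minimal illustration on a least-squares problem (an illustrative sketch, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
# Squared-error loss L(w) = 0.5 * ||X w - y||^2
grad = X.T @ (X @ w - y)          # gradient
H = X.T @ X                       # Hessian (constant for a quadratic loss)
w = w - np.linalg.solve(H, grad)  # Newton step: w <- w - H^{-1} grad

print(np.abs(w - w_true).max())  # ~0: one step solves the quadratic
```

    For non-quadratic losses the Hessian changes at every step, which is where the cost of second-order methods, and the trade-offs the abstract alludes to, come from.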

  9. Time to rethink the neural mechanisms of learning and memory.

    Science.gov (United States)

    Gallistel, Charles R; Balsam, Peter D

    2014-02-01

    Most studies in the neurobiology of learning assume that the underlying learning process is a pairing-dependent change in synaptic strength that requires repeated experience of events presented in close temporal contiguity. However, much learning is rapid and does not depend on temporal contiguity, which has never been precisely defined. These points are well illustrated by studies showing that the temporal relations between events are rapidly learned, even over long delays, and that this knowledge governs the form and timing of behavior. The speed with which anticipatory responses emerge in conditioning paradigms is determined by the information that cues provide about the timing of rewards. The challenge for understanding the neurobiology of learning is to understand the mechanisms in the nervous system that encode information from even a single experience, the nature of the memory mechanisms that can encode quantities such as time, and how the brain can flexibly perform computations based on this information. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Parallelization of learning problems by artificial neural networks. Application in external radiotherapy

    International Nuclear Information System (INIS)

    Sauget, M.

    2007-12-01

    This research concerns the application of neural networks in the external radiotherapy domain. The goal is to elaborate a new system for evaluating radiation dose distributions in heterogeneous environments. The final objective of this work is to build a complete tool kit to evaluate the optimal treatment planning. My first research point is the conception of an incremental learning algorithm. The interest of this work is to combine different optimizations specialized in function interpolation and to propose a new algorithm allowing the neural network architecture to change during the learning phase. This algorithm minimises the final size of the neural network while keeping good accuracy. The second part of my research is to parallelize the previous incremental learning algorithm. The goal of that work is to increase the speed of the learning step as well as the size of the learned dataset needed in a clinical case. For that, our incremental learning algorithm presents an original data decomposition with overlapping, together with a fault-tolerance mechanism. My last research point is a fast and accurate algorithm computing the radiation dose deposit in any heterogeneous environment. At present, the existing solutions are not optimal: the fast solutions are not accurate and do not give an optimal treatment planning, while the accurate solutions are far too slow to be used in a clinical context. Our algorithm answers this problem by bringing both rapidity and accuracy. The concept is to use an adequately trained neural network together with a mechanism taking into account environment changes. The advantage of this algorithm is that it avoids the use of a complex physical code while keeping good accuracy and reasonable computation times. (author)

  11. Inhibition of Sirt1 promotes neural progenitors toward motoneuron differentiation from human embryonic stem cells

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yun; Wang, Jing [Department of Neurology, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing 100191 (China); Clinical Stem Cell Center, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing 100191 (China); Chen, Guian [Clinical Stem Cell Center, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing 100191 (China); Reproductive Medical Center, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing 100191 (China); Fan, Dongsheng, E-mail: dsfan@yahoo.cn [Department of Neurology, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing 100191 (China); Clinical Stem Cell Center, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing 100191 (China); Deng, Min, E-mail: dengmin1706@yahoo.com.cn [Department of Neurology, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing 100191 (China); Clinical Stem Cell Center, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing 100191 (China)

    2011-01-14

    Research highlights: → Nicotinamide inhibits Sirt1. → MASH1 and Ngn2 activation. → Increased expression of HB9. → Motoneuron formation increases significantly. -- Abstract: Several protocols direct human embryonic stem cells (hESCs) toward differentiation into functional motoneurons, but the efficiency of motoneuron generation varies based on the human ESC line used. We aimed to develop a novel protocol to increase the formation of motoneurons from human ESCs. In this study, we tested a nuclear histone deacetylase protein, Sirt1, to promote neural precursor cell (NPC) development during differentiation of human ESCs into motoneurons. A specific inhibitor of Sirt1, nicotinamide, dramatically increased motoneuron formation. We found that about 60% of the cells from the total NPCs expressed HB9 and βIII-tubulin, commonly used motoneuronal markers found in neurons derived from ESCs following nicotinamide treatment. Motoneurons derived from ESCs expressed choline acetyltransferase (ChAT), a positive marker of mature motoneurons. Moreover, we also examined the transcript levels of Mash1, Ngn2, and HB9 mRNA in the differentiated NPCs treated with the Sirt1 activator resveratrol (50 μM) or inhibitor nicotinamide (100 μM). The levels of Mash1, Ngn2, and HB9 mRNA were significantly increased after nicotinamide treatment compared with control groups, which used the traditional protocol. These results suggested that increasing Mash1 and Ngn2 levels by inhibiting Sirt1 could elevate HB9 expression, which promotes motoneuron differentiation. This study provides an alternative method for the production of transplantable motoneurons, a key requirement in the development of hESC-based cell therapy in motoneuron disease.

  12. Inhibition of Sirt1 promotes neural progenitors toward motoneuron differentiation from human embryonic stem cells

    International Nuclear Information System (INIS)

    Zhang, Yun; Wang, Jing; Chen, Guian; Fan, Dongsheng; Deng, Min

    2011-01-01

    Research highlights: → Nicotinamide inhibits Sirt1. → MASH1 and Ngn2 activation. → Increased expression of HB9. → Motoneuron formation increases significantly. -- Abstract: Several protocols direct human embryonic stem cells (hESCs) toward differentiation into functional motoneurons, but the efficiency of motoneuron generation varies based on the human ESC line used. We aimed to develop a novel protocol to increase the formation of motoneurons from human ESCs. In this study, we tested a nuclear histone deacetylase protein, Sirt1, to promote neural precursor cell (NPC) development during differentiation of human ESCs into motoneurons. A specific inhibitor of Sirt1, nicotinamide, dramatically increased motoneuron formation. We found that about 60% of the cells from the total NPCs expressed HB9 and βIII-tubulin, commonly used motoneuronal markers found in neurons derived from ESCs following nicotinamide treatment. Motoneurons derived from ESCs expressed choline acetyltransferase (ChAT), a positive marker of mature motoneurons. Moreover, we also examined the transcript levels of Mash1, Ngn2, and HB9 mRNA in the differentiated NPCs treated with the Sirt1 activator resveratrol (50 μM) or inhibitor nicotinamide (100 μM). The levels of Mash1, Ngn2, and HB9 mRNA were significantly increased after nicotinamide treatment compared with control groups, which used the traditional protocol. These results suggested that increasing Mash1 and Ngn2 levels by inhibiting Sirt1 could elevate HB9 expression, which promotes motoneuron differentiation. This study provides an alternative method for the production of transplantable motoneurons, a key requirement in the development of hESC-based cell therapy in motoneuron disease.

  13. Polysaccharides from Ganoderma lucidum Promote Cognitive Function and Neural Progenitor Proliferation in Mouse Model of Alzheimer's Disease.

    Science.gov (United States)

    Huang, Shichao; Mao, Jianxin; Ding, Kan; Zhou, Yue; Zeng, Xianglu; Yang, Wenjuan; Wang, Peipei; Zhao, Cun; Yao, Jian; Xia, Peng; Pei, Gang

    2017-01-10

    Promoting neurogenesis is a promising strategy for the treatment of the cognitive impairment associated with Alzheimer's disease (AD). Ganoderma lucidum is a medicinal mushroom revered in East Asia for its health-promoting benefits. Here, we found that oral administration of the polysaccharides and water extract from G. lucidum promoted neural progenitor cell (NPC) proliferation to enhance neurogenesis and alleviated cognitive deficits in transgenic AD mice. G. lucidum polysaccharides (GLP) also promoted self-renewal of NPCs in cell culture. Further mechanistic study revealed that GLP potentiated activation of fibroblast growth factor receptor 1 (FGFR1) and the downstream extracellular signal-regulated kinase (ERK) and AKT cascades. Consistently, inhibition of FGFR1 effectively blocked the GLP-promoted NPC proliferation and activation of the downstream cascades. Our findings suggest that GLP could serve as a regenerative therapeutic agent for the treatment of cognitive decline associated with neurodegenerative diseases. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Creating the learning situation to promote student deep learning: Data analysis and application case

    Science.gov (United States)

    Guo, Yuanyuan; Wu, Shaoyan

    2017-05-01

    How to lead students to deeper learning and cultivate innovative engineering talent needs to be studied in higher engineering education. In this study, through survey data analysis and theoretical research, we discuss the correlations among teaching methods, learning motivation, and learning methods. We find that students adopt different motivation orientations according to their perception of teaching methods in the process of engineering education, and this affects their choice of learning methods. As a result, creating situations is critical to leading students to deeper learning. Finally, we analyze the process of creating learning situations in the teaching of «bidding and contract management workshops», in which teachers use student-centered teaching to lead students to deeper study. By studying the factors influencing the deep learning process and building teaching situations for the purpose of promoting deep learning, this study provides a meaningful reference for enhancing students' learning quality, teachers' teaching quality, and the quality of innovative talent.

  15. Dynamic Learning from Adaptive Neural Control of Uncertain Robots with Guaranteed Full-State Tracking Precision

    Directory of Open Access Journals (Sweden)

    Min Wang

    2017-01-01

    A dynamic learning method is developed for an uncertain n-link robot with unknown system dynamics, achieving predefined performance attributes on the link angular position and velocity tracking errors. For a known nonsingular initial robotic condition, performance functions and unconstrained transformation errors are employed to prevent the violation of the full-state tracking error constraints. By combining two independent Lyapunov functions and a radial basis function (RBF) neural network (NN) approximator, a novel and simple adaptive neural control scheme is proposed for the dynamics of the unconstrained transformation errors, which guarantees uniform ultimate boundedness of all the signals in the closed-loop system. In the steady-state control process, the RBF NNs are verified to satisfy the partial persistent excitation (PE) condition. Subsequently, an appropriate state transformation is adopted to achieve accurate convergence of the neural weight estimates. The corresponding experience-based knowledge of the unknown robotic dynamics is stored in NNs with constant neural weight values. Using the stored knowledge, a static neural learning controller is developed to improve the full-state tracking performance. A comparative simulation study on a 2-link robot illustrates the effectiveness of the proposed scheme.
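    The RBF NN approximator at the heart of such schemes represents an unknown function as Ŵᵀφ(x) with Gaussian basis functions φ. The sketch below shows the representation itself, with assumed evenly spaced centers and a common width, and fits the weights offline by least squares rather than by the paper's adaptive laws:

```python
import numpy as np

def rbf_features(x, centers, width):
    # Gaussian RBFs: phi_i(x) = exp(-(x - c_i)^2 / width^2)
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / width ** 2)

centers = np.linspace(-3, 3, 15)  # assumed evenly spaced centers
width = 0.8                       # assumed common width
x = np.linspace(-3, 3, 200)
y = np.sin(x)                     # stand-in for unknown dynamics

Phi = rbf_features(x, centers, width)
W, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # least-squares weight estimate
err = np.max(np.abs(Phi @ W - y))
print(err)  # small uniform error over the covered domain
```

    The locality of the Gaussian bases is why such approximators are only trusted over the region the state actually visits, which is exactly the "state orbit" caveat in adaptive-learning results like the one above.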

  16. Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System

    Science.gov (United States)

    Williams-Hayes, Peggy S.

    2004-01-01

    The NASA F-15 Intelligent Flight Control System project team developed a series of flight control concepts designed to demonstrate neural network-based adaptive controller benefits, with the objective to develop and flight-test control systems using neural network technology to optimize aircraft performance under nominal conditions and stabilize the aircraft under failure conditions. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to baseline aerodynamic derivatives in flight. This open-loop flight test set was performed in preparation for a future phase in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. Flight data examination shows that the addition of flight-identified aerodynamic derivative increments into the simulation improved aircraft pitch handling qualities.

  17. Picasso: A Modular Framework for Visualizing the Learning Process of Neural Network Image Classifiers

    Directory of Open Access Journals (Sweden)

    Ryan Henderson

    2017-09-01

    Picasso is a free open-source (Eclipse Public License) web application written in Python for rendering standard visualizations useful for analyzing convolutional neural networks. Picasso ships with occlusion maps and saliency maps, two visualizations which help reveal issues that evaluation metrics like loss and accuracy might hide: for example, learning a proxy classification task. Picasso works with the Tensorflow deep learning framework, and with Keras (when the model can be loaded into the Tensorflow backend). Picasso can be used with minimal configuration by deep learning researchers and engineers alike across various neural network architectures. Adding new visualizations is simple: the user can specify their visualization code and HTML template separately from the application code.
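    Independently of Picasso's implementation, the occlusion-map idea can be sketched in a few lines: slide a gray patch across the image, re-score each occluded copy, and record the drop in the class score. The classifier below is a stand-in function chosen for illustration (any score function works); nothing here uses Picasso's actual API.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    # Heatmap of score drops when a gray patch covers each location.
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5  # gray patch
            heat[i, j] = base - score_fn(occluded)
    return heat

# Stand-in "classifier": responds to brightness in the top-left corner.
score = lambda img: img[:8, :8].mean()
img = np.zeros((16, 16))
img[:8, :8] = 1.0

heat = occlusion_map(img, score)
# The largest score drops occur where the patch covers the bright corner.
print(np.unravel_index(heat.argmax(), heat.shape))
```

    Large values in the heatmap mark the pixels the score actually depends on; if they sit somewhere unexpected, the model may be exploiting a proxy cue, which is exactly the failure mode the abstract mentions.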

  18. Probabilistic neural network playing and learning Tic-Tac-Toe

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Somol, Petr; Pudil, Pavel

    2005-01-01

    Roč. 26, č. 12 (2005), s. 1866-1873 ISSN 0167-8655 R&D Projects: GA ČR GA402/02/1271; GA ČR GA402/03/1310; GA MŠk 1M0572 Grant - others: Commission EU(XE) FP6-507772 Institutional research plan: CEZ:AV0Z10750506 Keywords : neural networks * distribution mixtures * playing games Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.138, year: 2005

  19. Biologically-inspired On-chip Learning in Pulsed Neural Networks

    DEFF Research Database (Denmark)

    Lehmann, Torsten; Woodburn, Robin

    1999-01-01

    Self-learning chips to implement many popular ANN (artificial neural network) algorithms are very difficult to design. We explain why this is so and say what lessons previous work teaches us in the design of self-learning systems. We offer a contribution to the "biologically-inspired" approach, explaining what we mean by this term and providing an example of a robust, self-learning design that can solve simple classical-conditioning tasks. We give details of the design of individual circuits to perform component functions, which can then be combined into a network to solve the task. We argue...

  20. Three-dimensional neural net for learning visuomotor coordination of a robot arm.

    Science.gov (United States)

    Martinetz, T M; Ritter, H J; Schulten, K J

    1990-01-01

    An extension of T. Kohonen's (1982) self-organizing mapping algorithm together with an error-correction scheme based on the Widrow-Hoff learning rule is applied to develop a learning algorithm for the visuomotor coordination of a simulated robot arm. Learning occurs by a sequence of trial movements without the need for an external teacher. Using input signals from a pair of cameras, the closed robot arm system is able to reduce its positioning error to about 0.3% of the linear dimensions of its work space. This is achieved by choosing the connectivity of a three-dimensional lattice consisting of the units of the neural net.
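    The self-organizing part of such a visuomotor scheme can be sketched in one dimension: Kohonen's rule pulls the best-matching unit and its lattice neighbors toward each input, so an initially bunched net unfolds to cover the input space. A toy 1-D version with a shrinking neighborhood (the paper uses a 3-D lattice driven by camera inputs):

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, steps = 10, 2000
w = rng.uniform(0.4, 0.6, n_units)  # weights start bunched mid-range

for t in range(steps):
    x = rng.uniform(0.0, 1.0)                  # training input
    winner = np.argmin(np.abs(w - x))          # best-matching unit
    dist = np.abs(np.arange(n_units) - winner)
    sigma = 2.0 * (1 - t / steps) + 0.3        # shrinking neighborhood
    h = np.exp(-dist ** 2 / (2 * sigma ** 2))  # neighborhood function
    lr = 0.5 * (1 - t / steps) + 0.01          # decaying learning rate
    w += lr * h * (x - w)                      # Kohonen update

print(np.round(np.sort(w), 2))  # units spread out to cover [0, 1]
```

    The neighborhood term is what distinguishes this from plain competitive learning: neighboring lattice units end up responding to neighboring inputs, which is the topology-preserving property the robot-arm mapping relies on.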

  1. The neural basis of reversal learning: An updated perspective

    Science.gov (United States)

    Izquierdo, Alicia; Brigman, Jonathan L.; Radke, Anna K.; Rudebeck, Peter H.; Holmes, Andrew

    2016-01-01

    Reversal learning paradigms are among the most widely used tests of cognitive flexibility and have been used as assays, across species, for altered cognitive processes in a host of neuropsychiatric conditions. Based on recent studies in humans, non-human primates, and rodents, the notion that reversal learning tasks primarily measure response inhibition has been revised. In this review, we describe how cognitive flexibility is measured by reversal learning and discuss new definitions of the construct validity of the task that are serving as a heuristic to guide future research in this field. We also provide an update on the available evidence implicating certain cortical and subcortical brain regions in the mediation of reversal learning, and an overview of the principal neurotransmitter systems involved. PMID:26979052

  2. Network Enabled - Unresolved Residual Analysis and Learning (NEURAL)

    Science.gov (United States)

    Temple, D.; Poole, M.; Camp, M.

    Since the advent of modern computational capacity, machine learning algorithms and techniques have served as a method through which to solve numerous challenging problems. However, for machine learning methods to be effective and robust, sufficient data sets must be available; specifically, in the space domain, these are generally difficult to acquire. Rapidly evolving commercial space-situational awareness companies boast the capability to collect hundreds of thousands of nightly observations of resident space objects (RSOs) using a ground-based optical sensor network. This provides the ability to maintain custody of and characterize thousands of objects persistently. With this information available, novel deep learning techniques can be implemented. The technique discussed in this paper utilizes deep learning to make distinctions between nightly data collects with and without maneuvers. Implementation of these techniques will allow the data collected from optical ground-based networks to enable well-informed and timely space domain decision making.

  3. "FORCE" learning in recurrent neural networks as data assimilation

    Science.gov (United States)

    Duane, Gregory S.

    2017-12-01

    It is shown that the "FORCE" algorithm for learning in arbitrarily connected networks of simple neuronal units can be cast as a Kalman Filter, with a particular state-dependent form for the background error covariances. The resulting interpretation has implications for initialization of the learning algorithm, leads to an extension to include interactions between the weight updates for different neurons, and can represent relationships within groups of multiple target output signals.
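
    The Kalman-filter reading becomes concrete in the recursive least squares (RLS) update at the heart of FORCE, where the matrix P plays the role of the state-dependent background error covariance. A minimal sketch (not the paper's implementation; the network size, gain, feedback weights, and sine-wave target are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 100, 2000, 0.01
P = np.eye(N)                          # inverse correlation matrix ("error covariance")
w = np.zeros(N)                        # trained readout weights
J = rng.normal(0, 1.5 / np.sqrt(N), (N, N))   # fixed random recurrent weights
w_fb = rng.uniform(-1, 1, N)           # fixed output-feedback weights
x = rng.normal(0, 0.5, N)              # network state

target = np.sin(2 * np.pi * dt * np.arange(T))   # target output signal
errors = []
for t in range(T):
    r = np.tanh(x)                     # firing rates
    z = w @ r                          # linear readout
    x += dt * (-x + J @ r + w_fb * z)  # leaky dynamics with output feedback
    e = z - target[t]                  # error before the weight update
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)            # Kalman-like gain
    P -= np.outer(k, Pr)               # covariance update
    w -= e * k                         # RLS update of the readout

    errors.append(abs(e))

print(np.mean(errors[-200:]))          # small once the readout has locked on
```

    The extension proposed in the abstract would replace the scalar-output P with covariances coupling the weight updates of different readout units.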

  4. Learning Based on CC1 and CC4 Neural Networks

    OpenAIRE

    Kak, Subhash

    2017-01-01

    We propose that a general learning system should have three kinds of agents corresponding to sensory, short-term, and long-term memory that implicitly will facilitate context-free and context-sensitive aspects of learning. These three agents perform mutually complementary functions that capture aspects of the human cognition system. We investigate the use of CC1 and CC4 networks as models of short-term and sensory memory.

  5. Explaining neural signals in human visual cortex with an associative learning model.

    Science.gov (United States)

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.
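
    The summation account can be illustrated with a toy model-based regression: simulate a population response as a weighted sum of hypothetical expectation and surprise regressors, then recover the weights by least squares (purely illustrative; the trial probabilities, weights, and noise level are invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Hypothetical design: a face appears on each trial with prior probability p.
p = rng.uniform(0.1, 0.9, n_trials)
face = (rng.uniform(size=n_trials) < p).astype(float)

expectation = p                        # face-expectation regressor
surprise = face * (1.0 - p)            # prediction error when a face appears

# Simulated FFA-like response: weighted sum of the two signals plus noise
beta = np.array([0.6, 1.4])
response = beta[0] * expectation + beta[1] * surprise + rng.normal(0, 0.05, n_trials)

# Recover the weights by least squares, as in a model-based fMRI analysis
X = np.column_stack([expectation, surprise])
beta_hat, *_ = np.linalg.lstsq(X, response, rcond=None)
print(beta_hat)                        # close to the generating weights [0.6, 1.4]
```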

  6. Learning Perfectly Secure Cryptography to Protect Communications with Adversarial Neural Cryptography

    Directory of Open Access Journals (Sweden)

    Murilo Coutinho

    2018-04-01

    Full Text Available Research in Artificial Intelligence (AI) has achieved many important breakthroughs, especially in recent years. In some cases, AI learns alone from scratch and performs human tasks faster and better than humans. With the recent advances in AI, it is natural to wonder whether Artificial Neural Networks will be used to successfully create or break cryptographic algorithms. A bibliographic review shows that the main approaches to this problem have relied on complex Neural Networks, but without understanding or proving the security of the generated model. This paper presents an analysis of the security of cryptographic algorithms generated by a new technique called Adversarial Neural Cryptography (ANC). Using the proposed network, we show limitations and directions to improve the current approach of ANC. Training the proposed Artificial Neural Network with the improved model of ANC, we show that artificially intelligent agents can learn the unbreakable One-Time Pad (OTP) algorithm, without human knowledge, to communicate securely through an insecure communication channel. This paper shows in which conditions an AI agent can learn a secure encryption scheme. However, it also shows that, without a stronger adversary, it is more likely to obtain an insecure one.

  7. Learning Perfectly Secure Cryptography to Protect Communications with Adversarial Neural Cryptography.

    Science.gov (United States)

    Coutinho, Murilo; de Oliveira Albuquerque, Robson; Borges, Fábio; García Villalba, Luis Javier; Kim, Tai-Hoon

    2018-04-24

    Research in Artificial Intelligence (AI) has achieved many important breakthroughs, especially in recent years. In some cases, AI learns alone from scratch and performs human tasks faster and better than humans. With the recent advances in AI, it is natural to wonder whether Artificial Neural Networks will be used to successfully create or break cryptographic algorithms. A bibliographic review shows that the main approaches to this problem have relied on complex Neural Networks, but without understanding or proving the security of the generated model. This paper presents an analysis of the security of cryptographic algorithms generated by a new technique called Adversarial Neural Cryptography (ANC). Using the proposed network, we show limitations and directions to improve the current approach of ANC. Training the proposed Artificial Neural Network with the improved model of ANC, we show that artificially intelligent agents can learn the unbreakable One-Time Pad (OTP) algorithm, without human knowledge, to communicate securely through an insecure communication channel. This paper shows in which conditions an AI agent can learn a secure encryption scheme. However, it also shows that, without a stronger adversary, it is more likely to obtain an insecure one.
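
    For reference, the OTP the agents learn to emulate is a one-line XOR construction; a minimal sketch of the standard textbook scheme (not the ANC network itself):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR the message with a uniformly random key of the same length."""
    assert len(key) == len(plaintext), "OTP key must match message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))    # one-time, never reused
ciphertext = otp_encrypt(message, key)
recovered = otp_encrypt(ciphertext, key)   # XOR is its own inverse
print(recovered)  # b'attack at dawn'
```

    Perfect secrecy holds only because the key is as long as the message, uniformly random, and used once; reusing the key is exactly the kind of insecure scheme a weak adversary fails to punish.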

  8. Learning Spatiotemporally Encoded Pattern Transformations in Structured Spiking Neural Networks.

    Science.gov (United States)

    Gardner, Brian; Sporea, Ioana; Grüning, André

    2015-12-01

    Information encoding in the nervous system is supported through the precise spike timings of neurons; however, an understanding of the underlying processes by which such representations are formed in the first place remains an open question. Here we examine how multilayered networks of spiking neurons can learn to encode for input patterns using a fully temporal coding scheme. To this end, we introduce a new supervised learning rule, MultilayerSpiker, that can train spiking networks containing hidden layer neurons to perform transformations between spatiotemporal input and output spike patterns. The performance of the proposed learning rule is demonstrated in terms of the number of pattern mappings it can learn, the complexity of network structures it can be used on, and its classification accuracy when using multispike-based encodings. In particular, the learning rule displays robustness against input noise and can generalize well on an example data set. Our approach contributes to both a systematic understanding of how computations might take place in the nervous system and a learning rule that displays strong technical capability.

  9. Learning quadratic receptive fields from neural responses to natural stimuli.

    Science.gov (United States)

    Rajan, Kanaka; Marre, Olivier; Tkačik, Gašper

    2013-07-01

    Models of neural responses to stimuli with complex spatiotemporal correlation structure often assume that neurons are selective for only a small number of linear projections of a potentially high-dimensional input. In this review, we explore recent modeling approaches where the neural response depends on the quadratic form of the input rather than on its linear projection, that is, the neuron is sensitive to the local covariance structure of the signal preceding the spike. To infer this quadratic dependence in the presence of arbitrary (e.g., naturalistic) stimulus distribution, we review several inference methods, focusing in particular on two information theory-based approaches (maximization of stimulus energy and of noise entropy) and two likelihood-based approaches (Bayesian spike-triggered covariance and extensions of generalized linear models). We analyze the formal relationship between the likelihood-based and information-based approaches to demonstrate how they lead to consistent inference. We demonstrate the practical feasibility of these procedures by using model neurons responding to a flickering variance stimulus.
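
    The spike-triggered covariance idea can be sketched for the easiest case of a white Gaussian stimulus, where a quadratic filter appears as the eigenvector of the STC matrix whose eigenvalue departs from the raw stimulus variance (a toy energy-model neuron; the filter, nonlinearity, and sample sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
D, T = 20, 100000
stimuli = rng.normal(size=(T, D))            # white Gaussian stimulus ensemble

v = np.zeros(D)
v[5] = 1.0                                   # hypothetical quadratic (energy) filter
proj = stimuli @ v
p_spike = 1.0 / (1.0 + np.exp(-(proj**2 - 2.0)))   # spiking driven by local variance
spikes = rng.uniform(size=T) < p_spike

sta = stimuli[spikes].mean(axis=0)           # spike-triggered average (near zero here)
centered = stimuli[spikes] - sta
stc = centered.T @ centered / spikes.sum()   # spike-triggered covariance matrix

# The filter shows up as the eigenvector whose eigenvalue departs most from
# the raw stimulus variance (1 for this white ensemble).
eigvals, eigvecs = np.linalg.eigh(stc)
recovered = eigvecs[:, np.argmax(np.abs(eigvals - 1.0))]
print(abs(recovered @ v))                    # alignment with the true filter, near 1
```

    The likelihood-based methods reviewed in the abstract exist precisely because this eigenvalue trick breaks down for correlated, naturalistic stimulus distributions.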

  10. Possible promotion of neuronal differentiation in fetal rat brain neural progenitor cells after sustained exposure to static magnetism.

    Science.gov (United States)

    Nakamichi, Noritaka; Ishioka, Yukichi; Hirai, Takao; Ozawa, Shusuke; Tachibana, Masaki; Nakamura, Nobuhiro; Takarada, Takeshi; Yoneda, Yukio

    2009-08-15

    We have previously shown significant potentiation of Ca(2+) influx mediated by N-methyl-D-aspartate receptors, along with decreased microtubule-associated protein-2 (MAP2) expression, in hippocampal neurons cultured under static magnetism without cell death. In this study, we investigated the effects of static magnetism on the functionality of neural progenitor cells endowed with the capacity to proliferate for self-replication and differentiate into neuronal, astroglial, and oligodendroglial lineages. Neural progenitor cells were isolated from embryonic rat neocortex and hippocampus, followed by culture under static magnetism at 100 mT and subsequent determination of the number of cells immunoreactive for a marker protein of particular progeny lineages. Static magnetism not only significantly decreased proliferation of neural progenitor cells without affecting cell viability, but also promoted differentiation into cells immunoreactive for MAP2 with a concomitant decrease in that for an astroglial marker, irrespective of the presence of differentiation inducers. In neural progenitors cultured under static magnetism, a significant increase was seen in mRNA expression of several activator-type proneural genes, such as Mash1, Math1, and Math3, together with decreased mRNA expression of the repressor type Hes5. These results suggest that sustained static magnetism could suppress proliferation for self-renewal and facilitate differentiation into neurons through promoted expression of activator-type proneural genes by progenitor cells in fetal rat brain.

  11. A Self-Organizing Incremental Neural Network based on local distribution learning.

    Science.gov (United States)

    Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi

    2016-12-01

    In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically and the number of nodes will not grow unlimitedly. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Dissociable neural representations of reinforcement and belief prediction errors underlie strategic learning.

    Science.gov (United States)

    Zhu, Lusha; Mathewson, Kyle E; Hsu, Ming

    2012-01-31

    Decision-making in the presence of other competitive intelligent agents is fundamental for social and economic behavior. Such decisions require agents to behave strategically, where in addition to learning about the rewards and punishments available in the environment, they also need to anticipate and respond to actions of others competing for the same rewards. However, whereas we know much about strategic learning at both theoretical and behavioral levels, we know relatively little about the underlying neural mechanisms. Here, we show using a multi-strategy competitive learning paradigm that strategic choices can be characterized by extending the reinforcement learning (RL) framework to incorporate agents' beliefs about the actions of their opponents. Furthermore, using this characterization to generate putative internal values, we used model-based functional magnetic resonance imaging to investigate neural computations underlying strategic learning. We found that the distinct notions of prediction errors derived from our computational model are processed in a partially overlapping but distinct set of brain regions. Specifically, we found that the RL prediction error was correlated with activity in the ventral striatum. In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning. These results suggest a model of strategic behavior where learning arises from interaction of dissociable reinforcement and belief-based inputs.
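
    The two error signals can be sketched side by side in a toy mismatching game: a reinforcement prediction error compares the obtained reward against the chosen action's value, while a belief prediction error compares the opponent's observed action against its predicted probability (hypothetical parameters and opponent; not the study's task or fitted model):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 100
alpha = 0.2                              # shared learning rate (illustrative)

Q = np.zeros(2)                          # action values (reinforcement learning)
belief = np.array([0.5, 0.5])            # belief over opponent's two actions

rl_pe, belief_pe = [], []
for _ in range(n_trials):
    action = int(np.argmax(Q + rng.normal(0, 0.1, 2)))  # noisy greedy choice
    opp = int(rng.integers(2))                          # opponent plays randomly
    reward = 1.0 if action != opp else 0.0              # win by mismatching

    # Reinforcement prediction error: outcome vs. expected value of chosen action
    delta = reward - Q[action]
    Q[action] += alpha * delta
    rl_pe.append(delta)

    # Belief prediction error: observed opponent action vs. predicted probability
    observed = np.eye(2)[opp]
    b_delta = observed - belief
    belief += alpha * b_delta
    belief_pe.append(b_delta[opp])

print(np.mean(np.abs(rl_pe)), np.mean(np.abs(belief_pe)))
```

    In the study's model-based analysis, trial-by-trial regressors like `delta` and `b_delta` are what get correlated with BOLD activity in ventral striatum and rACC, respectively.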

  13. The conditions that promote fear learning: prediction error and Pavlovian fear conditioning.

    Science.gov (United States)

    Li, Susan Shi Yuan; McNally, Gavan P

    2014-02-01

    A key insight of associative learning theory is that learning depends on the actions of prediction error: a discrepancy between the actual and expected outcomes of a conditioning trial. When positive, such error causes increments in associative strength and, when negative, such error causes decrements in associative strength. Prediction error can act directly on fear learning by determining the effectiveness of the aversive unconditioned stimulus or indirectly by determining the effectiveness, or associability, of the conditioned stimulus. Evidence from a variety of experimental preparations in human and non-human animals suggest that discrete neural circuits code for these actions of prediction error during fear learning. Here we review the circuits and brain regions contributing to the neural coding of prediction error during fear learning and highlight areas of research (safety learning, extinction, and reconsolidation) that may profit from this approach to understanding learning. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
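
    The increment/decrement logic described here is the Rescorla-Wagner delta rule; a minimal sketch (the learning rate alpha and US asymptote lambda are illustrative values):

```python
def rescorla_wagner(trials, V=0.0, alpha=0.3, lam=1.0):
    """Track associative strength V across trials; learning is driven entirely
    by the prediction error (actual minus expected outcome)."""
    history = []
    for us in trials:
        error = (lam if us else 0.0) - V   # prediction error on this trial
        V += alpha * error                 # positive error increments, negative decrements
        history.append(V)
    return history

acq = rescorla_wagner([1] * 20)            # acquisition: CS-US pairings, V climbs toward lambda
ext = rescorla_wagner([0] * 10, V=acq[-1]) # extinction: negative errors drive V back down
print(round(acq[-1], 3), round(ext[-1], 3))
```

    The associability-based accounts mentioned in the abstract (e.g., Pearce-Hall) instead let the error modulate alpha itself, rather than acting directly on V.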

  14. Neural classifiers for learning higher-order correlations

    International Nuclear Information System (INIS)

    Gueler, M.

    1999-01-01

    Studies by various authors suggest that higher-order networks can be more powerful and biologically more plausible with respect to the more traditional multilayer networks. These architectures make explicit use of nonlinear interactions between input variables in the form of higher-order units or product units. If it is known a priori that the problem to be implemented possesses a given set of invariances like in the translation, rotation, and scale invariant recognition problems, those invariances can be encoded, thus eliminating all higher-order terms which are incompatible with the invariances. In general, however, it is a serious set-back that the complexity of learning increases exponentially with the size of inputs. This paper reviews higher-order networks and introduces an implicit representation in which learning complexity is mainly decided by the number of higher-order terms to be learned and increases only linearly with the input size.

  15. Neural Classifiers for Learning Higher-Order Correlations

    Science.gov (United States)

    Güler, Marifi

    1999-01-01

    Studies by various authors suggest that higher-order networks can be more powerful and are biologically more plausible with respect to the more traditional multilayer networks. These architectures make explicit use of nonlinear interactions between input variables in the form of higher-order units or product units. If it is known a priori that the problem to be implemented possesses a given set of invariances like in the translation, rotation, and scale invariant pattern recognition problems, those invariances can be encoded, thus eliminating all higher-order terms which are incompatible with the invariances. In general, however, it is a serious set-back that the complexity of learning increases exponentially with the size of inputs. This paper reviews higher-order networks and introduces an implicit representation in which learning complexity is mainly decided by the number of higher-order terms to be learned and increases only linearly with the input size.
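
    A second-order unit makes both the benefit and the cost of explicit higher-order terms concrete: with n inputs there are n(n-1)/2 pairwise products (and exponentially many terms at higher orders), but XOR, unsolvable by a first-order perceptron, becomes linearly separable in the augmented space. A toy illustration (not the paper's implicit representation):

```python
import numpy as np

def higher_order_features(x):
    """Augment inputs with all pairwise product terms (a second-order unit)."""
    pairs = [x[i] * x[j] for i in range(len(x)) for j in range(i + 1, len(x))]
    return np.concatenate([[1.0], x, pairs])   # bias + first- + second-order terms

# XOR is not linearly separable in x, but it is in the augmented space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

Phi = np.array([higher_order_features(x) for x in X])
w = np.zeros(Phi.shape[1])
for _ in range(150):                           # simple perceptron learning
    for phi, t in zip(Phi, y):
        pred = 1.0 if phi @ w > 0 else 0.0
        w += (t - pred) * phi                  # update only on mistakes

preds = (Phi @ w > 0).astype(int)
print(preds)   # [0 1 1 0]
```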

  16. Bridging Cognitive And Neural Aspects Of Classroom Learning

    Science.gov (United States)

    Posner, Michael I.

    2009-11-01

    A major achievement of the first twenty years of neuroimaging is to reveal the brain networks that underlie fundamental aspects of attention, memory and expertise. We examine some principles underlying the activation of these networks. These networks represent key constraints for the design of teaching. Individual differences in these networks reflect a combination of genes and experiences. While acquiring expertise is easier for some than others, the importance of effort in its acquisition is a basic principle. Networks are strengthened through exercise, but maintaining interest that produces sustained attention is key to making exercises successful. The state of the brain prior to learning may also represent an important constraint on successful learning, and some interventions designed to investigate the role of attention state in learning are discussed. Teaching remains a creative act between instructor and student, but an understanding of brain mechanisms might improve opportunity for success for both participants.

  17. Electroacupuncture Promotes Proliferation of Amplifying Neural Progenitors and Preserves Quiescent Neural Progenitors from Apoptosis to Alleviate Depressive-Like and Anxiety-Like Behaviours

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2014-01-01

    Full Text Available The present study was designed to investigate the effects of electroacupuncture (EA) on depressive-like and anxiety-like behaviours and neural progenitors in the hippocampal dentate gyrus (DG) in a chronic unpredictable stress (CUS) rat model of depression. After being exposed to a CUS procedure for 2 weeks, rats were subjected to EA treatment, which was performed on acupoints Du-20 (Bai-Hui) and GB-34 (Yang-Ling-Quan), once every other day for 15 consecutive days (including 8 treatments, with each treatment lasting for 30 min). The behavioural tests (i.e., forced swimming test, elevated plus-maze test, and open-field entries test) revealed that EA alleviated the depressive-like and anxiety-like behaviours of the stressed rats. Immunohistochemical results showed that proliferative (BrdU-positive) cells in the EA group were significantly greater in number than in the Model group. Further, the results showed that EA significantly promoted the proliferation of amplifying neural progenitors (ANPs) and simultaneously inhibited the apoptosis of quiescent neural progenitors (QNPs). In summary, the mechanism underlying the antidepressant-like effects of EA is associated with enhancement of ANP proliferation and protection of QNPs from apoptosis.

  18. Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model

    Science.gov (United States)

    Panda, Priyadarshini; Srinivasa, Narayan

    2018-01-01

    A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature action/movements enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models. PMID:29551962

  19. Have we met before? Neural correlates of emotional learning in women with social phobia.

    Science.gov (United States)

    Laeger, Inga; Keuper, Kati; Heitmann, Carina; Kugel, Harald; Dobel, Christian; Eden, Annuschka; Arolt, Volker; Zwitserlood, Pienie; Dannlowski, Udo; Zwanzger, Peter

    2014-05-01

    Altered memory processes are thought to be a key mechanism in the etiology of anxiety disorders, but little is known about the neural correlates of fear learning and memory biases in patients with social phobia. The present study therefore examined whether patients with social phobia exhibit different patterns of neural activation when confronted with recently acquired emotional stimuli. Patients with social phobia and a group of healthy controls learned to associate pseudonames with pictures of persons displaying either a fearful or a neutral expression. The next day, participants read the pseudonames in the magnetic resonance imaging scanner. Afterwards, 2 memory tests were carried out. We enrolled 21 patients and 21 controls in our study. There were no group differences for learning performance, and results of the memory tests were mixed. On a neural level, patients showed weaker amygdala activation than controls for the contrast of names previously associated with fearful versus neutral faces. Social phobia severity was negatively related to amygdala activation. Moreover, a detailed psychophysiological interaction analysis revealed an inverse correlation between disorder severity and frontolimbic connectivity for the emotional > neutral pseudonames contrast. Our sample included only women. Our results support the theory of a disturbed cortico-limbic interplay, even for recently learned emotional stimuli. We discuss the findings with regard to the vigilance-avoidance theory and contrast them to results indicating an oversensitive limbic system in patients with social phobia.

  20. Learning representations for the early detection of sepsis with deep neural networks.

    Science.gov (United States)

    Kam, Hye Jin; Kim, Ha Young

    2017-10-01

    Sepsis is one of the leading causes of death in intensive care unit patients. Early detection of sepsis is vital because mortality increases as the sepsis stage worsens. This study aimed to develop detection models for the early stage of sepsis using deep learning methodologies, and to compare the feasibility and performance of the new deep learning methodology with those of the regression method with conventional temporal feature extraction. Study group selection adhered to the InSight model. The results of the deep learning-based models and the InSight model were compared. With deep feedforward networks, the area under the ROC curve (AUC) of the models were 0.887 and 0.915 for the InSight and the new feature sets, respectively. For the model with the combined feature set, the AUC was the same as that of the basic feature set (0.915). For the long short-term memory model, only the basic feature set was applied and the AUC improved to 0.929 compared with the existing 0.887 of the InSight model. The contributions of this paper can be summarized in three ways: (i) improved performance without feature extraction using domain knowledge, (ii) verification of the feature extraction capability of deep neural networks through comparison with reference features, and (iii) improved performance over feedforward neural networks by using long short-term memory, a neural network architecture that can learn sequential patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Neural plasticity underlying visual perceptual learning in aging.

    Science.gov (United States)

    Mishra, Jyoti; Rolle, Camarin; Gazzaley, Adam

    2015-07-01

    Healthy aging is associated with a decline in basic perceptual abilities, as well as higher-level cognitive functions such as working memory. In a recent perceptual training study using moving sweeps of Gabor stimuli, Berry et al. (2010) observed that older adults significantly improved discrimination abilities on the most challenging perceptual tasks that presented paired sweeps at rapid rates of 5 and 10 Hz. Berry et al. further showed that this perceptual training engendered transfer-of-benefit to an untrained working memory task. Here, we investigated the neural underpinnings of the improvements in these perceptual tasks, as assessed by event-related potential (ERP) recordings. Early visual ERP components time-locked to stimulus onset were compared pre- and post-training, as well as relative to a no-contact control group. The visual N1 and N2 components were significantly enhanced after training, and the N1 change correlated with improvements in perceptual discrimination on the task. Further, the change observed for the N1 and N2 was associated with the rapidity of the perceptual challenge; the visual N1 (120-150 ms) was enhanced post-training for 10 Hz sweep pairs, while the N2 (240-280 ms) was enhanced for the 5 Hz sweep pairs. We speculate that these observed post-training neural enhancements reflect improvements by older adults in the allocation of attention that is required to accurately dissociate perceptually overlapping stimuli when presented in rapid sequence. This article is part of a Special Issue entitled SI: Memory. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. From phonemes to images : levels of representation in a recurrent neural model of visually-grounded language learning

    NARCIS (Netherlands)

    Gelderloos, L.J.; Chrupala, Grzegorz

    2016-01-01

    We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover

  3. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection.

    Science.gov (United States)

    Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho

    2017-03-01

    Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a big dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
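
    The core ELM step, random hidden weights plus a one-shot pseudo-inverse solve for the output weights, can be sketched on a toy regression problem standing in for the CNN's output layer (illustrative sizes and target function; not the paper's stacked architecture or backpropagation modification):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regression task standing in for features feeding the output layer
X = rng.uniform(-1, 1, (500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2          # target function

n_hidden = 200
W = rng.normal(size=(2, n_hidden))              # random input weights, never trained
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                          # hidden-layer activations

# Output weights computed in a single step via the Moore-Penrose pseudo-inverse,
# instead of iterative gradient descent
beta = np.linalg.pinv(H) @ y

pred = np.tanh(X @ W + b) @ beta
mse = np.mean((pred - y) ** 2)
print(mse)                                      # small training error
```

    Because only `beta` is solved for, training cost is one least-squares problem, which is the source of the dramatic speedup claimed over conventional CNN training.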

  4. Deep neural networks for direct, featureless learning through observation: The case of two-dimensional spin models

    Science.gov (United States)

    Mills, Kyle; Tamblyn, Isaac

    2018-03-01

    We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4×4 Ising model. Using its success at this task, we motivate the study of the larger 8×8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after only seeing a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with equivalent accuracy to the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent; it is able to make predictions with a high degree of accuracy, and do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by looking at the weights learned in a simplified demonstration.
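
    The learning target itself is simple to state; a sketch of the periodic nearest-neighbor Ising energy the network is trained to predict (the Hamiltonian is standard, with coupling J = 1 and zero field assumed):

```python
import numpy as np

def ising_energy(spins):
    """Nearest-neighbor Ising energy with periodic boundaries: E = -sum_<ij> s_i s_j.
    Rolling the lattice once along each axis visits every bond exactly once."""
    return -(np.sum(spins * np.roll(spins, 1, axis=0)) +
             np.sum(spins * np.roll(spins, 1, axis=1)))

rng = np.random.default_rng(5)
lattice = rng.choice([-1, 1], size=(4, 4))      # a random 4x4 configuration

# Ground state: all spins aligned; E = -2N for an N-site periodic square lattice
ground = np.ones((4, 4), dtype=int)
print(ising_energy(ground))    # -32
print(ising_energy(lattice))   # somewhere between -32 and +32
```

    Pairs of such configurations and energies are exactly the kind of "observation" data the convolutional network consumes, with no hand-built features.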

  5. Advocacy, promotion and e-learning: Supercourse for zoonosis.

    Science.gov (United States)

    Matibag, Gino C; Igarashi, Manabu; La Porte, Ron E; Tamashiro, Hiko

    2005-09-01

    This paper discusses the history of emerging infectious diseases, risk communication and perception, and the Supercourse lectures as a means to strengthen the concepts and definition of risk management and global governance of zoonosis. The paper begins by outlining some of the key themes and issues in infectious diseases, highlighting the way in which historical analysis challenges ideas of the 'newness' of some of these developments. It then discusses the role of risk communication in public accountability. The bulk of the paper presents an overview of the development of the Internet-based learning system through the Supercourse lectures, which may prove to be a strong arm for promoting the latest medical information, particularly to developing countries.

  6. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    Science.gov (United States)

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.
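    The dynamic-weight idea above can be sketched in a few lines, assuming a simple scalar activation and a vector of adaptive thresholds (both hypothetical values, not taken from the paper):

    ```python
    import numpy as np

    def dynamic_weight(activation, threshold):
        # Rectified difference [y - tau]^+ replacing a stored path weight.
        return np.maximum(activation - threshold, 0.0)

    # Thresholds only ever grow ("atrophy due to disuse"), yet the
    # effective dynamic weight can move in either direction as the
    # coding node's activation changes across test inputs.
    tau = np.array([0.2, 0.5, 0.9])
    print(dynamic_weight(0.8, tau))  # [0.6 0.3 0. ]
    ```

    This illustrates how a monotonic change at the synaptic level (growing thresholds) can still produce bidirectional change in the effective weight, depending on the test activation.
    
    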

  7. Differential neural substrates of working memory and cognitive skill learning in healthy young volunteers

    International Nuclear Information System (INIS)

    Cho, Sang Soo; Lee, Eun Ju; Yoon, Eun Jin; Kim, Yu Kyeong; Lee, Won Woo; Kim, Sang Eun

    2005-01-01

    It is known that different neural circuits are involved in working memory and cognitive skill learning that represent explicit and implicit memory functions, respectively. In the present study, we investigated the metabolic correlates of working memory and cognitive skill learning with correlation analysis of FDG PET images. Fourteen right-handed healthy subjects (age, 24 ± 2 yr; 5 males and 9 females) underwent brain FDG PET and neuropsychological testing. A two-back task and a weather prediction task were used for the evaluation of working memory and cognitive skill learning, respectively. Correlation between regional glucose metabolism and cognitive task performance was examined using SPM99. A significant positive correlation between 2-back task performance and regional glucose metabolism was found in the prefrontal regions and superior temporal gyri bilaterally. In the first term of the weather prediction task, the task performance correlated positively with glucose metabolism in the bilateral prefrontal areas, left middle temporal and posterior cingulate gyri, and left thalamus. In the second and third terms of the task, the correlation was found in the prefrontal areas, superior temporal and anterior cingulate gyri bilaterally, right insula, left parahippocampal gyrus, and right caudate nucleus. We identified the neural substrates related to the performance of working memory and cognitive skill learning. These results indicate that brain regions associated with the explicit memory system are recruited in early periods of cognitive skill learning, but additional brain regions including the caudate nucleus are involved in late periods of cognitive skill learning.

  8. Differential neural substrates of working memory and cognitive skill learning in healthy young volunteers

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sang Soo; Lee, Eun Ju; Yoon, Eun Jin; Kim, Yu Kyeong; Lee, Won Woo; Kim, Sang Eun [Seoul National Univ. College of Medicine, Seoul (Korea, Republic of)

    2005-07-01

    It is known that different neural circuits are involved in working memory and cognitive skill learning that represent explicit and implicit memory functions, respectively. In the present study, we investigated the metabolic correlates of working memory and cognitive skill learning with correlation analysis of FDG PET images. Fourteen right-handed healthy subjects (age, 24 ± 2 yr; 5 males and 9 females) underwent brain FDG PET and neuropsychological testing. A two-back task and a weather prediction task were used for the evaluation of working memory and cognitive skill learning, respectively. Correlation between regional glucose metabolism and cognitive task performance was examined using SPM99. A significant positive correlation between 2-back task performance and regional glucose metabolism was found in the prefrontal regions and superior temporal gyri bilaterally. In the first term of the weather prediction task, the task performance correlated positively with glucose metabolism in the bilateral prefrontal areas, left middle temporal and posterior cingulate gyri, and left thalamus. In the second and third terms of the task, the correlation was found in the prefrontal areas, superior temporal and anterior cingulate gyri bilaterally, right insula, left parahippocampal gyrus, and right caudate nucleus. We identified the neural substrates related to the performance of working memory and cognitive skill learning. These results indicate that brain regions associated with the explicit memory system are recruited in early periods of cognitive skill learning, but additional brain regions including the caudate nucleus are involved in late periods of cognitive skill learning.

  9. The Role of Higher Education in Promoting Lifelong Learning. UIL Publication Series on Lifelong Learning Policies and Strategies: No. 3

    Science.gov (United States)

    Yang, Jin, Ed.; Schneller, Chripa, Ed.; Roche, Stephen, Ed.

    2015-01-01

    There is no doubt that universities have a vital role to play in promoting lifelong learning. This publication presents possible ways of expanding and transforming higher education to facilitate lifelong learning in different socio-economic contexts. Nine articles address the various dimensions of the role of higher education in promoting lifelong…

  10. Learning to read words in a new language shapes the neural organization of the prior languages.

    Science.gov (United States)

    Mei, Leilei; Xue, Gui; Lu, Zhong-Lin; Chen, Chuansheng; Zhang, Mingxia; He, Qinghua; Wei, Miao; Dong, Qi

    2014-12-01

    Learning a new language entails interactions with one's prior language(s). Much research has shown how native language affects the cognitive and neural mechanisms of a new language, but little is known about whether and how learning a new language shapes the neural mechanisms of prior language(s). In two experiments in the current study, we used an artificial language training paradigm in combination with fMRI to examine (1) the effects of different linguistic components (phonology and semantics) of a new language on the neural process of prior languages (i.e., native and second languages), and (2) whether such effects were modulated by the proficiency level in the new language. Results of Experiment 1 showed that when the training in a new language involved semantics (as opposed to only visual forms and phonology), neural activity during word reading in the native language (Chinese) was reduced in several reading-related regions, including the left pars opercularis, pars triangularis, bilateral inferior temporal gyrus, fusiform gyrus, and inferior occipital gyrus. Results of Experiment 2 replicated the results of Experiment 1 and further found that semantic training also affected neural activity during word reading in the subjects' second language (English). Furthermore, we found that the effects of the new language were modulated by the subjects' proficiency level in the new language. These results provide critical imaging evidence for the influence of learning to read words in a new language on word reading in native and second languages. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Biologically plausible learning in neural networks: a lesson from bacterial chemotaxis.

    Science.gov (United States)

    Shimansky, Yury P

    2009-12-01

    Learning processes in the brain are usually associated with plastic changes made to optimize the strength of connections between neurons. Although many details related to biophysical mechanisms of synaptic plasticity have been discovered, it is unclear how the concurrent performance of adaptive modifications in a huge number of spatial locations is organized to minimize a given objective function. Since direct experimental observation of even a relatively small subset of such changes is not feasible, computational modeling is an indispensable investigation tool for solving this problem. However, the conventional method of error back-propagation (EBP) employed for optimizing synaptic weights in artificial neural networks is not biologically plausible. This study based on computational experiments demonstrated that such optimization can be performed rather efficiently using the same general method that bacteria employ for moving closer to an attractant or away from a repellent. With regard to neural network optimization, this method consists of regulating the probability of an abrupt change in the direction of synaptic weight modification according to the temporal gradient of the objective function. Neural networks utilizing this method (regulation of modification probability, RMP) can be viewed as analogous to swimming in the multidimensional space of their parameters in the flow of biochemical agents carrying information about the optimality criterion. The efficiency of RMP is comparable to that of EBP, while RMP has several important advantages. Since the biological plausibility of RMP is beyond a reasonable doubt, the RMP concept provides a constructive framework for the experimental analysis of learning in natural neural networks.
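    A minimal caricature of the RMP idea can be sketched as a "run and tumble" in weight space. Here the tumble is made deterministic (change direction whenever the objective worsens), whereas the paper regulates a tumble probability by the temporal gradient of the objective; all names and settings below are illustrative assumptions.

    ```python
    import numpy as np

    def rmp_optimize(objective, w, step=0.05, steps=2000, seed=0):
        """Chemotaxis-style optimization of weights, no gradients required.

        Keep moving along the current random direction while the objective
        improves; on any worsening step, 'tumble' to a fresh random direction.
        """
        rng = np.random.default_rng(seed)
        direction = rng.normal(size=w.shape)
        direction /= np.linalg.norm(direction)
        prev = objective(w)
        for _ in range(steps):
            w_new = w - step * direction
            cur = objective(w_new)
            if cur < prev:                      # improving: keep running
                w, prev = w_new, cur
            else:                               # worsening: tumble
                direction = rng.normal(size=w.shape)
                direction /= np.linalg.norm(direction)
        return w, prev

    # Toy objective standing in for a network's training error.
    target = np.array([1.0, -2.0, 0.5])
    loss = lambda w: float(np.sum((w - target) ** 2))
    w_opt, final = rmp_optimize(loss, np.zeros(3))
    ```

    Like a bacterium swimming up an attractant gradient, the optimizer only ever senses the temporal change of the objective, never its spatial gradient, which is what makes the scheme biologically plausible.
    
    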

  12. Adaptive neural network/expert system that learns fault diagnosis for different structures

    Science.gov (United States)

    Simon, Solomon H.

    1992-08-01

    Corporations need better real-time monitoring and control systems to improve productivity by watching quality and increasing production flexibility. The innovative technology to achieve this goal is evolving in the form of artificial intelligence and neural networks applied to sensor processing, fusion, and interpretation. By using these advanced AI techniques, we can leverage existing systems and add value to conventional techniques. Neural networks and knowledge-based expert systems can be combined into intelligent sensor systems which provide real-time monitoring, control, evaluation, and fault diagnosis for production systems. Neural network-based intelligent sensor systems are more reliable because they can provide continuous, non-destructive monitoring and inspection. Use of neural networks can result in sensor fusion and the ability to model highly non-linear systems. Improved models can provide a foundation for more accurate performance parameters and predictions. We discuss a research software/hardware prototype which integrates neural networks, expert systems, and sensor technologies and which can adapt across a variety of structures to perform fault diagnosis. The flexibility and adaptability of the prototype in learning two structures is presented. Potential applications are discussed.

  13. Age-related difference in the effective neural connectivity associated with probabilistic category learning

    International Nuclear Information System (INIS)

    Yoon, Eun Jin; Cho, Sang Soo; Kim, Hee Jung; Bang, Seong Ae; Park, Hyun Soo; Kim, Yu Kyeong; Kim, Sang Eun

    2007-01-01

    Although it is well known that explicit memory is affected by deleterious brain changes with aging, the effect of aging on implicit memory such as probabilistic category learning (PCL) is not clear. To identify the effect of aging on the neural interactions underlying successful PCL, we investigated the neural substrates of PCL and the age-related changes of the neural network between these brain regions. 23 young (age, 25 ± 2 y; 11 males) and 14 elderly (67 ± 3 y; 7 males) healthy subjects underwent FDG PET during a resting state and a 150-trial weather prediction (WP) task. Correlations between the WP hit rates and regional glucose metabolism were assessed using SPM2 (P < 0.005). The network models of the two groups differed significantly (χ²diff(37) = 142.47, P < 0.005). Systematic comparisons of each path revealed that the frontal crosscallosal and the frontal to parahippocampal connections were most responsible for the model differences (P < 0.05). For successful PCL, the elderly recruit the basal ganglia implicit memory system, but their medial temporal lobe (MTL) recruitment differs from that of the young. The inadequate MTL correlation pattern in the elderly may be caused by changes in the neural pathway related to explicit memory. These neural changes can explain the decreased performance of PCL in elderly subjects.

  14. Neural coding of basic reward terms of animal learning theory, game theory, microeconomics and behavioural ecology.

    Science.gov (United States)

    Schultz, Wolfram

    2004-04-01

    Neurons in a small number of brain structures detect rewards and reward-predicting stimuli and are active during the expectation of predictable food and liquid rewards. These neurons code the reward information according to basic terms of various behavioural theories that seek to explain reward-directed learning, approach behaviour and decision-making. The involved brain structures include groups of dopamine neurons, the striatum including the nucleus accumbens, the orbitofrontal cortex and the amygdala. The reward information is fed to brain structures involved in decision-making and organisation of behaviour, such as the dorsolateral prefrontal cortex and possibly the parietal cortex. The neural coding of basic reward terms derived from formal theories puts the neurophysiological investigation of reward mechanisms on firm conceptual grounds and provides neural correlates for the function of rewards in learning, approach behaviour and decision-making.

  15. Growing adaptive machines combining development and learning in artificial neural networks

    CERN Document Server

    Bredeche, Nicolas; Doursat, René

    2014-01-01

    The pursuit of artificial intelligence has been a highly active domain of research for decades, yielding exciting scientific insights and productive new technologies. In terms of generating intelligence, however, this pursuit has yielded only limited success. This book explores the hypothesis that adaptive growth is a means of moving forward. By emulating the biological process of development, we can incorporate desirable characteristics of natural neural systems into engineered designs, and thus move closer towards the creation of brain-like systems. The particular focus is on how to design artificial neural networks for engineering tasks. The book consists of contributions from 18 researchers, ranging from detailed reviews of recent domains by senior scientists, to exciting new contributions representing the state of the art in machine learning research. The book begins with broad overviews of artificial neurogenesis and bio-inspired machine learning, suitable both as an introduction to the domains and as a...

  16. Optimal Search Strategy of Robotic Assembly Based on Neural Vibration Learning

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2011-01-01

    This paper presents an implementation of an optimal search strategy (OSS) in the verification of an assembly process based on neural vibration learning. The application problem is the complex robotic assembly of miniature parts, exemplified by mating the gears of a multistage planetary speed reducer. Assembly of the tube over the planetary gears was identified as the most difficult step of the overall assembly. The favourable influence of vibration and rotation movement on the compensation of tolerance was also observed. With the proposed neural-network-based learning algorithm, it is possible to find an extended scope of the vibration state parameters. Using an optimal search strategy based on the minimal-distance path between vibration parameter stage sets (amplitude and frequencies of the robot gripper's vibration) and a recovery parameter algorithm, we can improve the robot assembly behaviour, that is, allow the fastest possible way of mating. We have verified by simulation that the search strategy is suitable for situations of unexpected events due to uncertainties.

  17. Promoting Science Learning and Scientific Identification through Contemporary Scientific Investigations

    Science.gov (United States)

    Van Horne, Katie

    tools and means of contemporary scientific inquiry allows them to gain conceptual development and proficiency with the scientific practices within the contexts of their lives, in ways that provided access to resources that promoted students' stabilization of practice-linked identities. For teachers implementing this instructional model in their classrooms, it brought up dilemmas and opportunities related to their school contexts and their personal history of instructional practices. The work collectively informs how interest-driven project-based science instruction can happen across a range of school contexts and how such models can support meaningful science learning and identification.

  18. Techniques to Promote Reflective Practice and Empowered Learning.

    Science.gov (United States)

    Nguyen-Truong, Connie Kim Yen; Davis, Andra; Spencer, Cassius; Rasmor, Melody; Dekker, Lida

    2018-02-01

    Health care environments are fraught with fast-paced critical demands and ethical dilemmas requiring decisive nursing actions. Nurse educators must prepare nursing students to practice the skills, behaviors, and attitudes needed to meet the challenges of health care demands. Evidence-based, innovative, multimodal techniques with novice and seasoned nurses were incorporated into a baccalaureate (BSN) completion program (RN-to-BSN) to deepen learning, complex skill building, reflective practice, teamwork, and compassion toward the experiences of others. Principles of popular education for engaged teaching-learning were applied. Nursing students experience equitable access to content through co-constructing knowledge with four creative techniques: poem reading aloud to facilitate connectedness; mindfulness to cultivate self-awareness; string figure activities to demonstrate indigenous knowledge and teamwork; and cartooning of difficult subject matter. Nursing school curricula can promote a milieu for developing organizational skills to manage simultaneous priorities, practice reflectively, and develop the empathy and authenticity that effective nursing requires. [J Nurs Educ. 2018;57(2):115-120.]. Copyright 2018, SLACK Incorporated.

  19. Neural signatures of second language learning and control.

    Science.gov (United States)

    Bartolotti, James; Bradley, Kailyn; Hernandez, Arturo E; Marian, Viorica

    2017-04-01

    Experience with multiple languages has unique effects on cortical structure and information processing. Differences in gray matter density and patterns of cortical activation are observed in lifelong bilinguals compared to monolinguals as a result of their experience managing interference across languages. Monolinguals who acquire a second language later in life begin to encounter the same type of linguistic interference as bilinguals, but with a different pre-existing language architecture. The current study used functional magnetic resonance imaging to explore the beginning stages of second language acquisition and cross-linguistic interference in monolingual adults. We found that after English monolinguals learned novel Spanish vocabulary, English and Spanish auditory words led to distinct patterns of cortical activation, with greater recruitment of posterior parietal regions in response to English words and of left hippocampus in response to Spanish words. In addition, cross-linguistic interference from English influenced processing of newly-learned Spanish words, decreasing hippocampus activity. Results suggest that monolinguals may rely on different memory systems to process a newly-learned second language, and that the second language system is sensitive to native language interference. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Supervised Learning Based on Temporal Coding in Spiking Neural Networks.

    Science.gov (United States)

    Mostafa, Hesham

    2017-08-01

    Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.
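    One way such a temporal-coding relation becomes piecewise linear is under a change of variables z = exp(t). The closed form below, for a non-leaky integrate-and-fire neuron with exponentially decaying synaptic kernels and a fixed causal input set, is an assumption of this sketch rather than a quotation of the paper.

    ```python
    import numpy as np

    def output_spike_time(t_in, w):
        """Output spike time of a non-leaky integrate-and-fire neuron.

        In the transformed variable z = exp(t), the input-output relation
        is linear for a fixed causal set:
            z_out = sum_i w_i * z_i / (sum_i w_i - 1)
        (all listed inputs are assumed to be causal here).
        """
        z_in = np.exp(t_in)
        denom = np.sum(w) - 1.0
        assert denom > 0, "neuron only spikes if causal weights sum above 1"
        return float(np.log(np.sum(w * z_in) / denom))

    t_out = output_spike_time(np.array([0.1, 0.2, 0.4]), np.array([0.8, 0.9, 0.7]))
    ```

    Because the relation is linear in z, gradients with respect to the weights are well defined almost everywhere, which is what lets standard ANN training machinery carry over to spike times.
    
    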

  1. Machine learning of radial basis function neural network based on Kalman filter: Introduction

    Directory of Open Access Journals (Sweden)

    Vuković Najdan L.

    2014-01-01

    This paper analyzes machine learning of a radial basis function neural network based on Kalman filtering. Three algorithms are derived: the linearized Kalman filter, the linearized information filter and the unscented Kalman filter. We emphasize the basic properties of these estimation algorithms, demonstrate how their advantages can be used for the optimization of network parameters, derive mathematical models and show how they can be applied to model problems in engineering practice.
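    Because an RBF network is linear in its output weights, a Kalman-style sequential update of those weights reduces to recursive least squares (a Kalman filter with a static state). The sketch below, with hypothetical centers, widths and noise settings, illustrates this special case rather than the paper's full linearized and unscented variants.

    ```python
    import numpy as np

    def rbf_features(x, centers, width=1.0):
        # Gaussian radial basis activations for a scalar input x.
        return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

    centers = np.linspace(-3, 3, 9)
    w = np.zeros(9)                # output weights (the "state")
    P = np.eye(9) * 100.0          # state covariance
    r = 0.01                       # measurement noise variance

    rng = np.random.default_rng(1)
    for _ in range(500):
        x = rng.uniform(-3, 3)
        target = np.sin(x)                     # function to learn
        h = rbf_features(x, centers)           # measurement vector
        k = P @ h / (h @ P @ h + r)            # Kalman gain
        w = w + k * (target - h @ w)           # innovation update
        P = P - np.outer(k, h) @ P             # covariance update

    xs = np.linspace(-3, 3, 50)
    err = max(abs(np.sin(x) - rbf_features(x, centers) @ w) for x in xs)
    ```

    Each sample updates the weights in O(n²) without ever re-solving the full batch problem, which is the practical appeal of filter-based training.
    
    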

  2. Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network.

    Science.gov (United States)

    Del Papa, Bruno; Priesemann, Viola; Triesch, Jochen

    2017-01-01

    Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions - matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model's performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN's spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
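    The avalanche statistics used as criticality signatures can be extracted from a per-time-step activity trace in a few lines. The run-based definition below is one common convention, not necessarily the exact one used in the paper.

    ```python
    def avalanche_sizes(activity):
        """Sizes of neuronal avalanches from a per-time-step spike count.

        An avalanche is a maximal run of time steps with nonzero activity,
        bracketed by silent steps; its size is the total number of spikes.
        """
        sizes, current = [], 0
        for a in activity:
            if a > 0:
                current += a
            elif current > 0:
                sizes.append(current)
                current = 0
        if current > 0:
            sizes.append(current)
        return sizes

    print(avalanche_sizes([0, 2, 3, 0, 0, 1, 0, 4]))  # [5, 1, 4]
    ```

    A histogram of these sizes is what gets tested against a power law when looking for the criticality signatures discussed above.
    
    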

  3. DeepNet: An Ultrafast Neural Learning Code for Seismic Imaging

    International Nuclear Information System (INIS)

    Barhen, J.; Protopopescu, V.; Reister, D.

    1999-01-01

    A feed-forward multilayer neural net is trained to learn the correspondence between seismic data and well logs. The introduction of a virtual input layer, connected to the nominal input layer through a special nonlinear transfer function, enables ultrafast (single iteration), near-optimal training of the net using numerical algebraic techniques. A unique computer code, named DeepNet, has been developed that has achieved, in actual field demonstrations, results unattainable to date with industry-standard tools.

  4. A Comparative Classification of Wheat Grains for Artificial Neural Network and Extreme Learning Machine

    OpenAIRE

    ASLAN, Muhammet Fatih; SABANCI, Kadir; YİĞİT, Enes; KAYABAŞI, Ahmet; TOKTAŞ, Abdurrahim; DUYSAK, Hüseyin

    2018-01-01

    In this study, classification of two types of wheat grains into bread and durum was carried out. The species of wheat grains in this dataset are bread and durum, and the species have equal samples in the dataset of 100 instances each. Seven features, including width, height, area, perimeter, roundness, width and perimeter/area, were extracted from each wheat grain. Classification was separately conducted by Artificial Neural Network (ANN) and Extreme Learning Machine (ELM) artificial intelligence techniques…

  5. Identification of chaotic systems by neural network with hybrid learning algorithm

    International Nuclear Information System (INIS)

    Pan, S.-T.; Lai, C.-C.

    2008-01-01

    Based on the genetic algorithm (GA) and the steepest descent method (SDM), this paper proposes a hybrid algorithm for the learning of neural networks to identify chaotic systems. The systems in question are the logistic map and the Duffing equation, and a different identification scheme is used for each. Simulation results show that our hybrid algorithm is more efficient than other methods.
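    A hybrid of global GA-style search with local steepest-descent refinement can be sketched on a toy objective; the population size, mutation scale and learning rate below are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def hybrid_ga_sdm(loss, grad, dim=2, pop=20, gens=30, lr=0.1, seed=0):
        """Toy hybrid: GA-style mutation/selection for global search,
        with one steepest-descent step per generation to refine survivors."""
        rng = np.random.default_rng(seed)
        P = rng.uniform(-2, 2, size=(pop, dim))
        for _ in range(gens):
            # GA step: mutate, then keep the best half of parents + mutants.
            mutants = P + rng.normal(scale=0.3, size=P.shape)
            both = np.vstack([P, mutants])
            both = both[np.argsort([loss(p) for p in both])][:pop]
            # SDM step: one steepest-descent update per individual.
            P = np.array([p - lr * grad(p) for p in both])
        best = min(P, key=loss)
        return best, loss(best)

    # Quadratic stand-in for a network's identification error.
    loss = lambda p: float(np.sum(p ** 2))
    grad = lambda p: 2.0 * p
    best, val = hybrid_ga_sdm(loss, grad)
    ```

    The GA half explores broadly to escape poor basins while the descent half converges quickly inside the current basin, which is the usual motivation for such hybrids.
    
    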

  6. Neural mechanisms of human perceptual learning: electrophysiological evidence for a two-stage process.

    Science.gov (United States)

    Hamamé, Carlos M; Cosmelli, Diego; Henriquez, Rodrigo; Aboitiz, Francisco

    2011-04-26

    Humans and other animals change the way they perceive the world due to experience. This process has been labeled as perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed. We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity and the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance as reflected by an increase in sensitivity (d') and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment revealing amplitude increments, specific and unspecific to the trained stimulus, in event-related potential (ERP) components N2pc and P3 respectively. P3 unspecific modification can be related to context or task-based learning, while N2pc may be reflecting a more specific attentional-related boosting of target detection. Moreover, bell and U-shaped profiles of oscillatory brain activity in gamma (30-60 Hz) and alpha (8-14 Hz) frequency bands may suggest the existence of two phases for learning acquisition, which can be understood as distinctive optimization mechanisms in stimulus processing. We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.

  7. Statistical Discriminability Estimation for Pattern Classification Based on Neural Incremental Attribute Learning

    DEFF Research Database (Denmark)

    Wang, Ting; Guan, Sheng-Uei; Puthusserypady, Sadasivan

    2014-01-01

    Feature ordering is a significant data preprocessing method in Incremental Attribute Learning (IAL), a novel machine learning approach which gradually trains features according to a given order. Previous research has shown that, similar to feature selection, feature ordering is also important based...... estimation. Moreover, a criterion that summarizes all the produced values of AD is employed with a GA (Genetic Algorithm)-based approach to obtain the optimum feature ordering for classification problems based on neural networks by means of IAL. Compared with the feature ordering obtained by other approaches...

  8. HIERtalker: A default hierarchy of high order neural networks that learns to read English aloud

    Energy Technology Data Exchange (ETDEWEB)

    An, Z.G.; Mniszewski, S.M.; Lee, Y.C.; Papcun, G.; Doolen, G.D.

    1988-01-01

    A new learning algorithm based on a default hierarchy of high order neural networks has been developed that is able to generalize as well as handle exceptions. It learns the "building blocks" or clusters of symbols in a stream that appear repeatedly and convey certain messages. The default hierarchy prevents a combinatoric explosion of rules. A simulator of such a hierarchy, HIERtalker, has been applied to the conversion of English words to phonemes. Achieved accuracy is 99% for trained words and ranges from 76% to 96% for sets of new words. 8 refs., 4 figs., 1 tab.

  9. A Model to Explain the Emergence of Reward Expectancy neurons using Reinforcement Learning and Neural Network

    OpenAIRE

    Shinya, Ishii; Munetaka, Shidara; Katsunari, Shibata

    2006-01-01

    In an experiment of a multi-trial task to obtain a reward, reward expectancy neurons, which responded only in the non-reward trials that are necessary to advance toward the reward, have been observed in the anterior cingulate cortex of monkeys. In this paper, to explain the emergence of the reward expectancy neuron in terms of reinforcement learning theory, a model that consists of a recurrent neural network trained based on reinforcement learning is proposed. The analysis of the hi…

  10. Research of Dynamic Competitive Learning in Neural Networks

    Institute of Scientific and Technical Information of China (English)

    PAN Hao; CEN Li; ZHONG Luo

    2005-01-01

This paper introduces a method for generating new units within a cluster and an algorithm for generating new clusters. The model automatically builds up its dynamically growing internal representation structure during the learning process. Compared with other typical classification algorithms, such as Kohonen's self-organizing map, the model realizes a multilevel classification of the input pattern with an optional accuracy and gives strong support for a parallel computational main processor. The idea is suitable for the high-level storage of complex data structures for object recognition.

  11. Identification of neural connectivity signatures of autism using machine learning

    Directory of Open Access Journals (Sweden)

    Gopikrishna eDeshpande

    2013-10-01

Full Text Available Alterations in neural connectivity have been suggested as a signature of the pathobiology of autism. Although disrupted correlation between cortical regions observed from functional MRI is considered to be an explanatory model for autism, the directional causal influence between brain regions is a vital link missing in these studies. The current study focuses on addressing this in an fMRI study of Theory-of-Mind in 15 high-functioning adolescents and adults with autism (ASD) and 15 typically developing (TD) controls. Participants viewed a series of comic strip vignettes in the MRI scanner and were asked to choose the most logical end to the story from three alternatives, separately for trials involving physical and intentional causality. Causal brain connectivity obtained from a multivariate autoregressive model, along with assessment scores, functional connectivity values, and fractional anisotropy obtained from DTI data for each participant, were submitted to a recursive cluster elimination based support vector machine classifier to determine the accuracy with which the classifier can predict a novel participant’s group membership (ASD or TD). We found a maximum classification accuracy of 95.9% with 19 features which had the highest discriminative ability between the groups. All of the 19 features were effective connectivity paths, indicating that causal information may be critical in discriminating between ASD and TD groups. These effective connectivity paths were also found to be significantly greater in controls as compared to ASD participants and consisted predominantly of outputs from the fusiform face area and middle temporal gyrus, indicating impaired connectivity in ASD participants, particularly in the social brain areas. These findings collectively point towards the fact that alterations in causal brain connectivity in individuals with ASD could serve as a potential non-invasive neuroimaging signature for autism.

  12. Learning and retrieval behavior in recurrent neural networks with pre-synaptic dependent homeostatic plasticity

    Science.gov (United States)

    Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.

    2017-08-01

The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. There are numerous kinds of such phenomena currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that large learning periods are important in order to improve the network learning capacity and discuss this ability in the presence of distinct inhibitory currents.
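
The abstract does not specify the authors' correlation measure; as a rough illustration of the general idea, one can bin two spike trains in time and correlate the binned counts. The bin width, window, and spike times below are invented for the example.

```python
# Illustrative sketch (not the paper's exact measure): compare a target
# spike pattern with a network response by binning spike times and
# computing the Pearson correlation of the binned counts.
import math

def bin_spikes(spike_times, t_max, bin_width):
    """Count spikes per time bin over [0, t_max)."""
    n_bins = int(math.ceil(t_max / bin_width))
    counts = [0] * n_bins
    for t in spike_times:
        if 0 <= t < t_max:
            counts[int(t // bin_width)] += 1
    return counts

def pearson(x, y):
    """Pearson correlation of two equal-length count sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

# A target pattern and a slightly jittered response: high correlation.
target   = [5.0, 12.0, 23.0, 34.0, 41.0]
response = [5.5, 12.2, 22.8, 34.5, 41.1]
c = pearson(bin_spikes(target, 50.0, 2.0), bin_spikes(response, 50.0, 2.0))
```

A precisely timed response lands in the same bins as the target and scores near 1; a decorrelated response scores near 0.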

  13. Modeling the behavioral substrates of associate learning and memory - Adaptive neural models

    Science.gov (United States)

    Lee, Chuen-Chien

    1991-01-01

    Three adaptive single-neuron models based on neural analogies of behavior modification episodes are proposed, which attempt to bridge the gap between psychology and neurophysiology. The proposed models capture the predictive nature of Pavlovian conditioning, which is essential to the theory of adaptive/learning systems. The models learn to anticipate the occurrence of a conditioned response before the presence of a reinforcing stimulus when training is complete. Furthermore, each model can find the most nonredundant and earliest predictor of reinforcement. The behavior of the models accounts for several aspects of basic animal learning phenomena in Pavlovian conditioning beyond previous related models. Computer simulations show how well the models fit empirical data from various animal learning paradigms.

  14. The novel steroidal alkaloids dendrogenin A and B promote proliferation of adult neural stem cells

    International Nuclear Information System (INIS)

    Khalifa, Shaden A.M.; Medina, Philippe de; Erlandsson, Anna; El-Seedi, Hesham R.; Silvente-Poirot, Sandrine; Poirot, Marc

    2014-01-01

Highlights: • Dendrogenin A and B are new aminoalkyl oxysterols. • Dendrogenins stimulated neural stem cell proliferation. • Dendrogenins induce neuronal outgrowth from neurospheres. • Dendrogenins provide new therapeutic options for neurodegenerative disorders. - Abstract: Dendrogenin A (DDA) and dendrogenin B (DDB) are new aminoalkyl oxysterols which display re-differentiation of tumor cells of neuronal origin at nanomolar concentrations. We analyzed the influence of dendrogenins on adult mouse neural stem cell proliferation, sphere formation and differentiation. DDA and DDB were found to have potent proliferative effects in neural stem cells. Additionally, they induce neuronal outgrowth from neurospheres during in vitro cultivation. Taken together, our results demonstrate a novel role for dendrogenins A and B in neural stem cell proliferation and differentiation, which further increases their likely importance to compensate for neuronal cell loss in the brain.

  15. The novel steroidal alkaloids dendrogenin A and B promote proliferation of adult neural stem cells

    Energy Technology Data Exchange (ETDEWEB)

    Khalifa, Shaden A.M., E-mail: shaden.khalifa@ki.se [Department of Neuroscience, Karolinska Institute, Stockholm (Sweden); Medina, Philippe de [Affichem, Toulouse (France); INSERM UMR 1037, Team “Sterol Metabolism and Therapeutic Innovations in Oncology”, Cancer Research Center of Toulouse, F-31052 Toulouse (France); Erlandsson, Anna [Department of Public Health and Caring Sciences, Uppsala University, Uppsala (Sweden); El-Seedi, Hesham R. [Department of Medicinal Chemistry, Biomedical Centre, Uppsala University, Uppsala (Sweden); Silvente-Poirot, Sandrine [INSERM UMR 1037, Team “Sterol Metabolism and Therapeutic Innovations in Oncology”, Cancer Research Center of Toulouse, F-31052 Toulouse (France); University of Toulouse III, Toulouse (France); Institut Claudius Regaud, Toulouse (France); Poirot, Marc, E-mail: marc.poirot@inserm.fr [INSERM UMR 1037, Team “Sterol Metabolism and Therapeutic Innovations in Oncology”, Cancer Research Center of Toulouse, F-31052 Toulouse (France); University of Toulouse III, Toulouse (France); Institut Claudius Regaud, Toulouse (France)

    2014-04-11

Highlights: • Dendrogenin A and B are new aminoalkyl oxysterols. • Dendrogenins stimulated neural stem cell proliferation. • Dendrogenins induce neuronal outgrowth from neurospheres. • Dendrogenins provide new therapeutic options for neurodegenerative disorders. - Abstract: Dendrogenin A (DDA) and dendrogenin B (DDB) are new aminoalkyl oxysterols which display re-differentiation of tumor cells of neuronal origin at nanomolar concentrations. We analyzed the influence of dendrogenins on adult mouse neural stem cell proliferation, sphere formation and differentiation. DDA and DDB were found to have potent proliferative effects in neural stem cells. Additionally, they induce neuronal outgrowth from neurospheres during in vitro cultivation. Taken together, our results demonstrate a novel role for dendrogenins A and B in neural stem cell proliferation and differentiation, which further increases their likely importance to compensate for neuronal cell loss in the brain.

  16. Unsupervised learning in neural networks with short range synapses

    Science.gov (United States)

    Brunnet, L. G.; Agnes, E. J.; Mizusaki, B. E. P.; Erichsen, R., Jr.

    2013-01-01

Different areas of the brain are involved in specific aspects of the information being processed both in learning and in memory formation. For example, the hippocampus is important in the consolidation of information from short-term memory to long-term memory, while emotional memory seems to be handled by the amygdala. On the microscopic scale the underlying structures in these areas differ in the kind of neurons involved, in their connectivity, or in their clustering degree but, at this level, learning and memory are attributed to neuronal synapses mediated by long-term potentiation and long-term depression. In this work we explore the properties of a short range synaptic connection network, a nearest neighbor lattice composed mostly of excitatory neurons and a fraction of inhibitory ones. The mechanism of synaptic modification responsible for the emergence of memory is Spike-Timing-Dependent Plasticity (STDP), a Hebbian-like rule, where potentiation/depression is acquired when causal/non-causal spikes happen in a synapse involving two neurons. The system is intended to store and recognize memories associated with spatial external inputs presented as simple geometrical forms. The synaptic modifications are continuously applied to excitatory connections, including a homeostasis rule and STDP. In this work we explore the different scenarios under which a network with short range connections can accomplish the task of storing and recognizing simple connected patterns.
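
The pair-based STDP rule described (causal pre-before-post pairs potentiate, non-causal pairs depress) can be sketched as follows; the amplitudes and time constant are illustrative values, not taken from the study.

```python
# Minimal pair-based STDP sketch. A synapse is potentiated when the
# pre-synaptic spike precedes the post-synaptic one (causal ordering)
# and depressed otherwise, with exponentially decaying magnitude.
import math

A_PLUS, A_MINUS = 0.01, 0.012   # illustrative learning amplitudes
TAU = 20.0                      # illustrative time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # causal: potentiate
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # non-causal: depress
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)   # causal pair -> w increases
w += stdp_dw(t_pre=30.0, t_post=22.0)   # non-causal pair -> w decreases
```

In the network above this update would be applied only to excitatory connections, alongside the homeostasis rule.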

  17. Application of different entropy formalisms in a neural network for novel word learning

    Science.gov (United States)

    Khordad, R.; Rastegar Sedehi, H. R.

    2015-12-01

In this paper novel word learning in adults is studied. For this goal, four entropy formalisms are employed to include some degree of non-locality in a neural network. The entropy formalisms are Tsallis, Landsberg-Vedral, Kaniadakis, and Abe entropies. First, we have analytically obtained non-extensive cost functions for all the entropies. Then, we have used a generalization of the gradient descent dynamics as a learning rule in a simple perceptron. The Langevin equations are numerically solved and the error function (learning curve) is obtained versus time for different values of the parameters. The influence of the index q and the number of neurons N on learning is investigated for all the entropies. It is found that learning is a decreasing function of time for all the entropies. The rate of learning for the Landsberg-Vedral entropy is slower than for the other entropies. The variation of learning with time for the Landsberg-Vedral entropy is not appreciable when the number of neurons increases. These results suggest that entropy formalisms can be used as a means of studying learning.
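
As a pointer to what such non-extensive formalisms look like, here is a sketch of the Tsallis q-logarithm and q-entropy, which recover the ordinary logarithm and Shannon entropy in the limit q → 1. The probabilities are made-up values, and this is not the paper's actual cost function.

```python
# Tsallis q-deformed quantities: ln_q(x) = (x**(1-q) - 1) / (1 - q)
# and S_q = -sum_i p_i**q * ln_q(p_i). For q -> 1 these reduce to the
# natural logarithm and Shannon entropy, respectively.
import math

def ln_q(x, q):
    """Tsallis q-logarithm; reduces to math.log(x) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def tsallis_entropy(probs, q):
    """S_q = -sum_i p_i**q * ln_q(p_i)."""
    return -sum(p ** q * ln_q(p, q) for p in probs if p > 0)

p = [0.5, 0.25, 0.25]        # illustrative distribution
s_shannon = tsallis_entropy(p, 1.0)   # equals 1.5 * ln 2
s_q2 = tsallis_entropy(p, 2.0)        # equals 1 - sum(p_i**2) = 0.625
```

The index q tunes the degree of non-extensivity, which is the role it plays in the generalized cost functions studied in the paper.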

  18. Neural oscillatory mechanisms during novel grammar learning underlying language analytical abilities.

    Science.gov (United States)

    Kepinska, Olga; Pereda, Ernesto; Caspers, Johanneke; Schiller, Niels O

    2017-12-01

    The goal of the present study was to investigate the initial phases of novel grammar learning on a neural level, concentrating on mechanisms responsible for individual variability between learners. Two groups of participants, one with high and one with average language analytical abilities, performed an Artificial Grammar Learning (AGL) task consisting of learning and test phases. During the task, EEG signals from 32 cap-mounted electrodes were recorded and epochs corresponding to the learning phases were analysed. We investigated spectral power modulations over time, and functional connectivity patterns by means of a bivariate, frequency-specific index of phase synchronization termed Phase Locking Value (PLV). Behavioural data showed learning effects in both groups, with a steeper learning curve and higher ultimate attainment for the highly skilled learners. Moreover, we established that cortical connectivity patterns and profiles of spectral power modulations over time differentiated L2 learners with various levels of language analytical abilities. Over the course of the task, the learning process seemed to be driven by whole-brain functional connectivity between neuronal assemblies achieved by means of communication in the beta band frequency. On a shorter time-scale, increasing proficiency on the AGL task appeared to be supported by stronger local synchronisation within the right hemisphere regions. Finally, we observed that the highly skilled learners might have exerted less mental effort, or reduced attention for the task at hand once the learning was achieved, as evidenced by the higher alpha band power. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Electroacupuncture in the repair of spinal cord injury: inhibiting the Notch signaling pathway and promoting neural stem cell proliferation

    Directory of Open Access Journals (Sweden)

    Xin Geng

    2015-01-01

Full Text Available Electroacupuncture for the treatment of spinal cord injury has a good clinical curative effect, but the underlying mechanism is unclear. In our experiments, the spinal cord of adult Sprague-Dawley rats was clamped for 60 seconds. The Dazhui (GV14) and Mingmen (GV4) acupoints of the rats were subjected to electroacupuncture. Enzyme-linked immunosorbent assay revealed that the expression of serum inflammatory factors was apparently downregulated in rat models of spinal cord injury after electroacupuncture. Hematoxylin-eosin staining and immunohistochemistry results demonstrated that electroacupuncture contributed to the proliferation of neural stem cells in the rat injured spinal cord, and suppressed their differentiation into astrocytes. Real-time quantitative PCR and western blot assays showed that electroacupuncture inhibited activation of the Notch signaling pathway induced by spinal cord injury. These findings indicate that electroacupuncture repaired the injured spinal cord by suppressing the Notch signaling pathway and promoting the proliferation of endogenous neural stem cells.

  20. Neural Control of a Tracking Task via Attention-Gated Reinforcement Learning for Brain-Machine Interfaces.

    Science.gov (United States)

    Wang, Yiwen; Wang, Fang; Xu, Kai; Zhang, Qiaosheng; Zhang, Shaomin; Zheng, Xiaoxiang

    2015-05-01

Reinforcement learning (RL)-based brain machine interfaces (BMIs) enable the user to learn from the environment through interactions to complete the task without desired signals, which is promising for clinical applications. Previous studies exploited Q-learning techniques to discriminate neural states into simple directional actions providing the trial initial timing. However, the movements in BMI applications can be quite complicated, and the action timing explicitly shows the intention when to move. The rich actions and the corresponding neural states form a large state-action space, imposing generalization difficulty on Q-learning. In this paper, we propose to adopt attention-gated reinforcement learning (AGREL) as a new learning scheme for BMIs to adaptively decode high-dimensional neural activities into seven distinct movements (directional moves, holding and resting) due to the efficient weight-updating. We apply AGREL on neural data recorded from M1 of a monkey to directly predict a seven-action set in a time sequence to reconstruct the trajectory of a center-out task. Compared to Q-learning techniques, AGREL could improve the target acquisition rate to 90.16% on average with faster convergence and more stability to follow neural activity over multiple days, indicating the potential to achieve better online decoding performance for more complicated BMI tasks.
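
For contrast with AGREL, the tabular Q-learning baseline mentioned above can be sketched on a toy discretized state-action space. The states, rewards, and tiny two-state environment below are invented for illustration and stand in for discretized neural activity patterns, not the monkey data.

```python
# Tabular Q-learning sketch: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a)).
# The seven-action set mirrors the paper's movement categories.
ACTIONS = ["left", "right", "up", "down", "forward", "hold", "rest"]
ALPHA, GAMMA = 0.5, 0.9

def update(Q, state, action, reward, next_state):
    """One Q-learning backup for an observed (s, a, r, s') transition."""
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

# Two toy "neural states"; choosing "forward" in state 0 earns reward.
Q = {s: {a: 0.0 for a in ACTIONS} for s in (0, 1)}
for _ in range(50):
    update(Q, 0, "forward", reward=1.0, next_state=1)
    update(Q, 1, "rest", reward=0.0, next_state=0)

best = max(Q[0], key=Q[0].get)   # decoded action for state 0
```

With a realistic number of neural states and actions the table becomes huge, which is the generalization difficulty the paper addresses.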

  1. The Culture of Learning Continuum: Promoting Internal Values in Higher Education

    Science.gov (United States)

    Sagy, Ornit; Kali, Yael; Tsaushu, Masha; Tal, Tali

    2018-01-01

    This study endeavors to identify ways to promote a productive learning culture in higher education. Specifically, we sought to encourage development of internal values in students' culture of learning and examine how this can promote their understanding of scientific content. Set in a high enrollment undergraduate biology course, we designed a…

  2. A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language.

    Directory of Open Access Journals (Sweden)

    Bruno Golosio

Full Text Available Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring them together to a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, which is a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, meaning of words, or role of the different classes of words, only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on literature on early language assessment, at the level of about a 4-year-old child, and produced 521 output sentences, expressing a broad range of language processing functionalities.

  3. Teaching strategies to promote concept learning by design challenges

    Science.gov (United States)

    Van Breukelen, Dave; Van Meel, Adrianus; De Vries, Marc

    2017-07-01

Background: This study is the second in a design-based research project, organised around four studies, that aims to improve student learning, teaching skills and teacher training concerning the design-based learning approach called Learning by Design (LBD).

  4. Problem-Based Learning: Exploiting Knowledge of How People Learn to Promote Effective Learning

    Science.gov (United States)

    Wood, E. J.

    2004-01-01

    There is much information from educational psychology studies on how people learn. The thesis of this paper is that we should use this information to guide the ways in which we teach rather than blindly using our traditional methods. In this context, problem-based learning (PBL), as a method of teaching widely used in medical schools but…

  5. Using Deep Learning Neural Networks To Find Best Performing Audience Segments

    Directory of Open Access Journals (Sweden)

    Anup Badhe

    2015-08-01

Full Text Available Finding the appropriate mobile audience for mobile advertising is always challenging, since there are many data points that need to be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning neural networks have been used in machine learning to use multiple processing layers to interpret large datasets with multiple dimensions and come up with a high-level characterization of the data. During a request for an advertisement, and subsequently the serving of the advertisement on the mobile device, many trackers are fired, collecting a lot of data points. If the user likes the advertisement and clicks on it, another set of trackers gives additional information resulting from the click. This information is aggregated by the ad server and shown in its reporting console. The same information can form the basis of machine learning by feeding it to a deep learning neural network to come up with audiences that can be targeted based on the product that is advertised.

  6. Individual differences in sensitivity to reward and punishment and neural activity during reward and avoidance learning.

    Science.gov (United States)

    Kim, Sang Hee; Yoon, HeungSik; Kim, Hackjin; Hamann, Stephan

    2015-09-01

    In this functional neuroimaging study, we investigated neural activations during the process of learning to gain monetary rewards and to avoid monetary loss, and how these activations are modulated by individual differences in reward and punishment sensitivity. Healthy young volunteers performed a reinforcement learning task where they chose one of two fractal stimuli associated with monetary gain (reward trials) or avoidance of monetary loss (avoidance trials). Trait sensitivity to reward and punishment was assessed using the behavioral inhibition/activation scales (BIS/BAS). Functional neuroimaging results showed activation of the striatum during the anticipation and reception periods of reward trials. During avoidance trials, activation of the dorsal striatum and prefrontal regions was found. As expected, individual differences in reward sensitivity were positively associated with activation in the left and right ventral striatum during reward reception. Individual differences in sensitivity to punishment were negatively associated with activation in the left dorsal striatum during avoidance anticipation and also with activation in the right lateral orbitofrontal cortex during receiving monetary loss. These results suggest that learning to attain reward and learning to avoid loss are dependent on separable sets of neural regions whose activity is modulated by trait sensitivity to reward or punishment. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  7. Recurrent fuzzy neural network by using feedback error learning approaches for LFC in interconnected power system

    International Nuclear Information System (INIS)

    Sabahi, Kamel; Teshnehlab, Mohammad; Shoorhedeli, Mahdi Aliyari

    2009-01-01

In this study, a new adaptive controller based on modified feedback error learning (FEL) approaches is proposed for the load frequency control (LFC) problem. The FEL strategy consists of intelligent and conventional controllers in the feedforward and feedback paths, respectively. In this strategy, a conventional feedback controller (CFC), i.e. a proportional, integral and derivative (PID) controller, is essential to guarantee global asymptotic stability of the overall system; and an intelligent feedforward controller (INFC) is adopted to learn the inverse of the controlled system. Therefore, when the INFC learns the inverse of the controlled system, the tracking of the reference signal is done properly. Generally, the CFC is designed at nominal operating conditions of the system and, therefore, fails to provide the best control performance as well as global stability over a wide range of changes in the operating conditions of the system. So, in this study a supervised controller (SC), a lookup table based controller, is introduced for tuning the CFC. During abrupt changes of the power system parameters, the SC adjusts the PID parameters according to these operating conditions. Moreover, for improving the performance of the overall system, a recurrent fuzzy neural network (RFNN) is adopted in the INFC instead of the conventional neural network, which was used in past studies. The proposed FEL controller has been compared with the conventional feedback error learning controller (CFEL) and the PID controller through some performance indices.
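
The FEL structure described above (a fixed feedback controller guarantees rough tracking while a feedforward learner acquires the plant inverse, using the feedback command as its teaching signal) can be sketched with a toy first-order plant. The plant, gains, and learning rate below are invented for illustration; the paper's actual system uses a PID feedback path and an RFNN feedforward path.

```python
# Toy feedback-error-learning sketch: a proportional feedback controller
# plus a linear feedforward "network" trained on the feedback command.
# As the feedforward part learns the plant inverse, the feedback command
# (and hence the tracking error) shrinks.

def run(episodes):
    w = [0.0, 0.0]          # feedforward weights on (reference, output)
    kp, eta = 2.0, 0.02     # feedback gain, learning rate (made up)
    errs = []
    for _ in range(episodes):
        y, total_err = 0.0, 0.0
        for _ in range(50):             # one tracking episode
            r = 1.0                     # constant reference
            u_ff = w[0] * r + w[1] * y  # feedforward (learned inverse)
            u_fb = kp * (r - y)         # conventional feedback
            u = u_ff + u_fb
            # feedback command doubles as the feedforward teaching signal
            w[0] += eta * u_fb * r
            w[1] += eta * u_fb * y
            y = 0.8 * y + 0.2 * u       # first-order discrete plant
            total_err += abs(r - y)
        errs.append(total_err)
    return errs

errs = run(30)   # per-episode tracking error shrinks as the inverse is learned
```

The key design point is that no separate error model is needed: the feedback controller's own output is the gradient signal for the feedforward learner.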

  8. Nuclear power plant monitoring using real-time learning neural network

    International Nuclear Information System (INIS)

    Nabeshima, Kunihiko; Tuerkcan, E.; Ciftcioglu, O.

    1994-01-01

In the present research, an artificial neural network (ANN) with real-time adaptive learning is developed for the plant-wide monitoring of the Borssele Nuclear Power Plant (NPP). Adaptive ANN learning capability is integrated into the monitoring system so that robust and sensitive on-line monitoring is achieved in a real-time environment. The major advantages provided by ANNs are that system modelling is formed by means of measurement information obtained from a multi-output process system, explicit modelling is not required, and the modelling is not restricted to linear systems. Also, ANNs can respond very fast to anomalous operational conditions. The real-time ANN learning methodology with adaptive real-time monitoring capability is described below for the wide-range and plant-wide data from an operating nuclear power plant. The layered neural network with the error backpropagation algorithm for learning has three layers. The network type is auto-associative, i.e. inputs and outputs are exactly the same, using 12 plant signals. (author)
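
The monitoring principle (an auto-associative network learns to reproduce familiar signal patterns, so a large reconstruction residual flags an anomaly) can be sketched with a tiny linear auto-associative layer trained online by a delta rule. The signal dimension, data, and threshold below are made up for illustration; they are not the plant's 12-signal setup or the paper's three-layer backpropagation network.

```python
# Auto-associative monitoring sketch: train W online so that W x ~= x for
# normal (correlated) signal patterns; flag inputs whose reconstruction
# residual |x - W x| is large relative to normal operation.
import random

N = 3            # number of monitored signals (the paper uses 12)
ETA = 0.05       # online learning rate (made up)

def matvec(W, x):
    return [sum(W[i][j] * x[j] for j in range(N)) for i in range(N)]

def train_step(W, x):
    """Delta rule for an auto-associative layer: W += eta * (x - W x) x^T."""
    err = [xi - yi for xi, yi in zip(x, matvec(W, x))]
    for i in range(N):
        for j in range(N):
            W[i][j] += ETA * err[i] * x[j]

def residual(W, x):
    y = matvec(W, x)
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y)) ** 0.5

random.seed(1)
W = [[0.0] * N for _ in range(N)]
for _ in range(500):                      # normal operation: correlated signals
    s = random.uniform(0.5, 1.5)
    train_step(W, [s * 1.0, s * 2.0, s * 0.5])

res_normal = residual(W, [1.0, 2.0, 0.5])       # familiar pattern
res_anomaly = residual(W, [0.5, -0.25, 2.0])    # decorrelated pattern
alarm = res_anomaly > 10 * res_normal           # simple anomaly flag
```

Because training is online, the model tracks slow, legitimate drifts in plant state while still reacting to abrupt anomalous patterns.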

  9. Convolutional Neural Network Based on Extreme Learning Machine for Maritime Ships Recognition in Infrared Images.

    Science.gov (United States)

    Khellal, Atmane; Ma, Hongbin; Fei, Qing

    2018-05-09

The success of Deep Learning models, notably convolutional neural networks (CNNs), makes them the favorable solution for object recognition systems in both visible and infrared domains. However, the lack of training data in the case of maritime ships research leads to poor performance due to the problem of overfitting. In addition, the back-propagation algorithm used to train CNNs is very slow and requires tuning many hyperparameters. To overcome these weaknesses, we introduce a new approach fully based on Extreme Learning Machine (ELM) to learn useful CNN features and perform a fast and accurate classification, which is suitable for infrared-based recognition systems. The proposed approach combines an ELM based learning algorithm to train CNN for discriminative features extraction and an ELM based ensemble for classification. The experimental results on the VAIS dataset, which is the largest dataset of maritime ships, confirm that the proposed approach outperforms the state-of-the-art models in terms of generalization performance and training speed. For instance, the proposed model is up to 950 times faster than the traditional back-propagation based training of convolutional neural networks, primarily for low-level features extraction.
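
The core ELM idea the paper builds on (hidden-layer weights are random and fixed; only the linear output weights are solved in closed form, which is why training is so fast) can be sketched on a toy 2-D classification problem. The data, hidden size, and ridge term are invented for illustration; this is not the paper's CNN-feature pipeline.

```python
# ELM sketch: random tanh hidden layer, then output weights from
# ridge-regularized normal equations: beta = (H^T H + lam I)^{-1} H^T t.
import math, random

random.seed(0)
N_HIDDEN = 6

# Random, untrained hidden layer: tanh(w . x + b)
W = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(N_HIDDEN)]
B = [random.uniform(-1, 1) for _ in range(N_HIDDEN)]

def hidden(x):
    return [math.tanh(W[h][0] * x[0] + W[h][1] * x[1] + B[h])
            for h in range(N_HIDDEN)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for A beta = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    beta = [0.0] * n
    for r in range(n - 1, -1, -1):
        beta[r] = (M[r][n] - sum(M[r][k] * beta[k]
                                 for k in range(r + 1, n))) / M[r][r]
    return beta

# Two separable clusters with labels -1 / +1 (invented toy data).
X = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (1.0, 0.9), (0.9, 1.1), (1.1, 1.0)]
T = [-1, -1, -1, 1, 1, 1]
H = [hidden(x) for x in X]

lam = 1e-3   # ridge term keeps the normal equations well-posed
HtH = [[sum(H[s][i] * H[s][j] for s in range(len(X))) + (lam if i == j else 0)
        for j in range(N_HIDDEN)] for i in range(N_HIDDEN)]
Htt = [sum(H[s][i] * T[s] for s in range(len(X))) for i in range(N_HIDDEN)]
beta = solve(HtH, Htt)

def predict(x):
    return 1 if sum(b * h for b, h in zip(beta, hidden(x))) > 0 else -1
```

No gradient descent is involved at all, which is the source of the large training speedups the paper reports.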

  10. The neural basis of implicit learning and memory: a review of neuropsychological and neuroimaging research.

    Science.gov (United States)

    Reber, Paul J

    2013-08-01

    Memory systems research has typically described the different types of long-term memory in the brain as either declarative versus non-declarative or implicit versus explicit. These descriptions reflect the difference between declarative, conscious, and explicit memory that is dependent on the medial temporal lobe (MTL) memory system, and all other expressions of learning and memory. The other type of memory is generally defined by an absence: either the lack of dependence on the MTL memory system (nondeclarative) or the lack of conscious awareness of the information acquired (implicit). However, definition by absence is inherently underspecified and leaves open questions of how this type of memory operates, its neural basis, and how it differs from explicit, declarative memory. Drawing on a variety of studies of implicit learning that have attempted to identify the neural correlates of implicit learning using functional neuroimaging and neuropsychology, a theory of implicit memory is presented that describes it as a form of general plasticity within processing networks that adaptively improve function via experience. Under this model, implicit memory will not appear as a single, coherent, alternative memory system but will instead be manifested as a principle of improvement from experience based on widespread mechanisms of cortical plasticity. The implications of this characterization for understanding the role of implicit learning in complex cognitive processes and the effects of interactions between types of memory will be discussed for examples within and outside the psychology laboratory. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Neural substrates underlying stimulation-enhanced motor skill learning after stroke.

    Science.gov (United States)

    Lefebvre, Stéphanie; Dricot, Laurence; Laloux, Patrice; Gradkowski, Wojciech; Desfontaines, Philippe; Evrard, Frédéric; Peeters, André; Jamart, Jacques; Vandermeeren, Yves

    2015-01-01

    Motor skill learning is one of the key components of motor function recovery after stroke, especially recovery driven by neurorehabilitation. Transcranial direct current stimulation can enhance neurorehabilitation and motor skill learning in stroke patients. However, the neural mechanisms underlying the retention of stimulation-enhanced motor skill learning involving a paretic upper limb have not been resolved. These neural substrates were explored by means of functional magnetic resonance imaging. Nineteen chronic hemiparetic stroke patients participated in a double-blind, cross-over randomized, sham-controlled experiment with two series. Each series consisted of two sessions: (i) an intervention session during which dual transcranial direct current stimulation or sham was applied during motor skill learning with the paretic upper limb; and (ii) an imaging session 1 week later, during which the patients performed the learned motor skill. The motor skill learning task, called the 'circuit game', involves a speed/accuracy trade-off and consists of moving a pointer controlled by a computer mouse along a complex circuit as quickly and accurately as possible. Relative to the sham series, dual transcranial direct current stimulation applied bilaterally over the primary motor cortex during motor skill learning with the paretic upper limb resulted in (i) enhanced online motor skill learning; (ii) enhanced 1-week retention; and (iii) superior transfer of performance improvement to an untrained task. The 1-week retention's enhancement driven by the intervention was associated with a trend towards normalization of the brain activation pattern during performance of the learned motor skill relative to the sham series. A similar trend towards normalization relative to sham was observed during performance of a simple, untrained task without a speed/accuracy constraint, despite a lack of behavioural difference between the dual transcranial direct current stimulation and sham

  12. Multivariate Cross-Classification: Applying machine learning techniques to characterize abstraction in neural representations

    Directory of Open Access Journals (Sweden)

    Jonas eKaplan

    2015-03-01

Full Text Available Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application.
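
The MVCC logic (train a classifier in one cognitive context, test it in another; above-chance transfer suggests a representation shared across contexts) can be sketched with a nearest-centroid classifier. The "activity patterns", labels, and contexts below are invented toy values, not data from any of the reviewed studies.

```python
# MVCC-style sketch: fit class centroids on context A, decode context B.
def centroid(patterns):
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def nearest_centroid(x, centroids):
    """Return the label whose centroid is closest to pattern x."""
    dist = lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Context A (e.g. perception): class-specific toy activity patterns.
ctx_a = {"faces":  [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
         "houses": [[0.1, 1.0, 0.9], [0.0, 0.9, 1.0]]}
# Context B (e.g. imagery): same class structure, shifted and noisier.
ctx_b = {"faces":  [[1.1, 0.3, 0.2]],
         "houses": [[0.2, 1.1, 0.8]]}

centroids = {label: centroid(pats) for label, pats in ctx_a.items()}  # train on A
correct = sum(nearest_centroid(x, centroids) == label                 # test on B
              for label, pats in ctx_b.items() for x in pats)
accuracy = correct / 2   # cross-context decoding accuracy
```

In real MVCC studies the classifier is usually cross-validated and accuracy is compared against a permutation-based chance level rather than read off directly.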

  13. Neural correlates of reward-based spatial learning in persons with cocaine dependence.

    Science.gov (United States)

    Tau, Gregory Z; Marsh, Rachel; Wang, Zhishun; Torres-Sanchez, Tania; Graniello, Barbara; Hao, Xuejun; Xu, Dongrong; Packard, Mark G; Duan, Yunsuo; Kangarlu, Alayar; Martinez, Diana; Peterson, Bradley S

    2014-02-01

    Dysfunctional learning systems are thought to be central to the pathogenesis of addiction and to impair recovery from it. The functioning of the brain circuits for episodic memory or learning that support goal-directed behavior has not been studied previously in persons with cocaine dependence (CD). Thirteen abstinent CD and 13 healthy participants underwent MRI scanning while performing a task that requires the use of spatial cues to navigate a virtual-reality environment and find monetary rewards, allowing the functional assessment of the brain systems for spatial learning, a form of episodic memory. Whereas both groups performed similarly on the reward-based spatial learning task, we identified disturbances in brain regions involved in learning and reward in CD participants. In particular, CD was associated with impaired functioning of the medial temporal lobe (MTL), a brain region that is crucial for spatial learning (and episodic memory), with concomitant recruitment of the striatum (which normally participates in stimulus-response, or habit, learning) and prefrontal cortex. CD was also associated with enhanced sensitivity of the ventral striatum to unexpected rewards but not to expected rewards earned during spatial learning. We provide evidence that spatial learning in CD is characterized by disturbances in the functioning of an MTL-based system for episodic memory and a striatum-based system for stimulus-response learning and reward. We found additional abnormalities in distributed cortical regions. Consistent with findings from animal studies, we provide the first evidence in humans describing the disruptive effects of cocaine on the coordinated functioning of multiple neural systems for learning and memory.

  14. Neural correlates of testing effects in vocabulary learning.

    Science.gov (United States)

    van den Broek, Gesa S E; Takashima, Atsuko; Segers, Eliane; Fernández, Guillén; Verhoeven, Ludo

    2013-09-01

    Tests that require memory retrieval strongly improve long-term retention in comparison to continued studying. For example, once learners know the translation of a word, restudy practice, during which they see the word and its translation again, is less effective than testing practice, during which they see only the word and retrieve the translation from memory. In the present functional magnetic resonance imaging (fMRI) study, we investigated the neuro-cognitive mechanisms underlying this striking testing effect. Twenty-six young adults without prior knowledge of Swahili learned the translation of 100 Swahili words and then further practiced the words in an fMRI scanner by restudying or by testing. Recall of the translations on a final memory test after one week was significantly better and faster for tested words than for restudied words. Brain regions that were more active during testing than during restudying included the left inferior frontal gyrus, ventral striatum, and midbrain areas. Increased activity in the left inferior parietal and left middle temporal areas during testing but not during restudying predicted better recall on the final memory test. Together, results suggest that testing may be more beneficial than restudying due to processes related to targeted semantic elaboration and selective strengthening of associations between retrieval cues and relevant responses, and may involve increased effortful cognitive control and modulations of memory through striatal motivation and reward circuits. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Promoting autonomous learning in English through the implementation of Content and Language Integrated Learning (CLIL) in science and maths subjects

    Directory of Open Access Journals (Sweden)

    Andriani Putu Fika

    2018-01-01

    Full Text Available Autonomous learning is a concept in which the learner has the ability to take charge of their own learning. It is a notable capacity that students should develop. The aim of this research is to identify the strategies used by grade two teachers in Bali Kiddy Primary School to promote autonomous learning in English through the implementation of Content and Language Integrated Learning (CLIL) in science and maths subjects. This study was designed as a descriptive qualitative study. The data were collected through observation, interviews, and document study. The results show several strategies for promoting autonomous learning in English through the implementation of CLIL in science and maths subjects: table-of-contents training, questioning and presenting, journal writing, choosing activities, and using online activities. These strategies can be adopted, or even adapted, as ways to promote autonomous learning in the English subject.

  16. Accelerating learning of neural networks with conjugate gradients for nuclear power plant applications

    International Nuclear Information System (INIS)

    Reifman, J.; Vitela, J.E.

    1994-01-01

    The method of conjugate gradients is used to expedite the learning process of feedforward multilayer artificial neural networks and to systematically update both the learning parameter and the momentum parameter at each training cycle. The mechanism for the occurrence of premature saturation of the network nodes observed with the back propagation algorithm is described, suggestions are made to eliminate this undesirable phenomenon, and the reason why this phenomenon is precluded in the method of conjugate gradients is presented. The proposed method is compared with the standard back propagation algorithm in the training of neural networks to classify transient events in nuclear power plants simulated by the Midland Nuclear Power Plant Unit 2 simulator. The comparison results indicate that the rate of convergence of the proposed method is much greater than that of standard back propagation, that it reduces both the number of training cycles and the CPU time, and that it is less sensitive to the choice of initial weights. The advantages of the method are more noticeable and important for problems where the network architecture consists of a large number of nodes, the training database is large, and a tight convergence criterion is desired.
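    As a generic illustration of the idea (not the authors' exact implementation), the sketch below trains a tiny feedforward network on XOR with Polak–Ribière nonlinear conjugate gradients: the step length (the "learning parameter") comes from a backtracking line search rather than a fixed constant, and the direction-mixing coefficient β plays the role of an automatically updated momentum term. Network size, seed, and cycle count are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR toy problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

sizes = [(2, 4), (4, 1)]                       # a 2-4-1 network
theta = rng.normal(0, 1, size=sum(m * n + n for m, n in sizes))

def unpack(theta):
    Ws, i = [], 0
    for m, n in sizes:
        W = theta[i:i + m * n].reshape(m, n); i += m * n
        b = theta[i:i + n]; i += n
        Ws.append((W, b))
    return Ws

def loss_grad(theta):
    """Sum-of-squares loss and its gradient via backpropagation."""
    (W1, b1), (W2, b2) = unpack(theta)
    H = np.tanh(X @ W1 + b1)
    Y = 1 / (1 + np.exp(-(H @ W2 + b2)))
    E = Y - T
    loss = 0.5 * (E ** 2).sum()
    dY = E * Y * (1 - Y)
    dH = (dY @ W2.T) * (1 - H ** 2)
    g = np.concatenate([(X.T @ dH).ravel(), dH.sum(0),
                        (H.T @ dY).ravel(), dY.sum(0)])
    return loss, g

# Polak-Ribiere nonlinear conjugate gradient with backtracking line search
loss, g = loss_grad(theta)
d = -g
for cycle in range(300):
    alpha = 1.0
    while loss_grad(theta + alpha * d)[0] > loss and alpha > 1e-8:
        alpha *= 0.5                           # shrink step until loss drops
    theta = theta + alpha * d
    new_loss, new_g = loss_grad(theta)
    beta = max(0.0, new_g @ (new_g - g) / (g @ g))   # PR+ "momentum"
    d = -new_g + beta * d
    loss, g = new_loss, new_g

print(f"final XOR loss: {loss:.4f}")
```

The `max(0, ·)` restart in β is the standard PR+ safeguard; with it, the direction never strays far from the steepest-descent direction, which is one reason conjugate-gradient training tolerates poor initial weights better than plain back propagation.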

  17. A role for adult TLX-positive neural stem cells in learning and behaviour.

    Science.gov (United States)

    Zhang, Chun-Li; Zou, Yuhua; He, Weimin; Gage, Fred H; Evans, Ronald M

    2008-02-21

    Neurogenesis persists in the adult brain and can be regulated by a plethora of external stimuli, such as learning, memory, exercise, environment and stress. Although newly generated neurons are able to migrate and preferentially incorporate into the neural network, how these cells are molecularly regulated and whether they are required for any normal brain function are unresolved questions. The adult neural stem cell pool is composed of orphan nuclear receptor TLX-positive cells. Here, using genetic approaches in mice, we demonstrate that TLX (also called NR2E1) regulates adult neural stem cell proliferation in a cell-autonomous manner by controlling a defined genetic network implicated in cell proliferation and growth. Consequently, specific removal of TLX from the adult mouse brain through inducible recombination results in a significant reduction of stem cell proliferation and a marked decrement in spatial learning. In contrast, the resulting suppression of adult neurogenesis does not affect contextual fear conditioning, locomotion or diurnal rhythmic activities, indicating a more selective contribution of newly generated neurons to specific cognitive functions.

  18. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    Science.gov (United States)

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-05-01

    Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch with a fixed size is obtained using the center-proliferation segmentation (CPS) method, and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which sufficiently considers multi-scale contextual information of deep layer maps. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sample method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.
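    The extreme learning machine component can be illustrated in isolation: hidden weights are drawn at random and frozen, and only the output weights are solved in closed form by least squares, with no iterative training of the hidden layer. The sketch below uses synthetic two-class Gaussian data as a stand-in for the CNN-derived nuclei features (sizes and distributions are made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=200, n_classes=2):
    """Extreme learning machine: random, fixed hidden layer; output
    weights obtained in closed form by least squares."""
    W = rng.normal(0, 1, size=(X.shape[1], n_hidden))
    b = rng.normal(0, 1, size=n_hidden)
    H = np.tanh(X @ W + b)                     # random feature map
    Y = np.eye(n_classes)[y]                   # one-hot targets
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return (np.tanh(X @ W + b) @ beta).argmax(axis=1)

# Two Gaussian clusters as stand-ins for low/high-grade nuclei features
X0 = rng.normal(0.0, 1, size=(100, 10))
X1 = rng.normal(1.5, 1, size=(100, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

model = elm_fit(X, y)
acc = (elm_predict(model, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because only a linear least-squares solve is needed, the output layer trains in one shot; in the paper's architecture this classifier sits on top of learned CNN features rather than raw inputs.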

  19. Robust sequential learning of feedforward neural networks in the presence of heavy-tailed noise.

    Science.gov (United States)

    Vuković, Najdan; Miljković, Zoran

    2015-03-01

    Feedforward neural networks (FFNN) are among the most used neural networks for modeling various nonlinear problems in engineering. In sequential and especially real-time processing, all neural network models fail when faced with outliers. Outliers are found across a wide range of engineering problems. Recent research results in the field have shown that, to avoid overfitting or divergence of the model, a new approach is needed, especially if the FFNN is to run sequentially or in real time. To accommodate the limitations of FFNNs when training data contain a certain number of outliers, this paper presents a new learning algorithm based on an improvement of the conventional extended Kalman filter (EKF). The extended Kalman filter robust to outliers (EKF-OR) is a probabilistic generative model in which the measurement noise covariance is not constant; the sequence of measurement noise covariances is modeled as a stochastic process over the set of symmetric positive-definite matrices, with the prior modeled as an inverse Wishart distribution. In each iteration, EKF-OR simultaneously estimates the noise parameters and the current best estimate of the FFNN parameters. The Bayesian framework enables one to mathematically derive the update expressions, while the analytical intractability of the Bayes update step is solved by using a structured variational approximation. All mathematical expressions in the paper are derived from first principles. An extensive experimental study shows that an FFNN trained with the developed learning algorithm achieves low prediction error and good generalization quality regardless of the presence of outliers in the training data. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Learning Control of Fixed-Wing Unmanned Aerial Vehicles Using Fuzzy Neural Networks

    Directory of Open Access Journals (Sweden)

    Erdal Kayacan

    2017-01-01

    Full Text Available A learning control strategy is preferred for the control and guidance of a fixed-wing unmanned aerial vehicle to deal with the lack of modeling and flight uncertainties. For learning the plant model as well as changing working conditions online, a fuzzy neural network (FNN) is used in parallel with a conventional P (proportional) controller. Among the learning algorithms in the literature, a derivative-free one, the sliding mode control (SMC) theory-based learning algorithm, is preferred as it has been proved to be computationally efficient in real-time applications. Its proven robustness and finite-time converging nature make the learning algorithm appropriate for controlling an unmanned aerial vehicle, as the computational power is always limited in unmanned aerial vehicles (UAVs). The parameter update rules and stability conditions of the learning are derived, and the proof of the stability of the learning algorithm is shown by using a candidate Lyapunov function. Intensive simulations are performed to illustrate the applicability of the proposed controller, which includes the tracking of a three-dimensional trajectory by the UAV subject to time-varying wind conditions. The simulation results show the efficiency of the proposed control algorithm, especially in real-time control systems, because of its computational efficiency.

  1. Design and FPGA-implementation of multilayer neural networks with on-chip learning

    International Nuclear Information System (INIS)

    Haggag, S.S.M.Y

    2008-01-01

    Artificial Neural Networks (ANN) are used in many applications in industry because of their parallel structure, high speed, and their ability to give easy solutions to complicated problems. For example, identifying oranges and apples in a sorting machine with a neural network is easier than using image processing techniques to do the same thing. There are different software packages for designing, training, and testing ANNs, but in order to use an ANN in industry, it should be implemented on hardware outside the computer. Neural networks are artificial systems inspired by the brain's cognitive behavior, which can learn tasks with some degree of complexity, such as signal processing, diagnosis, robotics, image processing, and pattern recognition. Many applications demand high computing power, and traditional software implementations are not sufficient. This thesis presents the design and FPGA implementation of multilayer neural networks with on-chip learning in re-configurable hardware. Hardware implementation of neural network algorithms is very interesting due to their high performance and because they can easily be made parallel. The architecture proposed herein takes advantage of distinct data paths for the forward and backward propagation stages and a pipelined adaptation of the on-line backpropagation algorithm to significantly improve the performance of the learning phase. The architecture is easily scalable and able to cope with arbitrary network sizes with the same hardware. The implementation is targeted at diagnosis of research reactor accidents, to avoid the risk of occurrence of a nuclear accident. The proposed circuits are implemented using the Xilinx FPGA chip XC40150xv and occupy 73% of the chip's CLBs. The design achieves 10.8 μs to take a decision in the forward propagation, compared with the current software implementation of the RPS, which takes 24 ms. The results show that the proposed architecture leads to a significant speed-up compared to high-end software solutions. On

  2. Monetary incentives at retrieval promote recognition of involuntarily learned emotional information.

    Science.gov (United States)

    Yan, Chunping; Li, Yunyun; Zhang, Qin; Cui, Lixia

    2018-03-07

    Previous studies have suggested that the effects of reward on memory processes are affected by certain factors, but it remains unclear whether the effects of reward at retrieval on recognition processes are influenced by emotion. The event-related potential was used to investigate the combined effect of reward and emotion on memory retrieval and its neural mechanism. The behavioral results indicated that the reward at retrieval improved recognition performance under positive and negative emotional conditions. The event-related potential results indicated that there were significant interactions between the reward and emotion in the average amplitude during recognition, and the significant reward effects from the frontal to parietal brain areas appeared at 130-800 ms for positive pictures and at 190-800 ms for negative pictures, but there were no significant reward effects of neutral pictures; the reward effect of positive items appeared relatively earlier, starting at 130 ms, and that of negative pictures began at 190 ms. These results indicate that monetary incentives at retrieval promote recognition of involuntarily learned emotional information.

  3. The Neural Circuitry of Expertise: Perceptual Learning and Social Cognition

    Directory of Open Access Journals (Sweden)

    Michael Harré

    2013-12-01

    Full Text Available Amongst the most significant questions we are confronted with today are the integration of the brain's micro-circuitry, our ability to build the complex social networks that underpin society, and how our society impacts our ecological environment. In trying to unravel these issues, one place to begin is at the level of the individual: to consider how we accumulate information about our environment, how this information leads to decisions, and how our individual decisions in turn create our social environment. While this is an enormous task, we may already have at hand many of the tools we need. This article is intended to review some of the recent results in neuro-cognitive research and show how they can be extended to two very specific types of expertise: perceptual expertise and social cognition. These two cognitive skills span a vast range of our genetic heritage. Perceptual expertise developed very early in our evolutionary history and is likely a highly developed part of all mammals' cognitive ability. On the other hand, social cognition is most highly developed in humans, in that we are able to maintain larger and more stable long-term social connections with more behaviourally diverse individuals than any other species. To illustrate these ideas I will discuss board games as a toy model of social interactions, as they include many of the relevant concepts: perceptual learning, decision-making, long-term planning and understanding the mental states of other people. Using techniques that have been developed in mathematical psychology, I show that we can represent some of the key features of expertise using stochastic differential equations. Such models demonstrate how an expert's long exposure to a particular context influences the information they accumulate in order to make a decision. These processes are not confined to board games; we are all experts in our daily lives through long exposure to the many regularities of daily tasks and

  4. Teaching Strategies to Promote Concept Learning by Design Challenges

    Science.gov (United States)

    Van Breukelen, Dave; Van Meel, Adrianus; De Vries, Marc

    2017-01-01

    Background: This study is the second study of a design-based research, organised around four studies, that aims to improve student learning, teaching skills and teacher training concerning the design-based learning approach called Learning by Design (LBD). Purpose: LBD uses the context of design challenges to learn, among other things, science.…

  5. A Neural Network Model to Learn Multiple Tasks under Dynamic Environments

    Science.gov (United States)

    Tsumori, Kenji; Ozawa, Seiichi

    When environments are dynamically changed for agents, the knowledge acquired in one environment might be useless in the future. In such dynamic environments, agents should be able not only to acquire new knowledge but also to modify old knowledge as they learn. However, modifying all previously acquired knowledge is not efficient, because knowledge once acquired may be useful again when a similar environment reappears, and some knowledge can be shared among different environments. To learn efficiently in such environments, we propose a neural network model that consists of the following modules: a resource allocating network, long-term and short-term memory, and an environment change detector. We evaluate the model under a class of dynamic environments in which multiple function approximation tasks are sequentially given. The experimental results demonstrate that the proposed model possesses stable incremental learning, accurate environmental change detection, proper association and recall of old knowledge, and efficient knowledge transfer.

  6. A Tsallis’ statistics based neural network model for novel word learning

    Science.gov (United States)

    Hadzibeganovic, Tarik; Cannas, Sergio A.

    2009-03-01

    We invoke the Tsallis entropy formalism, a nonextensive entropy measure, to include some degree of non-locality in a neural network that is used for simulation of novel word learning in adults. A generalization of the gradient descent dynamics, realized via nonextensive cost functions, is used as a learning rule in a simple perceptron. The model is first investigated for general properties, and then tested against the empirical data, gathered from simple memorization experiments involving two populations of linguistically different subjects. Numerical solutions of the model equations corresponded to the measured performance states of human learners. In particular, we found that the memorization tasks were executed with rather small but population-specific amounts of nonextensivity, quantified by the entropic index q. Our findings raise the possibility of using entropic nonextensivity as a means of characterizing the degree of complexity of learning in both natural and artificial systems.
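    One common way to introduce Tsallis-style nonextensivity into a perceptron's cost, sketched below, is to replace the logarithm in the cross-entropy with the q-logarithm ln_q(x) = (x^(1−q) − 1)/(1−q), which recovers the ordinary cost as q → 1. This is an illustrative construction under that assumption, not necessarily the paper's exact cost function; the data and hyperparameters are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_q(x, q):
    """Tsallis q-logarithm; reduces to the natural log as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def train_perceptron(X, t, q, lr=0.1, epochs=500):
    """Gradient descent on a q-generalized cross-entropy cost
    C_q = -sum[t * log_q(y) + (1 - t) * log_q(1 - y)]."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        y = 1 / (1 + np.exp(-X @ w))
        # dC_q/dy = -(t * y**-q - (1 - t) * (1 - y)**-q), then chain
        # through the sigmoid derivative y * (1 - y); at q = 1 this
        # collapses to the familiar (y - t) error signal.
        dy = -(t * y ** -q - (1 - t) * (1 - y) ** -q)
        w -= lr * X.T @ (dy * y * (1 - y)) / len(t)
    return w

# Toy "word learning" data: linearly separable classes plus a bias column
X = np.hstack([rng.normal(0, 1, (200, 5)), np.ones((200, 1))])
t = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

accs = []
for q in (1.0, 0.9, 1.1):
    w = train_perceptron(X, t, q)
    y = 1 / (1 + np.exp(-X @ w))
    accs.append(((y > 0.5) == t).mean())
    print(f"q = {q:.1f}: training accuracy = {accs[-1]:.2f}")
```

Varying the entropic index q reshapes how strongly confident errors are penalized, which is the kind of knob the authors fit to the population-specific memorization data.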

  7. Deep learning beyond cats and dogs: recent advances in diagnosing breast cancer with deep neural networks.

    Science.gov (United States)

    Burt, Jeremy R; Torosdagli, Neslisah; Khosravan, Naji; RaviPrakash, Harish; Mortazi, Aliasghar; Tissavirasingham, Fiona; Hussein, Sarfaraz; Bagci, Ulas

    2018-04-10

    Deep learning has demonstrated tremendous revolutionary changes in the computing industry and its effects in radiology and imaging sciences have begun to dramatically change screening paradigms. Specifically, these advances have influenced the development of computer-aided detection and diagnosis (CAD) systems. These technologies have long been thought of as "second-opinion" tools for radiologists and clinicians. However, with significant improvements in deep neural networks, the diagnostic capabilities of learning algorithms are approaching levels of human expertise (radiologists, clinicians etc.), shifting the CAD paradigm from a "second opinion" tool to a more collaborative utility. This paper reviews recently developed CAD systems based on deep learning technologies for breast cancer diagnosis, explains their superiorities with respect to previously established systems, defines the methodologies behind the improved achievements including algorithmic developments, and describes remaining challenges in breast cancer screening and diagnosis. We also discuss possible future directions for new CAD models that continue to change as artificial intelligence algorithms evolve.

  8. Learning-Related Changes in Adolescents' Neural Networks during Hypothesis-Generating and Hypothesis-Understanding Training

    Science.gov (United States)

    Lee, Jun-Ki; Kwon, Yongju

    2012-01-01

    Fourteen science high school students participated in this study, which investigated neural-network plasticity associated with hypothesis-generating and hypothesis-understanding in learning. The students were divided into two groups and participated in either hypothesis-generating or hypothesis-understanding type learning programs, which were…

  9. Factors Promoting Vocational Students' Learning at Work: Study on Student Experiences

    Science.gov (United States)

    Virtanen, Anne; Tynjälä, Päivi; Eteläpelto, Anneli

    2014-01-01

    In order to promote effective pedagogical practices for students' work-based learning, we need to understand better how students' learning at work can be supported. This paper examines the factors explaining students' workplace learning (WPL) outcomes, addressing three aspects: (1) student-related individual factors, (2) social and…

  10. Promoting Critical Thinking through Service Learning: A Home-Visiting Case Study

    Science.gov (United States)

    Campbell, Cynthia G.; Oswald, Brianna R.

    2018-01-01

    As stated in APA Learning Outcomes 2 and 3, two central goals of higher education instruction are promoting students' critical thinking skills and connecting student learning to real-life applications. To meet these goals, a community-based service-learning experience was designed using task value, interpersonal accountability, cognitive…

  11. D.E.E.P. Learning: Promoting Informal STEM Learning through a Popular Gaming Platform

    Science.gov (United States)

    Simms, E.; Rohrlick, D.; Layman, C.; Peach, C. L.; Orcutt, J. A.

    2011-12-01

    The research and development of educational games, and the study of the educational value of interactive games in general, have lagged far behind efforts for games created for the purpose of entertainment. But evidence suggests that digital simulations and games have the "potential to advance multiple science learning goals, including motivation to learn science, conceptual understanding, science process skills, understanding of the nature of science, scientific discourse and argumentation, and identification with science and science learning." (NRC, 2011). It is also generally recognized that interactive digital games have the potential to promote the development of valuable learning and life skills, including data processing, decision-making, critical thinking, planning, communication and collaboration (Kirriemuir and MacFarlane, 2006). Video games are now played in 67% of American households (ESA, 2010), and across a broad range of ages, making them a potentially valuable tool for Science, Technology, Engineering and Mathematics (STEM) learning among the diverse audiences associated with informal science education institutions (ISEIs; e.g., aquariums, museums, science centers). We are attempting to capitalize on this potential by developing games based on the popular Microsoft Xbox360 gaming platform and the free Microsoft XNA game development kit. The games, collectively known as Deep-sea Extreme Environment Pilot (D.E.E.P.), engage ISEI visitors in the exploration and understanding of the otherwise remote deep-sea environment. Players assume the role of piloting a remotely-operated vehicle (ROV) to explore ocean observing systems and hydrothermal vent environments, and are challenged to complete science-based objectives in order to earn points under timed conditions. The current games are intended to be relatively brief visitor experiences (on the order of several minutes) that support complementary exhibits and programming, and promote interactive visitor

  12. Behavioural and neural basis of anomalous motor learning in children with autism.

    Science.gov (United States)

    Marko, Mollie K; Crocetti, Deana; Hulst, Thomas; Donchin, Opher; Shadmehr, Reza; Mostofsky, Stewart H

    2015-03-01

    Autism spectrum disorder is a developmental disorder characterized by deficits in social and communication skills and repetitive and stereotyped interests and behaviours. Although not part of the diagnostic criteria, individuals with autism experience a host of motor impairments, potentially due to abnormalities in how they learn motor control throughout development. Here, we used behavioural techniques to quantify motor learning in autism spectrum disorder, and structural brain imaging to investigate the neural basis of that learning in the cerebellum. Twenty children with autism spectrum disorder and 20 typically developing control subjects, aged 8-12, made reaching movements while holding the handle of a robotic manipulandum. In random trials the reach was perturbed, resulting in errors that were sensed through vision and proprioception. The brain learned from these errors and altered the motor commands on the subsequent reach. We measured learning from error as a function of the sensory modality of that error, and found that children with autism spectrum disorder outperformed typically developing children when learning from errors that were sensed through proprioception, but underperformed typically developing children when learning from errors that were sensed through vision. Previous work had shown that this learning depends on the integrity of a region in the anterior cerebellum. Here we found that the anterior cerebellum, extending into lobule VI, and parts of lobule VIII were smaller than normal in children with autism spectrum disorder, with a volume that was predicted by the pattern of learning from visual and proprioceptive errors. We suggest that the abnormal patterns of motor learning in children with autism spectrum disorder, showing an increased sensitivity to proprioceptive error and a decreased sensitivity to visual error, may be associated with abnormalities in the cerebellum. © The Author (2015). Published by Oxford University Press on behalf

  13. Failing to learn from negative prediction errors: Obesity is associated with alterations in a fundamental neural learning mechanism.

    Science.gov (United States)

    Mathar, David; Neumann, Jane; Villringer, Arno; Horstmann, Annette

    2017-10-01

    Prediction errors (PEs) encode the difference between expected and actual action outcomes in the brain via dopaminergic modulation. Integration of these learning signals ensures efficient behavioral adaptation. Obesity has recently been linked to altered dopaminergic fronto-striatal circuits, thus implying impairments in cognitive domains that rely on its integrity. 28 obese and 30 lean human participants performed an implicit stimulus-response learning paradigm inside an fMRI scanner. Computational modeling and psycho-physiological interaction (PPI) analysis was utilized for assessing PE-related learning and associated functional connectivity. We show that human obesity is associated with insufficient incorporation of negative PEs into behavioral adaptation even in a non-food context, suggesting differences in a fundamental neural learning mechanism. Obese subjects were less efficient in using negative PEs to improve implicit learning performance, despite proper coding of PEs in striatum. We further observed lower functional coupling between ventral striatum and supplementary motor area in obese subjects subsequent to negative PEs. Importantly, strength of functional coupling predicted task performance and negative PE utilization. These findings show that obesity is linked to insufficient behavioral adaptation specifically in response to negative PEs, and to associated alterations in function and connectivity within the fronto-striatal system. Recognition of neural differences as a central characteristic of obesity hopefully paves the way to rethink established intervention strategies: Differential behavioral sensitivity to negative and positive PEs should be considered when designing intervention programs. Measures relying on penalization of unwanted behavior may prove less effective in obese subjects than alternative approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
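    The asymmetry the authors describe is commonly captured in reinforcement-learning analyses by giving positive and negative prediction errors separate learning rates. The toy two-armed bandit simulation below (a generic illustration with invented parameters, not the paper's task or fitted model) shows how blunting the negative-PE learning rate degrades behavioral adaptation even though PEs themselves are computed normally:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_agent(alpha_pos, alpha_neg, n_trials=3000, p_reward=(0.8, 0.2)):
    """Two-armed bandit learner with asymmetric prediction-error (PE)
    learning rates: alpha_pos scales positive PEs, alpha_neg negative."""
    Q = np.zeros(2)
    correct = 0
    for _ in range(n_trials):
        p = np.exp(3 * Q) / np.exp(3 * Q).sum()   # softmax choice
        a = rng.choice(2, p=p)
        r = float(rng.random() < p_reward[a])
        pe = r - Q[a]                              # prediction error
        Q[a] += (alpha_pos if pe > 0 else alpha_neg) * pe
        correct += (a == 0)                        # arm 0 is the better arm
    return correct / n_trials

# A learner that under-weights negative PEs adapts less efficiently
full = run_agent(alpha_pos=0.3, alpha_neg=0.3)
blunted = run_agent(alpha_pos=0.3, alpha_neg=0.02)
print(f"symmetric learner:   {full:.2f} correct")
print(f"blunted negative PE: {blunted:.2f} correct")
```

With the negative-PE rate blunted, both action values drift upward and stay poorly differentiated, so choice accuracy drops — the computational analogue of intact PE coding in striatum but insufficient incorporation into behavior.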

  14. A Constrained Multi-Objective Learning Algorithm for Feed-Forward Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    M. Njah

    2017-06-01

    Full Text Available This paper proposes a new approach to address the optimal design of a Feed-forward Neural Network (FNN) based classifier. The originality of the proposed methodology, called CMOA, lies in the use of a new constraint handling technique based on a self-adaptive penalty procedure, in order to direct the entire search effort towards finding only Pareto optimal solutions that are acceptable. Neurons and connections of the FNN classifier are dynamically built during the learning process. The approach includes differential evolution to create new individuals and then keeps only the non-dominated ones as the basis for the next generation. The designed FNN classifier is applied to six binary classification benchmark problems obtained from the UCI repository, and the results indicate the advantages of the proposed approach over other existing multi-objective evolutionary neural network classifiers reported recently in the literature.

  15. Single-Iteration Learning Algorithm for Feed-Forward Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Cogswell, R.; Protopopescu, V.

    1999-07-31

    A new methodology for neural learning is presented, whereby only a single iteration is required to train a feed-forward network with near-optimal results. To this aim, a virtual input layer is added to the multi-layer architecture. The virtual input layer is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating-direction singular value decompositions is then used to determine precisely the inter-layer synaptic weights. This algorithm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network.
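
    The paper's alternating-direction SVD procedure is not reproduced here, but the key idea, determining inter-layer weights by a direct linear solve instead of iterative gradient descent, can be illustrated with a fixed random nonlinear hidden ("virtual") layer whose output weights come in closed form from the (ridge-regularized) normal equations. All sizes, seeds, and values below are illustrative.

```python
import math
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def hidden(x, W, b):
    """Fixed nonlinear 'virtual' layer: tanh of random projections."""
    return [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + bi)
            for ws, bi in zip(W, b)]

random.seed(1)
n_hidden = 8
W = [[random.gauss(0, 1) for _ in range(2)] for _ in range(n_hidden)]
b = [random.gauss(0, 1) for _ in range(n_hidden)]

X = [(0, 0), (0, 1), (1, 0), (1, 1)]   # XOR inputs
t = [0.0, 1.0, 1.0, 0.0]               # XOR targets
H = [hidden(x, W, b) for x in X]

# One linear solve of (H^T H + lambda I) w = H^T t replaces iterative training.
HtH = [[sum(H[k][i] * H[k][j] for k in range(len(H))) for j in range(n_hidden)]
       for i in range(n_hidden)]
for i in range(n_hidden):
    HtH[i][i] += 1e-6                  # tiny ridge term for numerical stability
Htt = [sum(H[k][i] * t[k] for k in range(len(H))) for i in range(n_hidden)]
w_out = solve(HtH, Htt)
preds = [sum(wi * hi for wi, hi in zip(w_out, h)) for h in H]
```

    The closed-form solve plays the role that the alternating SVDs play in the paper: the linear part of the network is fitted exactly in one pass.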

  16. A new backpropagation learning algorithm for layered neural networks with nondifferentiable units.

    Science.gov (United States)

    Oohori, Takahumi; Naganuma, Hidenori; Watanabe, Kazuhisa

    2007-05-01

    We propose a digital version of the backpropagation algorithm (DBP) for three-layered neural networks with nondifferentiable binary units. This approach feeds teacher signals to both the middle and output layers, whereas with a simple perceptron, they are given only to the output layer. The additional teacher signals enable the DBP to update the coupling weights not only between the middle and output layers but also between the input and middle layers. A neural network based on DBP learning is fast and easy to implement in hardware. Simulation results for several linearly nonseparable problems such as XOR demonstrate that the DBP performs favorably when compared to the conventional approaches. Furthermore, in large-scale networks, simulation results indicate that the DBP provides high performance.
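
    The DBP algorithm itself is not reproduced here; the sketch below only illustrates its central idea: once teacher signals are supplied to the middle layer as well as the output layer, each layer of nondifferentiable binary units can be trained independently with the ordinary perceptron rule. For XOR, the classic OR/NAND decomposition serves as a hypothetical hidden-layer teacher.

```python
def step(z):
    """Nondifferentiable binary threshold unit."""
    return 1 if z >= 0 else 0

def perceptron_train(samples, lr=0.25, epochs=20):
    """Train one binary threshold unit on (input, target) pairs."""
    w = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = step(sum(wi * xi for wi, xi in zip(w, x)) + bias)
            err = t - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            bias += lr * err
    return w, bias

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_t = [0, 1, 1, 0]
or_t = [0, 1, 1, 1]      # hypothetical teacher for hidden unit 1
nand_t = [1, 1, 1, 0]    # hypothetical teacher for hidden unit 2

w1, b1 = perceptron_train(list(zip(X, or_t)))
w2, b2 = perceptron_train(list(zip(X, nand_t)))

def hidden_out(x):
    h1 = step(sum(wi * xi for wi, xi in zip(w1, x)) + b1)
    h2 = step(sum(wi * xi for wi, xi in zip(w2, x)) + b2)
    return (h1, h2)

# The output unit is trained on the hidden activations (AND of OR and NAND).
H = [hidden_out(x) for x in X]
w3, b3 = perceptron_train(list(zip(H, xor_t)))
preds = [step(sum(wi * hi for wi, hi in zip(w3, hidden_out(x))) + b3) for x in X]
```

    With the hidden teachers in place, the linearly nonseparable XOR problem decomposes into three separable subproblems, which is why no gradient through the binary units is ever needed.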

  17. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.

    Science.gov (United States)

    Tanabe, Hiroki C; Honda, Manabu; Sadato, Norihiro

    2005-07-06

    To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

  18. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks

    Science.gov (United States)

    Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.

    2015-01-01

    The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate its efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies.
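
    The paper derives its own learning rule; as a generic stand-in for this family of rules, the sketch below shows the reward-modulated Hebbian form such rules typically take: the weight change is the product of presynaptic activity, postsynaptic activity, and a global reward prediction error. All names and values are illustrative, not taken from the paper.

```python
def reward_modulated_hebb(w, pre, post, reward, expected, lr=0.05):
    """Update weights by pre*post coincidence, gated by the reward PE."""
    delta = reward - expected                    # global reward prediction error
    return [wi + lr * delta * p * q
            for wi, (p, q) in zip(w, zip(pre, post))]

w = [0.1, 0.1, 0.1]
pre = [1.0, 0.0, 1.0]      # presynaptic activity per synapse
post = [1.0, 1.0, 0.0]     # postsynaptic activity per synapse
# Rewarded trial (reward larger than expected): the co-active synapse grows.
w_up = reward_modulated_hebb(w, pre, post, reward=1.0, expected=0.4)
# Unrewarded trial: the same synapse is weakened.
w_dn = reward_modulated_hebb(w, pre, post, reward=0.0, expected=0.4)
```

    Because only synapses with coincident pre- and postsynaptic activity change, a rule of this shape can selectively strengthen the feedforward, horizontal, and feedback pathways that carried the enhanced activity on rewarded trials.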

  19. Test-enhanced learning: the potential for testing to promote greater learning in undergraduate science courses.

    Science.gov (United States)

    Brame, Cynthia J; Biel, Rachel

    2015-01-01

    Testing within the science classroom is commonly used for both formative and summative assessment purposes to let the student and the instructor gauge progress toward learning goals. Research within cognitive science suggests, however, that testing can also be a learning event. We present summaries of studies that suggest that repeated retrieval can enhance long-term learning in a laboratory setting; various testing formats can promote learning; feedback enhances the benefits of testing; testing can potentiate further study; and benefits of testing are not limited to rote memory. Most of these studies were performed in a laboratory environment, so we also present summaries of experiments suggesting that the benefits of testing can extend to the classroom. Finally, we suggest opportunities that these observations raise for the classroom and for further research. © 2015 C. J. Brame and R. Biel. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  20. Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas.

    Science.gov (United States)

    Chang, P; Grinband, J; Weinberg, B D; Bardis, M; Khy, M; Cadena, G; Su, M-Y; Cha, S; Filippi, C G; Bota, D; Baldi, P; Poisson, L M; Jain, R; Chow, D

    2018-05-10

    The World Health Organization has recently placed new emphasis on the integration of genetic information for gliomas. While tissue sampling remains the criterion standard, noninvasive imaging techniques may provide complementary insight into clinically relevant genetic mutations. Our aim was to train a convolutional neural network to independently predict underlying molecular genetic mutation status in gliomas with high accuracy and identify the most predictive imaging features for each mutation. MR imaging data and molecular information were retrospectively obtained from The Cancer Imaging Archives for 259 patients with either low- or high-grade gliomas. A convolutional neural network was trained to classify isocitrate dehydrogenase 1 (IDH1) mutation status, 1p/19q codeletion, and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. Principal component analysis of the final convolutional neural network layer was used to extract the key imaging features critical for successful classification. Classification had high accuracy: IDH1 mutation status, 94%; 1p/19q codeletion, 92%; and MGMT promoter methylation status, 83%. Each genetic category was also associated with distinctive imaging features such as definition of tumor margins, T1 and FLAIR suppression, extent of edema, extent of necrosis, and textural features. Our results indicate that for The Cancer Imaging Archives dataset, machine-learning approaches allow classification of individual genetic mutations of both low- and high-grade gliomas. We show that relevant MR imaging features acquired from an added dimensionality-reduction technique demonstrate that neural networks are capable of learning key imaging components without prior feature selection or human-directed training. © 2018 by American Journal of Neuroradiology.

  1. Enhanced differentiation of neural stem cells to neurons and promotion of neurite outgrowth by oxygen-glucose deprivation.

    Science.gov (United States)

    Wang, Qin; Yang, Lin; Wang, Yaping

    2015-06-01

    Stroke has become the leading cause of mortality worldwide. Hypoxic or ischemic insults are crucial factors mediating the neural damage in the brain tissue of stroke patients. Neural stem cells (NSCs) have been recognized as a promising tool for the treatment of ischemic stroke and other neurodegenerative diseases due to their inducible pluripotency. In this study, we aimed to mimic cerebral hypoxic-ischemic injury in vitro using an oxygen-glucose deprivation (OGD) strategy, and to evaluate the effects of OGD on NSC neural differentiation as well as on neurite outgrowth of the differentiated cells. Our data showed that NSCs under short-term 2 h OGD treatment are able to maintain cell viability and the capability to form neurospheres. Importantly, this moderate OGD treatment promotes NSC differentiation to neurons and enhances the performance of the mature neuronal networks, accompanied by increased neurite outgrowth of differentiated neurons. However, long-term 6 h and 8 h OGD exposures in NSCs lead to decreased cell survival, reduced differentiation and diminished NSC-derived neurite outgrowth. The expression of neuron-specific microtubule-associated protein 2 (MAP-2) and growth associated protein 43 (GAP-43) is increased by short-term OGD treatments but suppressed by long-term OGD. Overall, our results demonstrate that short-term OGD exposure in vitro induces differentiation of NSCs while maintaining their proliferation and survival, providing valuable insights for adopting NSC-based therapy for ischemic stroke and other neurodegenerative disorders. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Age-related difference in the effective neural connectivity associated with probabilistic category learning

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Eun Jin; Cho, Sang Soo; Kim, Hee Jung; Bang, Seong Ae; Park, Hyun Soo; Kim, Yu Kyeong; Kim, Sang Eun [Seoul National Univ. College of Medicine, Seoul (Korea, Republic of)

    2007-07-01

    Although it is well known that explicit memory is affected by deleterious age-related changes in the brain, the effect of aging on implicit memory such as probabilistic category learning (PCL) is not clear. To identify the effect of aging on the neural interaction underlying successful PCL, we investigated the neural substrates of PCL and the age-related changes in the neural network between these brain regions. Twenty-three young (age, 25±2 y; 11 males) and 14 elderly (67±3 y; 7 males) healthy subjects underwent FDG PET during a resting state and a 150-trial weather prediction (WP) task. Correlations between the WP hit rates and regional glucose metabolism were assessed using SPM2 (P<0.05 uncorrected). For path analysis, seven brain regions (bilateral middle frontal gyri and putamen, left fusiform gyrus, anterior cingulate and right parahippocampal gyri) were selected based on the results of the correlation analysis. Model construction and path analysis were done with AMOS 5.0. The elderly had significantly lower total hit rates than the young (P<0.005). In the correlation analysis, both groups showed similar metabolic correlations in frontal and striatal areas, but correlations in the medial temporal lobe (MTL) differed by group. In path analysis, the constructed model of the functional networks was accepted (χ²(2) = 0.80, P=0.67) and proved to be significantly different between groups (χ²diff(37) = 142.47, P<0.005). Systematic comparisons of each path revealed that the frontal cross-callosal and frontal-to-parahippocampal connections were most responsible for the model differences (P<0.05). For successful PCL, the elderly recruit the basal ganglia implicit memory system, but MTL recruitment differs from that in the young. The inadequate MTL correlation pattern in the elderly may be caused by changes in the neural pathway related to explicit memory. These neural changes can explain the decreased performance of PCL in elderly subjects.

  3. Promoted neuronal differentiation after activation of alpha4/beta2 nicotinic acetylcholine receptors in undifferentiated neural progenitors.

    Directory of Open Access Journals (Sweden)

    Takeshi Takarada

    Full Text Available BACKGROUND: Neural progenitor is a generic term used for undifferentiated cell populations of neural stem, neuronal progenitor and glial progenitor cells with abilities for proliferation and differentiation. We have shown functional expression of ionotropic N-methyl-D-aspartate (NMDA) and gamma-aminobutyrate type-A receptors endowed to positively and negatively regulate subsequent neuronal differentiation in undifferentiated neural progenitors, respectively. In this study, we attempted to evaluate the possible functional expression of nicotinic acetylcholine receptors (nAChRs) by undifferentiated neural progenitors prepared from neocortex of embryonic rodent brains. METHODOLOGY/PRINCIPAL FINDINGS: Reverse transcription polymerase chain reaction analysis revealed mRNA expression of particular nAChR subunits in undifferentiated rat and mouse progenitors prepared before and after the culture with epidermal growth factor under floating conditions. Sustained exposure to nicotine significantly inhibited the formation of neurospheres composed of clustered proliferating cells and 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide reduction activity at a concentration range of 1 µM to 1 mM without affecting cell survival. In these rodent progenitors previously exposed to nicotine, marked promotion was invariably seen for subsequent differentiation into cells immunoreactive for a neuronal marker protein following the culture of dispersed cells under adherent conditions. Both effects of nicotine were significantly prevented by the heteromeric α4β2 nAChR subtype antagonists dihydro-β-erythroidine and 4-(5-ethoxy-3-pyridinyl)-N-methyl-(3E)-3-buten-1-amine, but not by the homomeric α7 nAChR subtype antagonist methyllycaconitine, in murine progenitors. Sustained exposure to nicotine preferentially increased the expression of Math1 among different basic helix-loop-helix proneural genes examined. In undifferentiated progenitors from embryonic mice

  4. Adolescent-specific patterns of behavior and neural activity during social reinforcement learning.

    Science.gov (United States)

    Jones, Rebecca M; Somerville, Leah H; Li, Jian; Ruberry, Erika J; Powers, Alisa; Mehta, Natasha; Dyke, Jonathan; Casey, B J

    2014-06-01

    Humans are sophisticated social beings. Social cues from others are exceptionally salient, particularly during adolescence. Understanding how adolescents interpret and learn from variable social signals can provide insight into the observed shift in social sensitivity during this period. The present study tested 120 participants between the ages of 8 and 25 years on a social reinforcement learning task where the probability of receiving positive social feedback was parametrically manipulated. Seventy-eight of these participants completed the task during fMRI scanning. Modeling of trial-by-trial learning showed that children and adults had higher positive learning rates than adolescents; adolescents also demonstrated less differentiation in their reaction times for peers who provided more positive feedback. Forming expectations about receiving positive social reinforcement correlated with neural activity within the medial prefrontal cortex and ventral striatum across age. Adolescents, unlike children and adults, showed greater insular activity during positive prediction error learning and increased activity in the supplementary motor cortex and the putamen when receiving positive social feedback regardless of the expected outcome, suggesting that peer approval may motivate adolescents toward action. While different amounts of positive social reinforcement enhanced learning in children and adults, all positive social reinforcement equally motivated adolescents. Together, these findings indicate that sensitivity to peer approval during adolescence goes beyond simple reinforcement theory accounts and suggest possible explanations for how peers may motivate adolescent behavior.

  5. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Directory of Open Access Journals (Sweden)

    Yoonsik Shim

    2016-10-01

    Full Text Available We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  6. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Science.gov (United States)

    Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil

    2016-10-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  7. Spiking neural networks for handwritten digit recognition-Supervised learning and network optimization.

    Science.gov (United States)

    Kulkarni, Shruti R; Rajendran, Bipin

    2018-07-01

    We demonstrate supervised learning in Spiking Neural Networks (SNNs) for the problem of handwritten digit recognition using the spike triggered Normalized Approximate Descent (NormAD) algorithm. Our network, which employs neurons operating at sparse biological spike rates below 300 Hz, achieves a classification accuracy of 98.17% on the MNIST test database with four times fewer parameters compared to the state-of-the-art. We present several insights from extensive numerical experiments regarding optimization of learning parameters and network configuration to improve its accuracy. We also describe a number of strategies to optimize the SNN for implementation in memory and energy constrained hardware, including approximations in computing the neuronal dynamics and reduced precision in storing the synaptic weights. Experiments reveal that even with 3-bit synaptic weights, the classification accuracy of the designed SNN does not degrade by more than 1% compared to the floating-point baseline. Further, the proposed SNN, which is trained based on precise spike timing information, outperforms an equivalent non-spiking artificial neural network (ANN) trained using back propagation, especially at low bit precision. Thus, our study shows the potential for realizing efficient neuromorphic systems that use spike based information encoding and learning for real-world applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
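
    The reduced-precision idea mentioned above can be made concrete with a minimal sketch: weights are mapped onto 2³ = 8 uniform levels and compared against the floating-point values. This is a generic uniform quantizer, not the authors' scheme; the weight values are illustrative, and the sketch assumes the weights are not all equal.

```python
def quantize(weights, bits=3):
    """Uniformly quantize weights onto 2**bits levels spanning their range."""
    levels = 2 ** bits
    w_min, w_max = min(weights), max(weights)
    step = (w_max - w_min) / (levels - 1)    # assumes w_max > w_min
    return [w_min + round((w - w_min) / step) * step for w in weights]

weights = [-0.9, -0.3, 0.05, 0.42, 0.77]     # illustrative synaptic weights
q = quantize(weights, bits=3)
step = (max(weights) - min(weights)) / (2 ** 3 - 1)
max_err = max(abs(a - b) for a, b in zip(weights, q))
# For a uniform quantizer, the per-weight error is bounded by half a step,
# which is why moderate bit widths cost so little accuracy.
```

    In hardware, each quantized weight can then be stored as a 3-bit index into the level table instead of a full floating-point word.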

  8. Adaptive Learning and Thinking Style to Improve E-Learning Environment Using Neural Network (ALTENN) Model

    OpenAIRE

    Dagez, Hanan Ettaher; Ambarka, Ali Elghali

    2015-01-01

    In recent years we have witnessed an increasingly heightened awareness of the potential benefits of adaptivity in e-learning. This has been mainly driven by the realization that the ideal of individualized learning (i.e., learning tailored to the specific requirements and preferences of the individual) cannot be achieved, especially at a “massive” scale, using traditional approaches. In e-learning, when the learning style of the student is not compatible with the teaching style of the teacher...

  9. Learning a Transferable Change Rule from a Recurrent Neural Network for Land Cover Change Detection

    Directory of Open Access Journals (Sweden)

    Haobo Lyu

    2016-06-01

    Full Text Available When exploited in remote sensing analysis, a reliable change rule with transfer ability can detect changes accurately and be applied widely. However, in practice, the complexity of land cover changes makes it difficult to use only one change rule or change feature learned from a given multi-temporal dataset to detect any other new target images without applying other learning processes. In this study, we consider the design of an efficient change rule having transferability to detect both binary and multi-class changes. The proposed method relies on an improved Long Short-Term Memory (LSTM) model to acquire and record the change information of long-term sequence remote sensing data. In particular, a core memory cell is utilized to learn the change rule from the information concerning binary changes or multi-class changes. Three gates are utilized to control the input, output and update of the LSTM model for optimization. In addition, the learned rule can be applied to detect changes and transfer the change rule from one learned image to another new target multi-temporal image. In this study, binary experiments, transfer experiments and multi-class change experiments are exploited to demonstrate the superiority of our method. Three contributions of this work can be summarized as follows: (1) the proposed method can learn an effective change rule to provide reliable change information for multi-temporal images; (2) the learned change rule has good transferability for detecting changes in new target images without any extra learning process, and the new target images should have a multi-spectral distribution similar to that of the training images; and (3) to the authors’ best knowledge, this is the first time that deep learning in recurrent neural networks is exploited for change detection. In addition, under the framework of the proposed method, changes can be detected under both binary detection and multi-class change detection.
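
    To make the gating described above concrete, here is a minimal single-cell LSTM step in which the input, forget ("update"), and output gates control what enters, persists in, and leaves the memory cell. The paper's improved LSTM differs in its details; all parameters below are illustrative scalars.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One step of a scalar LSTM cell with parameter dict p."""
    i = sigmoid(p["wi"] * x + p["ui"] * h_prev + p["bi"])    # input gate
    f = sigmoid(p["wf"] * x + p["uf"] * h_prev + p["bf"])    # forget/update gate
    o = sigmoid(p["wo"] * x + p["uo"] * h_prev + p["bo"])    # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h_prev + p["bg"])  # candidate value
    c = f * c_prev + i * g                                   # memory-cell update
    h = o * math.tanh(c)                                     # gated output
    return h, c

# All twelve parameters set to the same illustrative value.
p = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in (1.0, -1.0, 0.5):                                   # a toy input sequence
    h, c = lstm_step(x, h, c, p)
```

    In the change-detection setting, `x` would be a per-pixel spectral feature at one acquisition date, and the persistent cell state `c` is what lets the learned change rule accumulate evidence across the temporal sequence.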

  10. The neural coding of expected and unexpected monetary performance outcomes: dissociations between active and observational learning.

    Science.gov (United States)

    Bellebaum, C; Jokisch, D; Gizewski, E R; Forsting, M; Daum, I

    2012-02-01

    Successful adaptation to the environment requires the learning of stimulus-response-outcome associations. Such associations can be learned actively by trial and error or by observing the behaviour and accompanying outcomes in other persons. The present study investigated similarities and differences in the neural mechanisms of active and observational learning from monetary feedback using functional magnetic resonance imaging. Two groups of 15 subjects each - active and observational learners - participated in the experiment. On every trial, active learners chose between two stimuli and received monetary feedback. Each observational learner observed the choices and outcomes of one active learner. Learning performance as assessed via active test trials without feedback was comparable between groups. Different activation patterns were observed for the processing of unexpected vs. expected monetary feedback in active and observational learners, particularly for positive outcomes. Activity for unexpected vs. expected reward was stronger in the right striatum in active learning, while activity in the hippocampus was bilaterally enhanced in observational and reduced in active learning. Modulation of activity by prediction error (PE) magnitude was observed in the right putamen in both types of learning, whereas PE related activations in the right anterior caudate nucleus and in the medial orbitofrontal cortex were stronger for active learning. The striatum and orbitofrontal cortex thus appear to link reward stimuli to own behavioural reactions and are less strongly involved when the behavioural outcome refers to another person's action. Alternative explanations such as differences in reward value between active and observational learning are also discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Alireza Alemi

    2015-08-01

    Full Text Available Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the
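
    The three-threshold rule described in the abstract can be transcribed almost directly: for synapses with active inputs, the sign of the weight change depends on where the postsynaptic local field falls relative to three thresholds, with no plasticity outside the outer two. The threshold values and learning rate below are illustrative, not those used in the paper.

```python
def three_threshold_update(w, x, field, th_low, th_mid, th_high, lr=0.1):
    """Return updated weights for one neuron, given its local field."""
    if field > th_high or field < th_low:
        return w[:]                            # outside the band: no plasticity
    sign = 1.0 if field > th_mid else -1.0     # potentiation vs. depression
    return [wi + lr * sign * xi for wi, xi in zip(w, x)]  # active inputs only

w = [0.5, 0.5, 0.5]
x = [1, 0, 1]            # inputs 0 and 2 are active, input 1 is silent
# Field in the upper part of the band: active synapses are potentiated.
w_pot = three_threshold_update(w, x, field=0.8, th_low=0.0, th_mid=0.5, th_high=1.0)
# Field below the intermediate threshold: active synapses are depressed.
w_dep = three_threshold_update(w, x, field=0.2, th_low=0.0, th_mid=0.5, th_high=1.0)
# Field above the highest threshold: weights are left unchanged.
w_fix = three_threshold_update(w, x, field=1.5, th_low=0.0, th_mid=0.5, th_high=1.0)
```

    All quantities in the update are locally available to the neuron, which is the property that lets the rule run without an explicit supervisory error signal.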

  12. Neural circuitry of abdominal pain-related fear learning and reinstatement in irritable bowel syndrome.

    Science.gov (United States)

    Icenhour, A; Langhorst, J; Benson, S; Schlamann, M; Hampel, S; Engler, H; Forsting, M; Elsenbruch, S

    2015-01-01

    Altered pain anticipation likely contributes to disturbed central pain processing in chronic pain conditions like irritable bowel syndrome (IBS), but the learning processes shaping the expectation of pain remain poorly understood. We assessed the neural circuitry mediating the formation, extinction, and reactivation of abdominal pain-related memories in IBS patients compared to healthy controls (HC) in a differential fear conditioning paradigm. During fear acquisition, predictive visual cues (CS(+)) were paired with rectal distensions (US), while control cues (CS(-)) were presented unpaired. During extinction, only CSs were presented. Subsequently, memory reactivation was assessed with a reinstatement procedure involving unexpected USs. Using functional magnetic resonance imaging, group differences in neural activation to CS(+) vs CS(-) were analyzed, along with skin conductance responses (SCR), CS valence, CS-US contingency, state anxiety, salivary cortisol, and alpha-amylase activity. The contribution of anxiety symptoms was addressed in covariance analyses. Fear acquisition was altered in IBS, as indicated by more accurate contingency awareness, greater CS-related valence change, and enhanced CS(+)-induced differential activation of prefrontal cortex and amygdala. IBS patients further revealed enhanced differential cingulate activation during extinction and greater differential hippocampal activation during reinstatement. Anxiety affected neural responses during memory formation and reinstatement. Abdominal pain-related fear learning and memory processes are altered in IBS, mediated by amygdala, cingulate cortex, prefrontal areas, and hippocampus. Enhanced reinstatement may contribute to hypervigilance and central pain amplification, especially in anxious patients. Preventing a 'relapse' of learned fear utilizing extinction-based interventions may be a promising treatment goal in IBS. © 2014 John Wiley & Sons Ltd.

  13. Evaluating the negative or valuing the positive? Neural mechanisms supporting feedback-based learning across development.

    Science.gov (United States)

    van Duijvenvoorde, Anna C K; Zanolie, Kiki; Rombouts, Serge A R B; Raijmakers, Maartje E J; Crone, Eveline A

    2008-09-17

    How children learn from positive and negative performance feedback lies at the foundation of successful learning and is therefore of great importance for educational practice. In this study, we used functional magnetic resonance imaging (fMRI) to examine the neural developmental changes related to feedback-based learning when performing a rule search and application task. Behavioral results from three age groups (8-9, 11-13, and 18-25 years of age) demonstrated that, compared with adults, 8- to 9-year-old children performed disproportionally more inaccurately after receiving negative feedback relative to positive feedback. Additionally, imaging data pointed toward a qualitative difference in how children and adults use performance feedback. That is, dorsolateral prefrontal cortex and superior parietal cortex were more active after negative feedback for adults, but after positive feedback for children (8-9 years of age). For 11- to 13-year-olds, these regions did not show differential feedback sensitivity, suggesting that the transition occurs around this age. Pre-supplementary motor area/anterior cingulate cortex, in contrast, was more active after negative feedback in both 11- to 13-year-olds and adults, but not 8- to 9-year-olds. Together, the current data show that cognitive control areas are differentially engaged during feedback-based learning across development. Adults engage these regions after signals of response adjustment (i.e., negative feedback). Young children engage these regions after signals of response continuation (i.e., positive feedback). The neural activation patterns found in 11- to 13-year-olds indicate a transition around this age toward an increased influence of negative feedback on performance adjustment. This is the first developmental fMRI study to compare qualitative changes in brain activation during feedback learning across distinct stages of development.

  14. Social Interaction Affects Neural Outcomes of Sign Language Learning As a Foreign Language in Adults.

    Science.gov (United States)

    Yusa, Noriaki; Kim, Jungho; Koizumi, Masatoshi; Sugiura, Motoaki; Kawashima, Ryuta

    2017-01-01

Children naturally acquire a language in social contexts where they interact with their caregivers. Indeed, research shows that social interaction facilitates lexical and phonological development at the early stages of child language acquisition. It is not clear, however, whether the relationship between social interaction and learning applies to adult second language acquisition of syntactic rules. Does learning second language syntactic rules through social interactions with a native speaker or without such interactions impact behavior and the brain? The current study aims to answer this question. Adult Japanese participants learned a new foreign language, Japanese Sign Language (JSL), either through a native deaf signer or via DVDs. Neural correlates of acquiring new linguistic knowledge were investigated using functional magnetic resonance imaging (fMRI). The participants in each group were indistinguishable in terms of their behavioral data after the instruction. The fMRI data, however, revealed significant differences in the neural activities between the two groups. Significant activations in the left inferior frontal gyrus (IFG) were found for the participants who learned JSL through interactions with the native signer. In contrast, no cortical activation change in the left IFG was found for the group who experienced the same visual input for the same duration via the DVD presentation. Given that the left IFG is involved in the syntactic processing of language, spoken or signed, learning through social interactions resulted in an fMRI signature typical of native speakers: activation of the left IFG. Thus, broadly speaking, the availability of communicative interaction is necessary for second language acquisition, and this is reflected in observable changes in the brain.

  15. Web Applications That Promote Learning Communities in Today's Online Classrooms

    Science.gov (United States)

    Reigle, Rosemary R.

    2015-01-01

    The changing online learning environment requires that instructors depend less on the standard tools built into most educational learning platforms and turn their focus to use of Open Educational Resources (OERs) and free or low-cost commercial applications. These applications permit new and more efficient ways to build online learning communities…

  16. Radial basis function neural networks with sequential learning MRAN and its applications

    CERN Document Server

    Sundararajan, N; Wei Lu Ying

    1999-01-01

    This book presents in detail the newly developed sequential learning algorithm for radial basis function neural networks, which realizes a minimal network. This algorithm, created by the authors, is referred to as Minimal Resource Allocation Networks (MRAN). The book describes the application of MRAN in different areas, including pattern recognition, time series prediction, system identification, control, communication and signal processing. Benchmark problems from these areas have been studied, and MRAN is compared with other algorithms. In order to make the book self-contained, a review of t
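The growth criterion at the heart of MRAN can be illustrated in a few lines: a new radial basis unit is added only when the current prediction error and the distance to the nearest existing center both exceed thresholds; otherwise the existing units are adapted. The NumPy sketch below is a minimal illustration of that sequential scheme, using a plain gradient step on the output weights in place of MRAN's extended Kalman filter update and its pruning strategy; all thresholds and names are illustrative, not the book's exact settings.

```python
import numpy as np

def rbf_output(x, net):
    """Output of the RBF network: bias plus weighted Gaussian responses."""
    if not net["centers"]:
        return net["bias"]
    c = np.array(net["centers"])
    w = np.array(net["widths"])
    a = np.array(net["weights"])
    phi = np.exp(-np.sum((x - c) ** 2, axis=1) / (2 * w ** 2))
    return net["bias"] + float(a @ phi)

def mran_step(x, y, net, e_min=0.05, d_min=0.3, kappa=0.9, lr=0.05):
    """One sequential step: grow a unit if the sample is novel, otherwise
    adapt the output weights of the existing units by a gradient step."""
    err = y - rbf_output(x, net)
    d_near = min((np.linalg.norm(x - c) for c in net["centers"]), default=np.inf)
    if abs(err) > e_min and d_near > d_min:         # novelty criteria met: grow
        net["centers"].append(x.copy())
        net["widths"].append(kappa * d_near if np.isfinite(d_near) else 1.0)
        net["weights"].append(err)                  # new unit cancels the error
    elif net["centers"]:                            # otherwise adapt weights
        c = np.array(net["centers"])
        w = np.array(net["widths"])
        phi = np.exp(-np.sum((x - c) ** 2, axis=1) / (2 * w ** 2))
        for i in range(len(net["weights"])):
            net["weights"][i] += lr * err * phi[i]

rng = np.random.default_rng(0)
net = {"centers": [], "widths": [], "weights": [], "bias": 0.0}
for _ in range(500):
    x = rng.uniform(-1.0, 1.0, size=1)
    mran_step(x, float(np.sin(3.0 * x[0])), net)    # learn y = sin(3x) online
```

Because growth requires both conditions at once, the network stays minimal: densely covered regions stop spawning units and only their weights keep adapting.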

  17. Effect of signal noise on the learning capability of an artificial neural network

    International Nuclear Information System (INIS)

    Vega, J.J.; Reynoso, R.; Calvet, H. Carrillo

    2009-01-01

Digital Pulse Shape Analysis (DPSA) by artificial neural networks (ANN) is becoming an important tool to extract relevant information from digitized signals in different areas. In this paper, we present systematic evidence of how the concomitant noise that distorts the signals or patterns to be identified by an ANN sets limits on its learning capability. We also present evidence that explains overtraining as a competition between the relevant pattern features on one side and the signal noise on the other, as the main cause defining the shape of the error surface in weight space and, consequently, determining the steepest-descent path that controls the ANN adaptation process.
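The central point of the abstract, that noise in the training patterns bounds what can be learned, can be demonstrated with even the simplest trainable model. This hedged NumPy sketch uses a single linear unit rather than a full ANN (all parameters are illustrative): the training error on clean targets approaches zero, while target noise imposes an irreducible error floor near the noise power.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_mse(noise_std, n=200, epochs=300, lr=0.1):
    """Fit y = w*x + b by gradient descent on noise-corrupted targets and
    return the final training MSE; its floor is set by the noise power."""
    x = rng.uniform(-1.0, 1.0, n)
    y = 2.0 * x + 0.5 + rng.normal(0.0, noise_std, n)  # true pattern + noise
    w = b = 0.0
    for _ in range(epochs):
        resid = y - (w * x + b)
        w += lr * 2.0 * np.mean(resid * x)             # gradient step on w
        b += lr * 2.0 * np.mean(resid)                 # gradient step on b
    return float(np.mean((y - (w * x + b)) ** 2))

mse_clean = train_mse(0.0)   # learnable pattern only
mse_noisy = train_mse(0.5)   # same pattern, noisy targets
```

With `noise_std=0.5` the residual error cannot drop below roughly the noise variance (0.25), no matter how long training runs, which is the limit the paper documents for pattern identification.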

  18. Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances

    International Nuclear Information System (INIS)

    Fiete, Ila R.; Seung, H. Sebastian

    2006-01-01

We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of 'empiric' synapses driven by random spike trains from an external source.
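The idea of reading a gradient out of objective-function fluctuations under random perturbations can be sketched abstractly. The NumPy snippet below is a generic weight-perturbation estimator in the spirit of the method, not the conductance-based spiking version the paper develops; the objective, perturbation size, and trial count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def perturbation_gradient(f, w, eps=1e-3, n_trials=2000):
    """Estimate grad f(w) by correlating random +/- perturbations of the
    parameters with the resulting fluctuation of the objective."""
    g = np.zeros_like(w)
    for _ in range(n_trials):
        xi = rng.choice([-1.0, 1.0], size=w.shape)  # random perturbation
        delta = f(w + eps * xi) - f(w)              # measured fluctuation
        g += (delta / eps) * xi                     # correlate and accumulate
    return g / n_trials

f = lambda w: float(np.sum(w ** 2))   # toy objective with known gradient 2w
w0 = np.array([1.0, -2.0, 0.5])
g_est = perturbation_gradient(f, w0)  # converges to [2, -4, 1]
```

Averaging over many perturbation trials trades variance for biological plausibility: no backward pass is needed, only the ability to perturb and to measure the objective.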

  19. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    Science.gov (United States)

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

The recent "deep learning revolution" in artificial neural networks has had a strong impact and widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of the distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.
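As a concrete reference point for the RBM building block mentioned above, a single contrastive-divergence (CD-1) update can be sketched in NumPy. This is a generic binary RBM trained on toy patterns, not the authors' visual-space setup; layer sizes, learning rate, and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, a, b, v0, lr=0.1):
    """One CD-1 step for a binary RBM with weights W, visible bias a,
    hidden bias b, on a batch of visible data v0."""
    ph0 = sigmoid(v0 @ W + b)                  # positive phase: hidden probs
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + a)                # one Gibbs step: reconstruction
    ph1 = sigmoid(pv1 @ W + b)                 # negative-phase hidden probs
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n   # data stats minus model stats
    a += lr * np.mean(v0 - pv1, axis=0)
    b += lr * np.mean(ph0 - ph1, axis=0)
    return W, a, b

# toy data: two repeated binary patterns
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
W = 0.01 * rng.standard_normal((4, 6))
a, b = np.zeros(4), np.zeros(6)
for _ in range(500):
    W, a, b = cd1_update(W, a, b, data)
recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)   # mean-field reconstruction
```

The bidirectional weights are the point of contrast with an autoencoder: the same `W` runs the visible-to-hidden and hidden-to-visible passes, and learning follows sampled statistics rather than backpropagated errors.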

  20. Do Hostile School Environments Promote Social Deviance by Shaping Neural Responses to Social Exclusion?

    Science.gov (United States)

    Schriber, Roberta A; Rogers, Christina R; Ferrer, Emilio; Conger, Rand D; Robins, Richard W; Hastings, Paul D; Guyer, Amanda E

    2018-03-01

The present study examined adolescents' neural responses to social exclusion as a mediator of past exposure to a hostile school environment (HSE) and later social deviance, and whether family connectedness buffered these associations. Participants (166 Mexican-origin adolescents, 54.4% female) reported on their HSE exposure and family connectedness across Grades 9-11. Six months later, neural responses to social exclusion were measured. Finally, social deviance was self-reported in Grades 9 and 12. The HSE-social deviance link was mediated by greater reactivity to social exclusion in the subgenual anterior cingulate cortex, a region from the social pain network also implicated in social susceptibility. However, youths with stronger family bonds were protected from this neurobiologically mediated path. These findings suggest a complex interplay of risk and protective factors that impact adolescent behavior through the brain. © 2018 Society for Research on Adolescence.

  1. Design Of the Approximation Function of a Pedometer based on Artificial Neural Network for the Healthy Life Style Promotion in Diabetic Patients

    OpenAIRE

    Vega Corona, Antonio; Zárate Banda, Magdalena; Barron Adame, Jose Miguel; Martínez Celorio, René Alfredo; Andina de la Fuente, Diego

    2008-01-01

The present study describes the design of an Artificial Neural Network to synthesize the Approximation Function of a Pedometer for Healthy Life Style Promotion. Experimentally, the approximation function is synthesized using three basic low-cost digital pedometers; these pedometers were calibrated against an advanced pedometer that calculates calories consumed and computes distance travelled from a personal stride input. The synthesized approximation function by means of the designed neural...

  2. Learning based in projects to promote interdisciplinarity in Secondary School

    Directory of Open Access Journals (Sweden)

    Daniela Boff

    2016-02-01

Full Text Available Project-based learning is an active learning strategy that helps break with the paradigm of traditional teaching methods. The student involved in a learning proposal that follows PiBL is no longer passive and becomes the main actor in his or her own teaching-learning process. Within this learning strategy, the teacher becomes a mediator between theory and practice, and the different subjects interact with one another to develop a topic common to all areas, because the learning environment is naturally interdisciplinary. This learning strategy was applied during a workshop held with primary and secondary school teachers to help them bring the approach into the classroom, contributing experiences and ideas toward interdisciplinary projects.

  3. Deep learning with convolutional neural networks for EEG decoding and visualization.

    Science.gov (United States)

    Schirrmeister, Robin Tibor; Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio

    2017-11-01

    Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
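The cropped training strategy mentioned above amounts to cutting many overlapping time windows out of each trial and letting every crop inherit the trial's label, multiplying the effective number of training examples. A minimal sketch of the cropping itself (channel count, crop length, and stride are illustrative, not the paper's exact settings):

```python
import numpy as np

def make_crops(trial, crop_len, stride):
    """Cut one EEG trial (channels x time) into overlapping time crops;
    in cropped training, each crop becomes a training example that
    shares the trial's label."""
    n_ch, n_t = trial.shape
    starts = range(0, n_t - crop_len + 1, stride)
    return np.stack([trial[:, s:s + crop_len] for s in starts])

rng = np.random.default_rng(4)
trial = rng.standard_normal((22, 1000))   # e.g. 22 channels, 1000 samples
crops = make_crops(trial, crop_len=500, stride=100)   # (6, 22, 500)
```

At test time, per-crop predictions for a trial are typically averaged to yield the trial-level decision.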

  5. Global and local missions of cAMP signaling in neural plasticity, learning and memory

    Directory of Open Access Journals (Sweden)

    Daewoo eLee

    2015-08-01

Full Text Available The fruit fly Drosophila melanogaster has been a popular model to study cAMP signaling and resultant behaviors due to its powerful genetic approaches. All molecular components (AC, PDE, PKA, CREB, etc.) essential for cAMP signaling have been identified in the fly. Among them, adenylyl cyclase (AC) gene rutabaga and phosphodiesterase (PDE) gene dunce have been intensively studied to understand the role of cAMP signaling. Interestingly, these two mutant genes were originally identified on the basis of associative learning deficits. This commentary summarizes findings on the role of cAMP in Drosophila neuronal excitability, synaptic plasticity and memory. It mainly focuses on two distinct mechanisms (global versus local) regulating excitatory and inhibitory synaptic plasticity related to cAMP homeostasis. This dual regulatory role of cAMP is to increase the strength of excitatory neural circuits on one hand, but to act locally on postsynaptic GABA receptors to decrease inhibitory synaptic plasticity on the other. Thus, the action of cAMP could result in a global increase in the neural circuit excitability and memory. Implications of this cAMP signaling related to drug discovery for neural diseases are also described.

  6. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks.

    Science.gov (United States)

    Xia, Peng; Hu, Jie; Peng, Yinghong

    2017-10-25

    A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals that are influenced by all kinds of factors. In order to overcome the negative effects of variability in signals, the proposed model employs the deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed to time-frequency frames as the input to the model. The limb movement is estimated by the model that is trained with the gradient descent and backpropagation procedure. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs can improve the model performance compared with using CNNs alone. The model of deep architecture is promising in EMG decoding and optimization of network structures can increase the accuracy and robustness. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
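The preprocessing step described, turning raw EMG into time-frequency frames, can be sketched with a short-time Fourier transform in NumPy. Window length and hop size here are illustrative; the paper does not specify this exact implementation.

```python
import numpy as np

def emg_tf_frames(signal, frame_len=64, hop=32):
    """Short-time Fourier magnitude frames of a 1-D EMG channel: the kind
    of time-frequency input fed to the CNN-RNN model."""
    window = np.hanning(frame_len)
    starts = range(0, len(signal) - frame_len + 1, hop)
    frames = np.stack([signal[s:s + frame_len] * window for s in starts])
    # magnitude spectrum per frame: (n_frames, frame_len // 2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

rng = np.random.default_rng(5)
emg = rng.standard_normal(1024)   # stand-in for one EMG channel
tf = emg_tf_frames(emg)
```

Stacking such frames over time gives the CNN a spatial (frequency) axis per frame while the RNN consumes the frame sequence, matching the division of labor the abstract describes.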

  7. Statistical learning problem of artificial neural network to control roofing process

    Directory of Open Access Journals (Sweden)

    Lapidus Azariy

    2017-01-01

Full Text Available Software developed on the basis of artificial neural networks (ANN) is now being actively implemented in construction companies to support decision-making in the organization and management of construction processes. ANN learning is the main stage of its development. A key question for supervised learning is how many training examples are needed to approximate the true relationship between network inputs and output with the desired accuracy. Designing the ANN architecture is also related to the learning problem known as the "curse of dimensionality". This problem is important for the study of construction process management because of the difficulty of obtaining training data from construction sites. In previous studies the authors designed a 4-layer feedforward ANN with a 12-5-4-1 unit configuration to approximate the estimation and prediction of the roofing process. This paper presents the statistical learning side of the created ANN with a simple error-minimization algorithm; the sample size for efficient training and the confidence interval of the network outputs are defined. In conclusion, the authors predict successful ANN learning in a large construction company within a short space of time.
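For reference, the 12-5-4-1 feedforward architecture mentioned above can be sketched as a forward pass in NumPy. The weights here are random placeholders and tanh is an assumed hidden activation; the paper does not publish its trained parameters or transfer functions.

```python
import numpy as np

rng = np.random.default_rng(6)

def forward(x, layers):
    """Forward pass of a feedforward ANN given a list of (W, b) layers."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.tanh(x)        # assumed hidden-layer nonlinearity
    return x

sizes = [12, 5, 4, 1]             # the 12-5-4-1 configuration
layers = [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]
y = forward(rng.standard_normal((3, 12)), layers)   # 3 samples -> (3, 1)
```

Counting parameters of this configuration ((12+1)*5 + (5+1)*4 + (4+1)*1 = 94) also makes the paper's sample-size question concrete: the training set must be large relative to this count.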

  8. A Meta-Analysis Suggests Different Neural Correlates for Implicit and Explicit Learning.

    Science.gov (United States)

    Loonis, Roman F; Brincat, Scott L; Antzoulatos, Evan G; Miller, Earl K

    2017-10-11

    A meta-analysis of non-human primates performing three different tasks (Object-Match, Category-Match, and Category-Saccade associations) revealed signatures of explicit and implicit learning. Performance improved equally following correct and error trials in the Match (explicit) tasks, but it improved more after correct trials in the Saccade (implicit) task, a signature of explicit versus implicit learning. Likewise, error-related negativity, a marker for error processing, was greater in the Match (explicit) tasks. All tasks showed an increase in alpha/beta (10-30 Hz) synchrony after correct choices. However, only the implicit task showed an increase in theta (3-7 Hz) synchrony after correct choices that decreased with learning. In contrast, in the explicit tasks, alpha/beta synchrony increased with learning and decreased thereafter. Our results suggest that explicit versus implicit learning engages different neural mechanisms that rely on different patterns of oscillatory synchrony. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Neural networks involved in learning lexical-semantic and syntactic information in a second language.

    Science.gov (United States)

    Mueller, Jutta L; Rueschemeyer, Shirley-Ann; Ono, Kentaro; Sugiura, Motoaki; Sadato, Norihiro; Nakamura, Akinori

    2014-01-01

    The present study used functional magnetic resonance imaging (fMRI) to investigate the neural correlates of language acquisition in a realistic learning environment. Japanese native speakers were trained in a miniature version of German prior to fMRI scanning. During scanning they listened to (1) familiar sentences, (2) sentences including a novel sentence structure, and (3) sentences containing a novel word while visual context provided referential information. Learning-related decreases of brain activation over time were found in a mainly left-hemispheric network comprising classical frontal and temporal language areas as well as parietal and subcortical regions and were largely overlapping for novel words and the novel sentence structure in initial stages of learning. Differences occurred at later stages of learning during which content-specific activation patterns in prefrontal, parietal and temporal cortices emerged. The results are taken as evidence for a domain-general network supporting the initial stages of language learning which dynamically adapts as learners become proficient.

  10. The impact of iconic gestures on foreign language word learning and its neural substrate.

    Science.gov (United States)

    Macedonia, Manuela; Müller, Karsten; Friederici, Angela D

    2011-06-01

    Vocabulary acquisition represents a major challenge in foreign language learning. Research has demonstrated that gestures accompanying speech have an impact on memory for verbal information in the speakers' mother tongue and, as recently shown, also in foreign language learning. However, the neural basis of this effect remains unclear. In a within-subjects design, we compared learning of novel words coupled with iconic and meaningless gestures. Iconic gestures helped learners to significantly better retain the verbal material over time. After the training, participants' brain activity was registered by means of fMRI while performing a word recognition task. Brain activations to words learned with iconic and with meaningless gestures were contrasted. We found activity in the premotor cortices for words encoded with iconic gestures. In contrast, words encoded with meaningless gestures elicited a network associated with cognitive control. These findings suggest that memory performance for newly learned words is not driven by the motor component as such, but by the motor image that matches an underlying representation of the word's semantics. Copyright © 2010 Wiley-Liss, Inc.

  11. A method for medulloblastoma tumor differentiation based on convolutional neural networks and transfer learning

    Science.gov (United States)

    Cruz-Roa, Angel; Arévalo, John; Judkins, Alexander; Madabhushi, Anant; González, Fabio

    2015-12-01

Convolutional neural networks (CNN) have been very successful at addressing different computer vision tasks thanks to their ability to learn image representations directly from large amounts of labeled data. Features learned from a dataset can be used to represent images from a different dataset via an approach called transfer learning. In this paper we apply transfer learning to the challenging task of medulloblastoma tumor differentiation. We compare two different CNN models which were previously trained in two different domains (natural and histopathology images). The first CNN is a state-of-the-art approach in computer vision, a large and deep CNN with 16 layers, the Visual Geometry Group (VGG) CNN. The second (IBCa-CNN) is a 2-layer CNN trained for invasive breast cancer tumor classification. Both CNNs are used as visual feature extractors of histopathology image regions of anaplastic and non-anaplastic medulloblastoma tumor from digitized whole-slide images. The features from the two models are used, separately, to train a softmax classifier to discriminate between anaplastic and non-anaplastic medulloblastoma image regions. Experimental results show that the transfer learning approach produces competitive results in comparison with state-of-the-art approaches for IBCa detection. Results also show that features extracted from the IBCa-CNN have better performance in comparison with features extracted from the VGG-CNN. The former obtains 89.8% while the latter obtains 76.6% in terms of average accuracy.
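The final stage described, a softmax classifier trained on fixed CNN features, can be sketched in NumPy on synthetic stand-in features. The data, dimensions, and hyperparameters below are illustrative, not the medulloblastoma features or the authors' training setup.

```python
import numpy as np

rng = np.random.default_rng(7)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(X, y, n_classes, lr=0.5, epochs=300):
    """Multinomial logistic (softmax) head on fixed feature vectors,
    the role the frozen CNNs play as feature extractors."""
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]               # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W + b)
        W -= lr * X.T @ (P - Y) / len(X)   # cross-entropy gradient step
        b -= lr * (P - Y).mean(axis=0)
    return W, b

# synthetic "features" for two classes (stand-ins for CNN activations)
X = np.vstack([rng.normal(0, 1, (50, 8)) + 2, rng.normal(0, 1, (50, 8)) - 2])
y = np.array([0] * 50 + [1] * 50)
W, b = train_softmax(X, y, 2)
acc = float(np.mean((X @ W + b).argmax(axis=1) == y))
```

Because only this small head is trained, the comparison between the VGG and IBCa feature extractors reduces to which one produces feature vectors this classifier separates best.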

  12. Wavelet-enhanced convolutional neural network: a new idea in a deep learning paradigm.

    Science.gov (United States)

    Savareh, Behrouz Alizadeh; Emami, Hassan; Hajiabadi, Mohamadreza; Azimi, Seyed Majid; Ghafoori, Mahyar

    2018-05-29

Manual brain tumor segmentation is a challenging task that requires the use of machine learning techniques. One of the machine learning techniques that has received much attention is the convolutional neural network (CNN). The performance of a CNN can be enhanced by combining it with other data analysis tools such as the wavelet transform. In this study, one well-known implementation of the CNN, the fully convolutional network (FCN), was used for brain tumor segmentation and its architecture was enhanced by the wavelet transform. In this combination, the wavelet transform was used as a complementary and enhancing tool for the CNN in brain tumor segmentation. Comparing the performance of the basic FCN architecture against the wavelet-enhanced form revealed a remarkable superiority of the enhanced architecture in brain tumor segmentation tasks. Enhancing tools such as the wavelet transform and other mathematical functions can thus improve the performance of CNNs in image processing tasks such as segmentation and classification.
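To make the "complementary tool" concrete, one level of a 2-D Haar wavelet transform, the simplest wavelet that could feed such a combination (the abstract does not state which wavelet family was used), can be written directly in NumPy: it splits an image into an approximation subband and three detail subbands.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform: approximation (LL) plus
    horizontal (LH), vertical (HL), and diagonal (HH) detail subbands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row-pair averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row-pair details
    ll = (a[0::2] + a[1::2]) / 2.0
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)           # toy 4x4 "image"
ll, lh, hl, hh = haar2d(img)                  # each subband is 2x2
```

The detail subbands expose edge-like structure at a coarser scale, which is the kind of complementary information a wavelet-enhanced architecture can hand to the convolutional layers.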

  13. Intelligent Image Recognition System for Marine Fouling Using Softmax Transfer Learning and Deep Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    C. S. Chin

    2017-01-01

Full Text Available The control of biofouling on marine vessels is challenging and costly. Early detection, before hull performance is significantly affected, is desirable, especially if “grooming” is an option. Here, a system is described to detect marine fouling at an early stage of development. In this study, an image of fouling can be transferred wirelessly via a mobile network for analysis. The proposed system utilizes transfer learning and a deep convolutional neural network (CNN) to perform image recognition on the fouling image by classifying the detected fouling species and the density of fouling on the surface. Transfer learning using Google’s Inception V3 model with a Softmax final layer was carried out on a fouling database of 10 categories and 1825 images. Experimental results gave acceptable accuracies for fouling detection and recognition.

  14. Histone Deacetylase (HDAC) Inhibitors - emerging roles in neuronal memory, learning, synaptic plasticity and neural regeneration.

    Science.gov (United States)

    Ganai, Shabir Ahmad; Ramadoss, Mahalakshmi; Mahadevan, Vijayalakshmi

    2016-01-01

    Epigenetic regulation of neuronal signalling through histone acetylation dictates transcription programs that govern neuronal memory, plasticity and learning paradigms. Histone Acetyl Transferases (HATs) and Histone Deacetylases (HDACs) are antagonistic enzymes that regulate gene expression through acetylation and deacetylation of histone proteins around which DNA is wrapped inside a eukaryotic cell nucleus. The epigenetic control of HDACs and the cellular imbalance between HATs and HDACs dictate disease states and have been implicated in muscular dystrophy, loss of memory, neurodegeneration and autistic disorders. Altering gene expression profiles through inhibition of HDACs is now emerging as a powerful technique in therapy. This review presents evolving applications of HDAC inhibitors as potential drugs in neurological research and therapy. Mechanisms that govern their expression profiles in neuronal signalling, plasticity and learning will be covered. Promising and exciting possibilities of HDAC inhibitors in memory formation, fear conditioning, ischemic stroke and neural regeneration have been detailed.

  15. Biological oscillations for learning walking coordination: dynamic recurrent neural network functionally models physiological central pattern generator.

    Science.gov (United States)

    Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy

    2013-01-01

    The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.
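The training targets described, sinusoids built from the first three harmonics of the elevation angles, can be sketched as a Fourier reconstruction. The gait signal below is synthetic; in the study the harmonics come from measured thigh, shank, and foot elevation angles over a stride.

```python
import numpy as np

def harmonic_targets(signal, n_harmonics=3):
    """Reconstruct a periodic gait signal from its first few Fourier
    harmonics: the sinusoid targets used to train the DRNN."""
    n = len(signal)
    coeffs = np.fft.rfft(signal) / n
    recon = np.full(n, coeffs[0].real)        # DC component (mean)
    t = np.arange(n)
    for k in range(1, n_harmonics + 1):       # add harmonics 1..n_harmonics
        recon += 2 * (coeffs[k].real * np.cos(2 * np.pi * k * t / n)
                      - coeffs[k].imag * np.sin(2 * np.pi * k * t / n))
    return recon

# synthetic stand-in for an elevation angle over one stride (degrees)
t = np.linspace(0.0, 1.0, 100, endpoint=False)
angle = 20 * np.sin(2 * np.pi * t) + 5 * np.sin(4 * np.pi * t + 0.3)
recon = harmonic_targets(angle)
```

Keeping only the leading harmonics retains the frequency and amplitude features the study found most effective, while discarding the out-of-band components that degraded the network's physiological plausibility.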

  16. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images.

    Science.gov (United States)

    Cheng, Phillip M; Malhi, Harshawn S

    2017-04-01

The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant. These results suggest that transfer learning with deep convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
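The transfer-learning setup in this record (convolutional layers frozen as fixed feature extractors, only the fully connected head retrained) can be sketched in miniature. The toy two-class data and the frozen random ReLU layer below are stand-ins for a pretrained CNN, not the authors' CaffeNet/VGGNet pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data standing in for ultrasound image crops (invented).
X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 10)),
               rng.normal(+2.0, 1.0, size=(100, 10))])
y = np.array([0] * 100 + [1] * 100)

# "Pretrained" feature extractor: a fixed (frozen) random ReLU layer.
W_frozen = rng.normal(size=(10, 32))
def features(x):
    return np.maximum(x @ W_frozen, 0.0)   # weights are never updated

# Retrain only the classification head (logistic regression) on top.
F = features(X)
w, b = np.zeros(32), 0.0
for _ in range(300):
    z = np.clip(F @ w + b, -30.0, 30.0)    # clip for numerical stability
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.1 * (F.T @ (p - y)) / len(y)    # gradient step on the head only
    b -= 0.1 * float(np.mean(p - y))

pred = (np.clip(features(X) @ w + b, -30.0, 30.0) > 0.0).astype(int)
acc = float(np.mean(pred == y))
```

The design choice mirrors the abstract: the frozen layer plays the role of the pretrained convolutional stack, and all learning happens in the small head, which is why the approach works on modest data sets.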

  17. Optical implementation of neural learning algorithms based on cross-gain modulation in a semiconductor optical amplifier

    Science.gov (United States)

    Li, Qiang; Wang, Zhi; Le, Yansi; Sun, Chonghui; Song, Xiaojia; Wu, Chongqing

    2016-10-01

Neuromorphic engineering has a wide range of applications in the fields of machine learning, pattern recognition, adaptive control, etc. Photonics, characterized by its high speed, wide bandwidth, low power consumption and massive parallelism, is an ideal way to realize ultrafast spiking neural networks (SNNs). Synaptic plasticity is believed to be critical for learning, memory and development in neural circuits. Experimental results have shown that changes of synapses are highly dependent on the relative timing of pre- and postsynaptic spikes. Synaptic plasticity in which presynaptic spikes preceding postsynaptic spikes result in strengthening, while the opposite timing results in weakening, is called the antisymmetric spike-timing-dependent plasticity (STDP) learning rule; plasticity with the opposite dependence under the same conditions follows the antisymmetric anti-STDP learning rule. We proposed and experimentally demonstrated an optical implementation of neural learning algorithms that can achieve both the antisymmetric STDP and anti-STDP learning rules, based on cross-gain modulation (XGM) within a single semiconductor optical amplifier (SOA). The width and height of the potentiation and depression windows can be controlled by adjusting the injection current of the SOA, to mimic the biological antisymmetric STDP and anti-STDP learning rules more realistically. As the injection current increases, the width of the depression and potentiation windows decreases and their height increases, due to the shorter recovery time and higher gain under a stronger injection current. Based on the demonstrated optical STDP circuit, ultrafast learning in optical SNNs can be realized.
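The antisymmetric STDP and anti-STDP windows described here are commonly modeled with exponentials; a minimal sketch follows, where the amplitude and time-constant parameters play the role that the SOA injection current tunes in the experiment (the values are invented defaults):

```python
import math

def stdp(dt_ms, a_plus=1.0, a_minus=1.0, tau_ms=20.0):
    """Antisymmetric STDP window: pre-before-post (dt_ms > 0) potentiates,
    post-before-pre (dt_ms < 0) depresses. a_plus/a_minus set the window
    height and tau_ms its width, the quantities the injection current tunes."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

def anti_stdp(dt_ms, **kw):
    """Antisymmetric anti-STDP: the same window with the sign flipped."""
    return -stdp(dt_ms, **kw)
```

Raising the "current" in this sketch would mean decreasing `tau_ms` (narrower window) while increasing `a_plus`/`a_minus` (taller window), matching the trend reported in the abstract.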

  18. Mountain Plains Learning Experience Guide: Marketing. Course: Advertising and Promotion.

    Science.gov (United States)

    Egan, B.

    One of thirteen individualized courses included in a marketing curriculum, this course covers the planning and writing of advertisements and organizing sales promotion and public relation activities in wholesale and retail businesses. The course is comprised of two units: (1) Advertising Fundamentals and (2) Promotion. Each unit begins with a Unit…

  19. Promoting Liberal Learning in a Capstone Accounting Course

    Science.gov (United States)

    Ahlawat, Sunita; Miller, Gerald; Shahid, Abdus

    2012-01-01

    This paper describes our efforts to integrate liberal learning principles in a capstone course within the overwhelmingly career-focused discipline of accountancy. Our approach was based on the belief that business and liberal learning courses are complementary, rather than competitive, elements of a well-rounded education. The ability to deal with…

  20. 20 Ways To Promote Brain-Based Teaching and Learning.

    Science.gov (United States)

    Prigge, Debra J.

    2002-01-01

    Based on current knowledge about cognitive processes, this article presents strategies for preparing the learner, managing the environment to motivate students, gaining and keeping learner attention, and increasing memory and recall by making learning personally relevant to students. Resources are listed for brain-based teaching and learning. (CR)

  1. Promoting Residential Renewable Energy via Peer-to-Peer Learning

    Science.gov (United States)

    Heiskanen, Eva; Nissilä, Heli; Tainio, Pasi

    2017-01-01

    Peer-to-peer learning is gaining increasing attention in nonformal community-based environmental education. This article evaluates a novel modification of a concept for peer-to-peer learning about residential energy solutions (Open Homes). We organized collective "Energy Walks" visiting several homes with novel energy solutions and…

  2. Software scaffolds to promote regulation during scientific inquiry learning

    NARCIS (Netherlands)

    Manlove, S.A.; Lazonder, Adrianus W.; de Jong, Anthonius J.M.

    2007-01-01

This research addresses issues in the design of online scaffolds for regulation within inquiry learning environments. The learning environment in this study included a physics simulation, data analysis tools, and a model editor for students to create runnable models. A regulative support tool called…

  3. Ferulic acid promotes survival and differentiation of neural stem cells to prevent gentamicin-induced neuronal hearing loss.

    Science.gov (United States)

    Gu, Lintao; Cui, Xinhua; Wei, Wei; Yang, Jia; Li, Xuezhong

    2017-11-15

Neural stem cells (NSCs) have exhibited promising potential in therapies against neuronal hearing loss. Ferulic acid (FA) has been widely reported to enhance neurogenic differentiation of different stem cells. We investigated the role of FA in promoting NSC transplant therapy to prevent gentamicin-induced neuronal hearing loss. NSCs were isolated from mouse cochlear tissues to establish in vitro culture, which were then treated with FA. The survival and differentiation of NSCs were evaluated. Subsequently, neurite outgrowth and excitability of the in vitro neuronal network were assessed. Gentamicin was used to induce neuronal hearing loss in mice, in the presence and absence of FA, followed by assessments of auditory brainstem response (ABR) and distortion product otoacoustic emission (DPOAE) amplitude. FA promoted survival, neurosphere formation and differentiation of NSCs, as well as neurite outgrowth and excitability of the in vitro neuronal network. Furthermore, FA restored ABR threshold shifts and DPOAE in a gentamicin-induced neuronal hearing loss mouse model in vivo. Our data, for the first time, support the potential therapeutic efficacy of FA in promoting survival and differentiation of NSCs to prevent gentamicin-induced neuronal hearing loss. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. A neural network-based exploratory learning and motor planning system for co-robots

    Directory of Open Access Journals (Sweden)

    Byron V Galbraith

    2015-07-01

Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or learning by doing, an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
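The motor-babbling idea in this record (issue random motor commands, observe the sensory consequences, fit an internal model, then invert it for planning) can be sketched with a toy linear plant. The Jacobian, dimensions, and sample counts are invented for illustration and do not describe the Calliope:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unknown "plant": hand displacement is a linear function of joint velocities.
J_true = rng.normal(size=(2, 4))        # 2-D hand motion, 4 joints (invented)

# Motor babbling: issue random joint velocities and observe the hand.
V = rng.normal(size=(4, 200))           # 200 babbling trials
D = J_true @ V                          # observed hand displacements

# Fit the internal forward model from the babbled data (least squares).
J_hat = D @ np.linalg.pinv(V)

# Motor planning: invert the learned model to reach a goal displacement.
goal = np.array([0.3, -0.2])
v_cmd = np.linalg.pinv(J_hat) @ goal
achieved = J_true @ v_cmd
```

The real system learns a nonlinear, multimodal mapping with a neural network, but the loop is the same: unsupervised exploration builds the model, and goal directives are satisfied by inverting it.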

  5. An H(∞) control approach to robust learning of feedforward neural networks.

    Science.gov (United States)

    Jing, Xingjian

    2011-09-01

A novel H(∞) robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially when the output data changes rapidly with respect to the input or is corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing linear matrix inequality (LMI) optimization techniques to achieve a predefined H(∞) "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H(∞)-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method. Copyright © 2011 Elsevier Ltd. All rights reserved.
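The full H(∞)/LMI design is beyond a short sketch, but the trade-off it formalizes, that the learning step size governs how much output noise leaks into the weight estimate, can be illustrated with a scalar LMS example (all parameters invented; this is not the paper's algorithm):

```python
import random

def lms_tail_mse(mu, steps=5000, w_true=2.0, noise_std=0.5, seed=4):
    """Scalar LMS identification of y = w_true * x + noise; returns the
    mean squared weight error over the final 1000 steps."""
    rnd = random.Random(seed)
    w, tail = 0.0, []
    for i in range(steps):
        x = rnd.gauss(0.0, 1.0)
        y = w_true * x + rnd.gauss(0.0, noise_std)
        e = y - w * x                   # estimation error
        w += mu * e * x                 # gradient step of size mu
        if i >= steps - 1000:
            tail.append((w - w_true) ** 2)
    return sum(tail) / len(tail)

mse_cautious = lms_tail_mse(mu=0.01)    # strong noise attenuation, slow
mse_aggressive = lms_tail_mse(mu=0.3)   # fast but noise-sensitive
```

The LMI machinery in the paper chooses the learning parameters so that a prescribed attenuation level is guaranteed, rather than picking the step size by hand as done here.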

  6. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    Science.gov (United States)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

Reduced Order Modeling (ROM) can be used as a surrogate for prohibitively expensive simulations to model flow behavior over long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial-intelligence-based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short-Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches will be elucidated and further developments discussed.
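The POD-plus-time-coefficient-prediction pipeline can be sketched on a toy traveling wave; here a one-step linear predictor stands in for the LSTM, and the data, grid, and retained rank are invented for illustration:

```python
import numpy as np

nx, nt, dt = 64, 120, 0.05
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
t = np.arange(nt) * dt
X = np.sin(x[:, None] + 3.0 * t[None, :])   # snapshot matrix (space x time)

# POD via SVD: columns of U are spatial modes, rows of S @ Vt are the
# time coefficients of those modes.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 2                                       # retained modes
modes = U[:, :r]
a = np.diag(s[:r]) @ Vt[:r]                 # time coefficients, shape (r, nt)

# Stand-in for the LSTM: a one-step linear predictor a_{t+1} = M a_t,
# fitted by least squares on the training coefficients.
M = a[:, 1:] @ np.linalg.pinv(a[:, :-1])
a_pred = M @ a[:, :-1]
err = float(np.max(np.abs(a_pred - a[:, 1:])))
```

For this rank-2 wave the linear map is exact; the point of the LSTM in the study is to handle the nonlinear coefficient dynamics of real flows, where no such linear map exists.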

  7. Comparing the neural basis of monetary reward and cognitive feedback during information-integration category learning.

    Science.gov (United States)

    Daniel, Reka; Pollmann, Stefan

    2010-01-06

    The dopaminergic system is known to play a central role in reward-based learning (Schultz, 2006), yet it was also observed to be involved when only cognitive feedback is given (Aron et al., 2004). Within the domain of information-integration category learning, in which information from several stimulus dimensions has to be integrated predecisionally (Ashby and Maddox, 2005), the importance of contingent feedback is well established (Maddox et al., 2003). We examined the common neural correlates of reward anticipation and prediction error in this task. Sixteen subjects performed two parallel information-integration tasks within a single event-related functional magnetic resonance imaging session but received a monetary reward only for one of them. Similar functional areas including basal ganglia structures were activated in both task versions. In contrast, a single structure, the nucleus accumbens, showed higher activation during monetary reward anticipation compared with the anticipation of cognitive feedback in information-integration learning. Additionally, this activation was predicted by measures of intrinsic motivation in the cognitive feedback task and by measures of extrinsic motivation in the rewarded task. Our results indicate that, although all other structures implicated in category learning are not significantly affected by altering the type of reward, the nucleus accumbens responds to the positive incentive properties of an expected reward depending on the specific type of the reward.

  8. A neural network-based exploratory learning and motor planning system for co-robots.

    Science.gov (United States)

    Galbraith, Byron V; Guenther, Frank H; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.

  9. D.E.E.P. Learning: Promoting Informal STEM Learning through Ocean Research Simulation Games

    Science.gov (United States)

    Simms, E.; Rohrlick, D.; Layman, C.; Peach, C. L.; Orcutt, J. A.; Keen, C. S.; Matthews, J.; Nsf Ooi-Ci Education; Public Engagement Team

    2010-12-01

    It is generally recognized that interactive digital games have the potential to promote the development of valuable learning and life skills, including data processing, decision-making, critical thinking, planning, communication and collaboration (Kirriemuir and MacFarlane, 2006). But the research and development of educational games, and the study of the educational value of interactive games in general, have lagged far behind the same efforts for games created for the purpose of entertainment. Our group is attempting to capitalize on the facts that games are now played in 67% of American households (ESA, 2010), and across a broad range of ages, by developing effective and engaging simulation games that promote Science, Technology, Engineering and Mathematics (STEM) literacy in informal science education institutions (ISEIs; e.g., aquariums, museums, science centers). In particular, we are developing games based on the popular Microsoft Xbox360 gaming platform and the free Microsoft XNA game development kit, which engage ISEI visitors in the exploration and understanding of the deep-sea environment. Known as Deep-sea Extreme Environment Pilot (D.E.E.P.), the games place players in the role of piloting a remotely-operated vehicle (ROV) to complete science-based objectives associated with the exploration of ocean observing systems and hydrothermal vent environments. In addition to creating a unique educational product, our efforts are intended to identify 1) the key elements of a successful STEM-based simulation game experience in an informal science education institution, and 2) which aspects of game design (e.g., challenge, curiosity, fantasy, personal recognition) are most effective at maximizing both learning and enjoyment. We will share our progress to date, including formative assessment results from testing the game prototypes at Birch Aquarium at Scripps, and discuss the potential benefits and challenges to interactive gaming as a tool to support STEM

  10. Spatiotemporal neural characterization of prediction error valence and surprise during reward learning in humans.

    Science.gov (United States)

    Fouragnan, Elsa; Queirazza, Filippo; Retzler, Chris; Mullinger, Karen J; Philiastides, Marios G

    2017-07-06

Reward learning depends on accurate reward associations with potential choices. These associations can be attained with reinforcement learning mechanisms using a reward prediction error (RPE) signal (the difference between actual and expected rewards) for updating future reward expectations. Despite an extensive body of literature on the influence of RPE on learning, little has been done to investigate the potentially separate contributions of RPE valence (positive or negative) and surprise (absolute degree of deviation from expectations). Here, we coupled single-trial electroencephalography with simultaneously acquired fMRI, during a probabilistic reversal-learning task, to offer evidence of temporally overlapping but largely distinct spatial representations of RPE valence and surprise. Electrophysiological variability in RPE valence correlated with activity in regions of the human reward network promoting approach or avoidance learning. Electrophysiological variability in RPE surprise correlated primarily with activity in regions of the human attentional network controlling the speed of learning. Crucially, despite the largely separate spatial extent of these representations, our EEG-informed fMRI approach uniquely revealed a linear superposition of the two RPE components in a smaller network encompassing visuo-mnemonic and reward areas. Activity in this network was further predictive of stimulus value updating, indicating a comparable contribution of both signals to reward learning.
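The RPE decomposition used here (valence = sign of the error, surprise = its magnitude) comes from a standard delta-rule update; a minimal sketch with an invented 80%-reward task:

```python
import random

def td_update(value, reward, alpha=0.1):
    """Delta-rule value update; rpe carries both components of interest:
    valence = sign(rpe), surprise = abs(rpe)."""
    rpe = reward - value
    return value + alpha * rpe, rpe

rnd = random.Random(1)
value = 0.0
for _ in range(1000):
    reward = 1.0 if rnd.random() < 0.8 else 0.0   # 80%-rewarded option
    value, rpe = td_update(value, reward)
```

After learning, the value hovers near the true reward probability, so an omitted reward produces a negative, high-surprise RPE while a delivered reward produces a positive, low-surprise one, exactly the two axes the study dissociates.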

  11. The neural stem cell fate determinant TLX promotes tumorigenesis and genesis of cells resembling glioma stem cells.

    Science.gov (United States)

    Park, Hyo-Jung; Kim, Jun-Kyum; Jeon, Hye-Min; Oh, Se-Yeong; Kim, Sung-Hak; Nam, Do-Hyun; Kim, Hyunggee

    2010-11-01

    A growing body of evidence indicates that deregulation of stem cell fate determinants is a hallmark of many types of malignancies. The neural stem cell fate determinant TLX plays a pivotal role in neurogenesis in the adult brain by maintaining neural stem cells. Here, we report a tumorigenic role of TLX in brain tumor initiation and progression. Increased TLX expression was observed in a number of glioma cells and glioma stem cells, and correlated with poor survival of patients with gliomas. Ectopic expression of TLX in the U87MG glioma cell line and Ink4a/Arf-deficient mouse astrocytes (Ink4a/Arf(-/-) astrocytes) induced cell proliferation with a concomitant increase in cyclin D expression, and accelerated foci formation in soft agar and tumor formation in in vivo transplantation assays. Furthermore, overexpression of TLX in Ink4a/Arf(-/-) astrocytes inhibited cell migration and invasion and promoted neurosphere formation and Nestin expression, which are hallmark characteristics of glioma stem cells, under stem cell culture conditions. Our results indicate that TLX is involved in glioma stem cell genesis and represents a potential therapeutic target for this type of malignancy.

  12. Learning to Promote Health at an Emergency Care Department: Identifying Expansive and Restrictive Conditions

    Science.gov (United States)

    Gustavsson, Maria; Ekberg, Kerstin

    2015-01-01

    This article reports on the findings of a planned workplace health promotion intervention, and the aim is to identify conditions that facilitated or restricted the learning to promote health at an emergency care department in a Swedish hospital. The study had a longitudinal design, with interviews before and after the intervention and follow-up…

  13. Promoting readiness to practice: which learning activities promote competence and professional identity for student social workers during practice learning?

    OpenAIRE

    Roulston, Audrey; Cleak, Helen; Vreugdenhil, Anthea

    2016-01-01

    Practice learning is integral to the curriculum for qualifying social work students. Accreditation standards require regular student supervision and exposure to specific learning activities. Most agencies offer high quality placements but organisational cutbacks may affect supervision and restrict the development of competence and professional identity. Undergraduate social work students in Northern Ireland universities (n = 396) were surveyed about the usefulness of the learning activities t...

  14. The Neural Feedback Response to Error As a Teaching Signal for the Motor Learning System

    Science.gov (United States)

    Shadmehr, Reza

    2016-01-01

When we experience an error during a movement, we update our motor commands to partially correct for this error on the next trial. How does experience of error produce the improvement in the subsequent motor commands? During the course of an erroneous reaching movement, proprioceptive and visual sensory pathways not only sense the error, but also engage feedback mechanisms, resulting in corrective motor responses that continue until the hand arrives at its goal. One possibility is that this feedback response is co-opted by the learning system and used as a template to improve performance on the next attempt. Here we used electromyography (EMG) to compare neural correlates of learning and feedback to test the hypothesis that the feedback response to error acts as a template for learning. We designed a task in which mixtures of error-clamp and force-field perturbation trials were used to deconstruct EMG time courses into error-feedback and learning components. We observed that the error-feedback response was composed of excitation of some muscles, and inhibition of others, producing a complex activation/deactivation pattern during the reach. Despite this complexity, across muscles the learning response was consistently a scaled version of the error-feedback response, but shifted 125 ms earlier in time. Across people, individuals who produced a greater feedback response to error also learned more from error. This suggests that the feedback response to error serves as a teaching signal for the brain. Individuals who learn faster have a better teacher in their feedback control system. SIGNIFICANCE STATEMENT Our sensory organs transduce errors in behavior. To improve performance, we must generate better motor commands. How does the nervous system transform an error in sensory coordinates into better motor commands in muscle coordinates? Here we show that when an error occurs during a movement, the reflexes transform the sensory representation of error into motor commands.
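The reported relationship, a learning response that is a scaled copy of the feedback response shifted about 125 ms earlier, can be illustrated on synthetic waveforms; every signal parameter below is invented and none of it is the study's EMG data:

```python
import numpy as np

fs = 1000                                    # samples per second (invented)
t = np.arange(0, 0.6, 1.0 / fs)

# A feedback-like EMG burst: excitation followed by inhibition.
feedback = (np.exp(-((t - 0.30) / 0.03) ** 2)
            - 0.5 * np.exp(-((t - 0.38) / 0.04) ** 2))

# Hypothetical learning response: scaled by 0.6 and shifted 125 ms earlier.
shift = int(0.125 * fs)
learning = 0.6 * np.roll(feedback, -shift)

# Recover the timing and gain relationship from the two waveforms alone.
lag = int(np.argmax(feedback)) - int(np.argmax(learning))
gain = float(learning.max() / feedback.max())
```

Comparing peak times and amplitudes recovers the built-in 125 ms lead and 0.6 scaling, the same kind of template relationship the EMG analysis quantifies.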

  15. Neural changes associated to procedural learning and automatization process in Developmental Coordination Disorder and/or Developmental Dyslexia.

    Science.gov (United States)

    Biotteau, Maëlle; Péran, Patrice; Vayssière, Nathalie; Tallet, Jessica; Albaret, Jean-Michel; Chaix, Yves

    2017-03-01

Recent theories hypothesize that procedural learning may support the frequent overlap between neurodevelopmental disorders. The neural circuitry supporting procedural learning includes, among others, cortico-cerebellar and cortico-striatal loops. Alteration of these loops may account for the frequent comorbidity between Developmental Coordination Disorder (DCD) and Developmental Dyslexia (DD). The aim of our study was to investigate cerebral changes due to the learning and automatization of a sequence learning task in children with DD, or DCD, or both disorders. fMRI on 48 children (aged 8-12) with DD, DCD or DD + DCD was used to explore their brain activity during procedural tasks, performed either after two weeks of training or in the early stage of learning. Firstly, our results indicate that all children were able to perform the task with the same level of automaticity, but recruit different brain processes to achieve the same performance. Secondly, our fMRI results do not appear to confirm Nicolson and Fawcett's model. The neural correlates recruited for procedural learning by the DD and the comorbid groups are very close, while the DCD group presents distinct characteristics. These findings provide a promising direction for understanding the neural mechanisms associated with procedural learning in neurodevelopmental disorders and their comorbidity. Published by Elsevier Ltd.

  16. Promoting Constructive Activities that Support Vicarious Learning during Computer-Based Instruction

    Science.gov (United States)

    Gholson, Barry; Craig, Scotty D.

    2006-01-01

    This article explores several ways computer-based instruction can be designed to support constructive activities and promote deep-level comprehension during vicarious learning. Vicarious learning, discussed in the first section, refers to knowledge acquisition under conditions in which the learner is not the addressee and does not physically…

  17. A design-based approach with vocational teachers to promote self-regulated learning

    NARCIS (Netherlands)

    Jossberger, Helen; Brand-Gruwel, Saskia; Van de Wiel, Margje; Boshuizen, Els

    2011-01-01

    Jossberger, H., Brand-Gruwel, S., Van de Wiel, M., & Boshuizen, H. P. A. (2011, August). A design-based approach with vocational teachers to promote self-regulated learning. Presentation at the 14th European Conference for Research on Learning and Instruction (EARLI), Exeter, England.

  18. Promoting Female Students' Learning Motivation towards Science by Exercising Hands-On Activities

    Science.gov (United States)

    Wen-jin, Kuo; Chia-ju, Liu; Shi-an, Leou

    2012-01-01

    The purpose of this study is to design different hands-on science activities and investigate which activities could better promote female students' learning motivation towards science. This study conducted three types of science activities which contains nine hands-on activities, an experience scale and a learning motivation scale for data…

  19. Promoting Collaboration in a Project-Based E-Learning Context

    Science.gov (United States)

    Papanikolaou, Kyparisia; Boubouka, Maria

    2011-01-01

    In this paper we investigate the value of collaboration scripts for promoting metacognitive knowledge in a project-based e-learning context. In an empirical study, 82 students worked individually and in groups on a project using the e-learning environment MyProject, in which the life cycle of a project is inherent. Students followed a particular…

  20. White blood cells identification system based on convolutional deep neural learning networks.

    Science.gov (United States)

    Shahin, A I; Guo, Yanhui; Amin, K M; Sharawi, Amr A

    2017-11-16

White blood cell (WBC) differential counting yields valuable information about human health and disease. Currently developed automated cell morphology equipment performs differential counts based on blood smear image analysis. Previous identification systems for WBCs consist of successive dependent stages: pre-processing, segmentation, feature extraction, feature selection, and classification. There is a real need to employ deep learning methodologies so that the performance of previous WBC identification systems can be increased. Classifying small limited datasets through deep learning systems is a major challenge and should be investigated. In this paper, we propose a novel identification system for WBCs based on deep convolutional neural networks. Two methodologies based on transfer learning are followed: transfer learning based on deep activation features, and fine-tuning of existing deep networks. Deep activation features are extracted from several pre-trained networks and employed in a traditional identification system. Moreover, a novel end-to-end convolutional deep architecture called "WBCsNet" is proposed and built from scratch. Finally, a limited balanced WBC dataset classification is performed through WBCsNet as a pre-trained network. During our experiments, three different public WBC datasets (2551 images) containing 5 healthy WBC types have been used. The overall system accuracy achieved by the proposed WBCsNet is 96.1%, which is higher than that of the different transfer learning approaches and of the previous traditional identification system. We also present feature visualizations of the WBCsNet activations, which reflect a higher response than the pre-trained ones. In conclusion, a novel WBC identification system based on deep learning is proposed, and the high-performance WBCsNet can be employed as a pre-trained network. Copyright © 2017. Published by Elsevier B.V.
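The "deep activation features" methodology described here (read activations out of a frozen pretrained network and feed them to a traditional classifier) can be sketched in miniature. The random frozen layers and toy three-class data below stand in for a pretrained network and real WBC images:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-ins for three WBC classes (flattened 8x8 patches, invented).
X = np.vstack([rng.normal(m, 1.0, size=(60, 64)) for m in (-2.0, 0.0, 2.0)])
labels = np.repeat([0, 1, 2], 60)

# Frozen "pretrained" network: two fixed random ReLU layers whose
# activations serve as deep features (no weight is ever updated).
W1 = rng.normal(size=(64, 48)) / 8.0
W2 = rng.normal(size=(48, 16)) / 7.0
def deep_features(x):
    return np.maximum(np.maximum(x @ W1, 0.0) @ W2, 0.0)

# Traditional classifier on top: nearest class centroid in feature space.
F = deep_features(X)
centroids = np.stack([F[labels == c].mean(axis=0) for c in range(3)])
dists = ((F[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
acc = float(np.mean(np.argmin(dists, axis=1) == labels))
```

The separation of concerns matches the abstract: feature extraction is done once by the frozen network, and only the lightweight classifier has to be fitted on the small, limited dataset.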

  1. Learning how scientists work: experiential research projects to promote cell biology learning and scientific process skills.

    Science.gov (United States)

    DebBurman, Shubhik K

    2002-01-01

    Facilitating not only the mastery of sophisticated subject matter, but also the development of process skills is an ongoing challenge in teaching any introductory undergraduate course. To accomplish this goal in a sophomore-level introductory cell biology course, I require students to work in groups and complete several mock experiential research projects that imitate the professional activities of the scientific community. I designed these projects as a way to promote process skill development within content-rich pedagogy and to connect text-based and laboratory-based learning with the world of contemporary research. First, students become familiar with one primary article from a leading peer-reviewed journal, which they discuss by means of PowerPoint-based journal clubs and journalism reports highlighting public relevance. Second, relying mostly on primary articles, they investigate the molecular basis of a disease, compose reviews for an in-house journal, and present seminars in a public symposium. Last, students author primary articles detailing investigative experiments conducted in the lab. This curriculum has been successful in both quarter-based and semester-based institutions. Student attitudes toward their learning were assessed quantitatively with course surveys. Students consistently reported that these projects significantly lowered barriers to primary literature, improved research-associated skills, strengthened traditional pedagogy, and helped accomplish course objectives. Such approaches are widely suited for instructors seeking to integrate process with content in their courses.

  2. Embodied learning of a generative neural model for biological motion perception and inference.

    Science.gov (United States)

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V

    2015-01-01

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.

  3. Improved Neural Signal Classification in a Rapid Serial Visual Presentation Task Using Active Learning.

    Science.gov (United States)

    Marathe, Amar R; Lawhern, Vernon J; Wu, Dongrui; Slayback, David; Lance, Brent J

    2016-03-01

    The application space for brain-computer interface (BCI) technologies is rapidly expanding with improvements in technology. However, most real-time BCIs require extensive individualized calibration prior to use, and systems often have to be recalibrated to account for changes in the neural signals due to a variety of factors including changes in human state, the surrounding environment, and task conditions. Novel approaches to reduce calibration time or effort will dramatically improve the usability of BCI systems. Active Learning (AL) is an iterative semi-supervised learning technique for learning in situations in which data may be abundant, but labels for the data are difficult or expensive to obtain. In this paper, we apply AL to a simulated BCI system for target identification using data from a rapid serial visual presentation (RSVP) paradigm to minimize the amount of training samples needed to initially calibrate a neural classifier. Our results show AL can produce similar overall classification accuracy with significantly less labeled data (in some cases less than 20%) when compared to alternative calibration approaches. In fact, AL classification performance matches performance of 10-fold cross-validation (CV) in over 70% of subjects when training with less than 50% of the data. To our knowledge, this is the first work to demonstrate the use of AL for offline electroencephalography (EEG) calibration in a simulated BCI paradigm. While AL itself is not often amenable for use in real-time systems, this work opens the door to alternative AL-like systems that are more amenable for BCI applications and thus enables future efforts for developing highly adaptive BCI systems.
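    The core of the Active Learning loop described above (query only the samples the current classifier is least certain about, instead of labelling everything) can be sketched in plain Python. The 1-D pool, the threshold "classifier", and the query budget below are toy stand-ins, not the paper's RSVP/EEG pipeline:

```python
import random

random.seed(0)

# Toy pool: 1-D points; true label is 1 if x > 0.5. The "classifier" here
# is just a learned threshold (midpoint between the closest labelled
# examples of each class) -- a stand-in for the neural classifier.
pool = [random.random() for _ in range(200)]
labels = {x: int(x > 0.5) for x in pool}

def fit_threshold(labelled):
    ones = [x for x, y in labelled if y == 1]
    zeros = [x for x, y in labelled if y == 0]
    if not ones or not zeros:
        return 0.5
    return (min(ones) + max(zeros)) / 2

def uncertainty_sampling(pool, budget=10):
    # Seed with two random labelled points, then repeatedly query the
    # unlabelled point closest to the current decision threshold.
    labelled = [(x, labels[x]) for x in random.sample(pool, 2)]
    unlabelled = [x for x in pool if x not in dict(labelled)]
    for _ in range(budget):
        t = fit_threshold(labelled)
        query = min(unlabelled, key=lambda x: abs(x - t))
        unlabelled.remove(query)
        labelled.append((query, labels[query]))  # the "oracle" labels it
    return fit_threshold(labelled)

t = uncertainty_sampling(pool)
acc = sum(int(x > t) == labels[x] for x in pool) / len(pool)
print(round(acc, 2))
```

    With only 12 labels the threshold lands very close to the true boundary, which mirrors the paper's finding that comparable accuracy is reachable with a small fraction of the labelled data.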

  4. Embodied Learning of a Generative Neural Model for Biological Motion Perception and Inference

    Directory of Open Access Journals (Sweden)

    Fabian Schrodt

    2015-07-01

    Full Text Available Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.

  5. Neurotrophin-3 promotes proliferation and cholinergic neuronal differentiation of bone marrow- derived neural stem cells via notch signaling pathway.

    Science.gov (United States)

    Yan, Yu-Hui; Li, Shao-Heng; Gao, Zhong; Zou, Sa-Feng; Li, Hong-Yan; Tao, Zhen-Yu; Song, Jie; Yang, Jing-Xian

    2016-12-01

    Recently, the potential for neural stem cells (NSCs) to be used in the treatment of Alzheimer's disease (AD) has been reported; however, the therapeutic effects are modest by virtue of the low neural differentiation rate. In our study, we transfected bone marrow-derived NSCs (BM-NSCs) with Neurotrophin-3 (NT-3), a superactive neurotrophic factor that promotes the survival, differentiation, and migration of neuronal cells, to investigate the effects of NT-3 gene overexpression on the proliferation of BM-NSCs and their differentiation into cholinergic neurons in vitro, and its possible molecular mechanism. BM-NSCs were generated from BM mesenchymal cells of adult C57BL/6 mice and cultured in vitro. After transfection with the NT-3 gene, immunofluorescence and RT-PCR were used to assess the ability of BM-NSCs to proliferate and differentiate into cholinergic neurons; an Acetylcholine Assay Kit was used to measure acetylcholine (ACh). RT-PCR and WB analysis were used to characterize mRNA and protein levels related to the Notch signaling pathway. We found that NT-3 can promote the proliferation and differentiation of BM-NSCs into cholinergic neurons and elevate the levels of ACh in the supernatant. Furthermore, NT-3 gene overexpression increased the expression of Hes1 and decreased the expression of Mash1 and Ngn1 during proliferation of BM-NSCs, whereas during differentiation of BM-NSCs, Hes1 expression was down-regulated and Mash1 and Ngn1 expression were up-regulated. Our findings support the prospect of using NT-3-transduced BM-NSCs in developing therapies for AD due to their equivalent therapeutic potential as subventricular zone-derived NSCs (SVZ-NSCs), greater accessibility, and autogenous attributes. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. An Attentional Goldilocks Effect: An Optimal Amount of Social Interactivity Promotes Word Learning from Video

    OpenAIRE

    Nussenbaum, Kate; Amso, Dima

    2015-01-01

    Television can be a powerful education tool; however, content-makers must understand the factors that engage attention and promote learning from screen media. Prior research suggests that social engagement is critical for learning and that interactivity may enhance the educational quality of children’s media. The present study examined the effects of increasing the social interactivity of television on children’s visual attention and word learning. Three- to 5-year-old (MAge = 4;5 years, SD =...

  7. Using teacher action research to promote constructivist learning ...

    African Journals Online (AJOL)

    Erna Kinsey

    classroom environment have to change as the curriculum changes. Objectives. 1. To modify ... South African schools in terms of the dimensions assessed by the. CLES. 3. ...... middle school. Learning Environments Research: An International.

  8. Amniotic fluid promotes the appearance of neural retinal progenitors and neurons in human RPE cell cultures.

    Science.gov (United States)

    Davari, Maliheh; Soheili, Zahra-Soheila; Ahmadieh, Hamid; Sanie-Jahromi, Fateme; Ghaderi, Shima; Kanavi, Mozhgan Rezaei; Samiei, Shahram; Akrami, Hassan; Haghighi, Massoud; Javidi-Azad, Fahimeh

    2013-01-01

    Retinal pigment epithelial (RPE) cells are capable of differentiating into retinal neurons when induced by the appropriate growth factors. Amniotic fluid contains a variety of growth factors that are crucial for the development of a fetus. In this study, the effects of human amniotic fluid (HAF) on primary RPE cell cultures were evaluated. RPE cells were isolated from the globes of postnatal human cadavers. The isolated cells were plated and grown in DMEM/F12 with 10% fetal bovine serum. To confirm the RPE identity of the cultured cells, they were immunocytochemically examined for the presence of the RPE cell-specific marker RPE65. RPE cultures obtained from passages 2-7 were treated with HAF and examined morphologically for 1 month. To determine whether retinal neurons or progenitors developed in the treated cultures, specific markers for bipolar (protein kinase C isomer α, PKCα), amacrine (cellular retinoic acid-binding protein I, CRABPI), and neural progenitor (NESTIN) cells were sought, and the amount of mRNA was quantified using real-time PCR. Treating RPE cells with HAF led to a significant decrease in the number of RPE65-positive cells, while PKCα- and CRABPI-positive cells were detected in the cultures. Compared with the fetal bovine serum-treated cultures, the levels of mRNAs quantitatively increased by 2-, 20- and 22-fold for NESTIN, PKCα, and CRABPI, respectively. The RPE cultures treated with HAF established spheres containing both pigmented and nonpigmented cells, which expressed neural progenitor markers such as NESTIN. This study showed that HAF can induce RPE cells to transdifferentiate into retinal neurons and progenitor cells, and that it provides a potential source for cell-based therapies to treat retinal diseases.

  9. Win-stay-lose-learn promotes cooperation in the prisoner's dilemma game with voluntary participation.

    Directory of Open Access Journals (Sweden)

    Chen Chu

    Full Text Available Voluntary participation, demonstrated to be a simple yet effective mechanism to promote persistent cooperative behavior, has been extensively studied. It has also been verified that the aspiration-based win-stay-lose-learn strategy updating rule promotes the evolution of cooperation. Inspired by this well-known fact, we combine the win-stay-lose-learn updating rule with voluntary participation: players maintain their strategies when they are satisfied; otherwise, they attempt to imitate the strategy of one randomly chosen neighbor. We find that this mechanism maintains persistent cooperative behavior and, under certain conditions, further promotes the evolution of cooperation.
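    The aspiration-based rule above is easy to simulate. The sketch below places players on a ring with cooperate/defect/loner strategies; the payoff values (b, sigma), the aspiration level A, and the ring topology are illustrative assumptions, not parameters from the paper:

```python
import random

random.seed(1)

# Win-stay-lose-learn with voluntary participation on a ring of N players.
# C = cooperate, D = defect, L = loner (abstains and earns sigma).
N, b, sigma, A = 100, 1.5, 0.3, 0.5  # b: temptation, A: aspiration level
strategies = [random.choice("CDL") for _ in range(N)]

def payoff(i, strategies):
    # Average pairwise prisoner's-dilemma payoff against the two ring
    # neighbours; any game involving a loner pays sigma to both.
    total = 0.0
    for j in (i - 1, (i + 1) % N):
        si, sj = strategies[i], strategies[j]
        if si == "L" or sj == "L":
            total += sigma
        elif si == "C":
            total += 1.0 if sj == "C" else 0.0
        else:  # si == "D"
            total += b if sj == "C" else 0.0
    return total / 2

for step in range(2000):
    i = random.randrange(N)
    # "lose" -> learn: a dissatisfied player imitates a random neighbour;
    # "win" -> stay: a satisfied player keeps its strategy.
    if payoff(i, strategies) < A:
        j = random.choice([i - 1, (i + 1) % N])
        strategies[i] = strategies[j]

coop = strategies.count("C") / N
print(coop)
```

    Mutual cooperators (payoff 1 ≥ A) are the only consistently satisfied pairs here, so cooperative clusters tend to persist, which is the qualitative effect the abstract reports.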

  10. Critical-Inquiry-Based-Learning: Model of Learning to Promote Critical Thinking Ability of Pre-service Teachers

    Science.gov (United States)

    Prayogi, S.; Yuanita, L.; Wasis

    2018-01-01

    This study aimed to develop the Critical-Inquiry-Based-Learning (CIBL) model to promote the critical thinking (CT) ability of preservice teachers. The CIBL learning model was developed to meet the criteria of validity, practicality, and effectiveness. Validation of the model involved 4 expert validators through a focus group discussion (FGD) mechanism. The CIBL learning model was declared valid for promoting CT ability, with a validity level (Va) of 4.20 and reliability (r) of 90.1% (very reliable). The practicality of the model was evaluated during an implementation involving 17 preservice teachers. The CIBL learning model was declared practical, as measured by learning feasibility (LF) with very good criteria (LF-score = 4.75). The effectiveness of the model was evaluated from the improvement in CT ability after its implementation. CT ability was evaluated using a scoring technique adapted from the Ennis-Weir Critical Thinking Essay Test. The average CT score on the pretest was -1.53 (uncritical criteria), whereas on the posttest it was 8.76 (critical criteria), with an N-gain score of 0.76 (high criteria). Based on these results, it can be concluded that the developed CIBL learning model is feasible for promoting the CT ability of preservice teachers.

  11. Language Learning Enhanced by Massive Multiple Online Role-Playing Games (MMORPGs) and the Underlying Behavioral and Neural Mechanisms

    OpenAIRE

    Zhang, Yongjun; Song, Hongwen; Liu, Xiaoming; Tang, Dinghong; Chen, Yue-e; Zhang, Xiaochu

    2017-01-01

    Massive Multiple Online Role-Playing Games (MMORPGs) have increased in popularity among children, juveniles, and adults since MMORPGs’ appearance in this digital age. MMORPGs can be applied to enhancing language learning, which is drawing researchers’ attention from different fields and many studies have validated MMORPGs’ positive effect on language learning. However, there are few studies on the underlying behavioral or neural mechanism of such effect. This paper reviews the educational app...

  12. Artificial intelligence expert systems with neural network machine learning may assist decision-making for extractions in orthodontic treatment planning.

    Science.gov (United States)

    Takada, Kenji

    2016-09-01

    New approach for the diagnosis of extractions with neural network machine learning. Seok-Ki Jung and Tae-Woo Kim. Am J Orthod Dentofacial Orthop 2016;149:127-33. Not reported. Mathematical modeling. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. A Neural Circuit for Acoustic Navigation combining Heterosynaptic and Non-synaptic Plasticity that learns Stable Trajectories

    DEFF Research Database (Denmark)

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    controllers be resolved in a manner that generates consistent and stable robot trajectories? We propose a neural circuit that minimises this conflict by learning sensorimotor mappings as neuronal transfer functions between the perceived sound direction and wheel velocities of a simulated non-holonomic mobile...

  14. Modeling a Neural Network as a Teaching Tool for the Learning of the Structure-Function Relationship

    Science.gov (United States)

    Salinas, Dino G.; Acevedo, Cristian; Gomez, Christian R.

    2010-01-01

    The authors describe an activity they have created in which students can visualize a theoretical neural network whose states evolve according to a well-known simple law. This activity provided an uncomplicated approach to a paradigm commonly represented through complex mathematical formulation. From their observations, students learned many basic…

  15. Distribution of language-related Cntnap2 protein in neural circuits critical for vocal learning.

    Science.gov (United States)

    Condro, Michael C; White, Stephanie A

    2014-01-01

    Variants of the contactin associated protein-like 2 (Cntnap2) gene are risk factors for language-related disorders including autism spectrum disorder, specific language impairment, and stuttering. Songbirds are useful models for study of human speech disorders due to their shared capacity for vocal learning, which relies on similar cortico-basal ganglia circuitry and genetic factors. Here we investigate Cntnap2 protein expression in the brain of the zebra finch, a songbird species in which males, but not females, learn their courtship songs. We hypothesize that Cntnap2 has overlapping functions in vocal learning species, and expect to find protein expression in song-related areas of the zebra finch brain. We further expect that the distribution of this membrane-bound protein may not completely mirror its mRNA distribution due to the distinct subcellular localization of the two molecular species. We find that Cntnap2 protein is enriched in several song control regions relative to surrounding tissues, particularly within the adult male, but not female, robust nucleus of the arcopallium (RA), a cortical song control region analogous to human layer 5 primary motor cortex. The onset of this sexually dimorphic expression coincides with the onset of sensorimotor learning in developing males. Enrichment in male RA appears due to expression in projection neurons within the nucleus, as well as to additional expression in nerve terminals of cortical projections to RA from the lateral magnocellular nucleus of the nidopallium. Cntnap2 protein expression in zebra finch brain supports the hypothesis that this molecule affects neural connectivity critical for vocal learning across taxonomic classes. Copyright © 2013 Wiley Periodicals, Inc.

  16. Learning Low Dimensional Convolutional Neural Networks for High-Resolution Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Weixun Zhou

    2017-05-01

    Full Text Available Learning powerful feature representations for image retrieval has always been a challenging task in the field of remote sensing. Traditional methods focus on extracting low-level hand-crafted features which are not only time-consuming but also tend to achieve unsatisfactory performance due to the complexity of remote sensing images. In this paper, we investigate how to extract deep feature representations based on convolutional neural networks (CNNs for high-resolution remote sensing image retrieval (HRRSIR. To this end, several effective schemes are proposed to generate powerful feature representations for HRRSIR. In the first scheme, a CNN pre-trained on a different problem is treated as a feature extractor since there are no sufficiently-sized remote sensing datasets to train a CNN from scratch. In the second scheme, we investigate learning features that are specific to our problem by first fine-tuning the pre-trained CNN on a remote sensing dataset and then proposing a novel CNN architecture based on convolutional layers and a three-layer perceptron. The novel CNN has fewer parameters than the pre-trained and fine-tuned CNNs and can learn low dimensional features from limited labelled images. The schemes are evaluated on several challenging, publicly available datasets. The results indicate that the proposed schemes, particularly the novel CNN, achieve state-of-the-art performance.
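    As a minimal illustration of the first scheme above (a network whose convolutional filters are fixed in advance and used only as a feature extractor), the plain-Python sketch below runs two hand-written 2x2 filters over a toy image and pools each response map into one descriptor value. Real HRRSIR features would of course come from a large pre-trained CNN, not hand-picked kernels:

```python
# Fixed-filter "pre-trained" feature extraction: convolve, apply ReLU,
# then global max pooling, one pooled value per filter.
def conv2d(img, kern):
    kh, kw = len(kern), len(kern[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            s = sum(img[r + i][c + j] * kern[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(max(0.0, s))  # ReLU non-linearity
        out.append(row)
    return out

def global_max_pool(fmap):
    return max(max(row) for row in fmap)

# Two hand-picked edge detectors stand in for pre-trained weights.
filters = [[[1, -1], [1, -1]],   # responds to vertical edges
           [[1, 1], [-1, -1]]]   # responds to horizontal edges

image = [[0, 0, 1, 1, 0, 0]] * 3 + [[1, 1, 0, 0, 1, 1]] * 3

# The descriptor fed to a downstream retrieval/classification stage.
features = [global_max_pool(conv2d(image, k)) for k in filters]
print(features)  # one pooled activation per filter
```

    Stacking many such filter responses (and learning the filters from data) is what gives CNN descriptors their power; the fine-tuning scheme in the paper additionally adapts those filters to remote sensing imagery.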

  17. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks.

    Science.gov (United States)

    Huynh, Benjamin Q; Li, Hui; Giger, Maryellen L

    2016-07-01

    Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features [area under the ROC curve [Formula: see text]].

  18. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    Science.gov (United States)

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this is quite difficult for computers. As a hot and difficult problem, object classification has been receiving extensive interest with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed. The convolutional neural network (CNN), as one method of deep learning, can be used to solve classification problems. But most deep learning methods, including CNN, ignore the human visual information-processing mechanism at work when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we bring forth a new classification method which combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features and to extract the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method has apparent advantages in biological plausibility. Experimental results demonstrated that our method significantly improved classification efficiency.

  19. Trust as commodity: social value orientation affects the neural substrates of learning to cooperate.

    Science.gov (United States)

    Lambert, Bruno; Declerck, Carolyn H; Emonds, Griet; Boone, Christophe

    2017-04-01

    Individuals differ in their motives and strategies to cooperate in social dilemmas. These differences are reflected by an individual's social value orientation: proselfs are strategic and motivated to maximize self-interest, while prosocials are more trusting and value fairness. We hypothesize that when deciding whether or not to cooperate with a random member of a defined group, proselfs, more than prosocials, adapt their decisions based on past experiences: they 'learn' instrumentally to form a base-line expectation of reciprocity. We conducted an fMRI experiment where participants (19 proselfs and 19 prosocials) played 120 sequential prisoner's dilemmas against randomly selected, anonymous and returning partners who cooperated 60% of the time. Results indicate that cooperation levels increased over time, but that the rate of learning was steeper for proselfs than for prosocials. At the neural level, caudate and precuneus activation were more pronounced for proselfs relative to prosocials, indicating a stronger reliance on instrumental learning and self-referencing to update their trust in the cooperative strategy. © The Author (2017). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  20. Neural correlates of foreign-language learning in childhood: a 3-year longitudinal ERP study.

    Science.gov (United States)

    Ojima, Shiro; Nakamura, Naoko; Matsuba-Kurita, Hiroko; Hoshino, Takahiro; Hagiwara, Hiroko

    2011-01-01

    A foreign language (a language not spoken in one's community) is difficult to master completely. Early introduction of foreign-language (FL) education during childhood is becoming a standard in many countries. However, the neural process of child FL learning still remains largely unknown. We longitudinally followed 322 school-age children with diverse FL proficiency for three consecutive years, and acquired children's ERP responses to FL words that were semantically congruous or incongruous with the preceding picture context. As FL proficiency increased, various ERP components previously reported in mother-tongue (L1) acquisition (such as a broad negativity, an N400, and a late positive component) appeared sequentially, critically in an identical order to L1 acquisition. This finding was supported not only by cross-sectional analyses of children at different proficiency levels but also by longitudinal analyses of the same children over time. Our data are consistent with the hypothesis that FL learning in childhood reproduces identical developmental stages in an identical order to L1 acquisition, suggesting that the nature of the child's brain itself may determine the normal course of FL learning. Future research should test the generalizability of the results in other aspects of language such as syntax.

  1. DeepX: Deep Learning Accelerator for Restricted Boltzmann Machine Artificial Neural Networks.

    Science.gov (United States)

    Kim, Lok-Won

    2018-05-01

    Although there have been many decades of research on, and commercial presence of, high-performance general-purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully applied in a wide variety of applications, but its heavy computational demand has considerably limited its practical use. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of an artificial neural network (ANN), namely the restricted Boltzmann machine (RBM) ANN. The implemented RBM ANN accelerator (integrating network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the-art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301 billion connection-updates-per-second, about 193 times higher than a software solution running on general-purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with previous work when both are implemented in an FPGA device (XC2VP70).
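    The connection-update the accelerator pipelines is the contrastive-divergence (CD-1) weight rule of an RBM. A minimal software sketch in plain Python is below; the layer sizes, learning rate, and training patterns are toy values, and biases are omitted for brevity:

```python
import math
import random

random.seed(0)

# Tiny RBM: nv visible units, nh hidden units, trained with CD-1.
nv, nh, lr = 6, 3, 0.1
W = [[random.gauss(0, 0.1) for _ in range(nh)] for _ in range(nv)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_h(v):
    probs = [sigmoid(sum(v[i] * W[i][j] for i in range(nv)))
             for j in range(nh)]
    return probs, [int(random.random() < p) for p in probs]

def sample_v(h):
    probs = [sigmoid(sum(h[j] * W[i][j] for j in range(nh)))
             for i in range(nv)]
    return [int(random.random() < p) for p in probs]

data = [[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]]  # two binary patterns
for epoch in range(200):
    for v0 in data:
        ph0, h0 = sample_h(v0)   # positive phase
        v1 = sample_v(h0)        # one Gibbs step (CD-1)
        ph1, _ = sample_h(v1)    # negative phase
        for i in range(nv):
            for j in range(nh):
                # connection-update: data correlation minus model correlation
                W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])

# After training, reconstructing a training pattern exercises the model.
_, h = sample_h(data[0])
recon = sample_v(h)
print(recon)
```

    The inner double loop is exactly the "connection-update" the paper counts per second: every one of the nv*nh weights is touched once per sample per CD step, which is why the operation parallelizes so well in hardware.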

  2. Disrupting neural activity related to awake-state sharp wave-ripple complexes prevents hippocampal learning.

    Science.gov (United States)

    Nokia, Miriam S; Mikkonen, Jarno E; Penttonen, Markku; Wikgren, Jan

    2012-01-01

    Oscillations in hippocampal local-field potentials (LFPs) reflect the crucial involvement of the hippocampus in memory trace formation: theta (4-8 Hz) oscillations and ripples (~200 Hz) occurring during sharp waves are thought to mediate encoding and consolidation, respectively. During sharp wave-ripple complexes (SPW-Rs), hippocampal cell firing closely follows the pattern that took place during the initial experience, most likely reflecting replay of that event. Disrupting hippocampal ripples using electrical stimulation either during training in awake animals or during sleep after training retards spatial learning. Here, adult rabbits were trained in trace eyeblink conditioning, a hippocampus-dependent associative learning task. A bright light was presented to the animals during the inter-trial interval (ITI), when awake, either during SPW-Rs or irrespective of their neural state. Learning was particularly poor when the light was presented following SPW-Rs. While the light did not disrupt the ripple itself, it elicited a theta-band oscillation, a state that does not usually coincide with SPW-Rs. Thus, it seems that consolidation depends on neuronal activity within and beyond the hippocampus taking place immediately after, but by no means limited to, hippocampal SPW-Rs.

  3. Feature Selection Methods for Zero-Shot Learning of Neural Activity

    Directory of Open Access Journals (Sweden)

    Carlos A. Caceres

    2017-06-01

    Full Text Available Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
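    The correlation-based selection family discussed above can be illustrated in a few lines: rank features by the absolute Pearson correlation between each feature column and the attribute/label, then keep the top k. This is a simplified single-split version (a stability approach would additionally repeat the ranking across folds); the synthetic data, where only features 0 and 1 carry signal, is an assumption for the demo:

```python
import math
import random

random.seed(2)

# Synthetic data: n samples, n_feat features; features 0 and 1 shift with
# the binary label y, the remaining features are pure noise.
n, n_feat, k = 200, 10, 2
y = [random.randint(0, 1) for _ in range(n)]
X = [[(y[s] if f < 2 else 0) + random.gauss(0, 1) for f in range(n_feat)]
     for s in range(n)]

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((z - mb) ** 2 for z in b))
    return cov / (sa * sb)

# Score each feature column against the label, keep the k best.
scores = [abs(pearson([X[s][f] for s in range(n)], y)) for f in range(n_feat)]
selected = sorted(range(n_feat), key=lambda f: scores[f], reverse=True)[:k]
print(sorted(selected))
```

    With 200 samples the two informative columns dominate the ranking; the paper's point is that smarter feature/attribute correlation criteria can reach the same accuracy with even fewer features.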

  4. Some Issues of the Paradigm of Multi-learning Machine - Modular Neural Networks

    DEFF Research Database (Denmark)

    Wang, Pan; Feng, Shuai; Fan, Zhun

    2009-01-01

    This paper addresses some issues on the weighted linear integration of modular neural networks (MNN: a paradigm of hybrid multi-learning machines). First, from the general meaning of variable weights and variable elements synthesis, three basic kinds of integrated models are discussed ... a general form while the corresponding computational algorithms are described briefly. The authors present a new training algorithm for sub-networks named "Expert in one thing and good at many" (EOGM). In this algorithm, every sub-network is trained on a primary dataset with some of its near neighbors as accessorial datasets. Simulation results with a kind of dynamic integration method show the effectiveness of these algorithms, where the performance of the algorithm with EOGM is better than that of the algorithm with a common training method.

  5. Early detection of incipient faults in power plants using accelerated neural network learning

    International Nuclear Information System (INIS)

    Parlos, A.G.; Jayakumar, M.; Atiya, A.

    1992-01-01

    An important aspect of power plant automation is the development of computer systems able to detect and isolate incipient (slowly developing) faults at the earliest possible stages of their occurrence. In this paper, the development and testing of such a fault detection scheme is presented based on recognition of sensor signatures during various failure modes. An accelerated learning algorithm, namely adaptive backpropagation (ABP), has been developed that allows the training of a multilayer perceptron (MLP) network to a high degree of accuracy, with an order of magnitude improvement in convergence speed. An artificial neural network (ANN) has been successfully trained using the ABP algorithm, and it has been extensively tested with simulated data to detect and classify incipient faults of various types and severities, in the presence of varying sensor noise levels.
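    The abstract does not spell out ABP's exact update rule, so the sketch below uses the classic "bold driver" adaptive-learning-rate heuristic as a stand-in on a 1-D quadratic loss: grow the step size while the error keeps falling, cut it back after an overshoot. The loss, factors, and step counts are illustrative assumptions:

```python
# Adaptive-learning-rate gradient descent ("bold driver" heuristic) on a
# toy quadratic loss with its minimum at w = 2. The actual ABP algorithm
# in the paper may differ; this only shows the accelerate/back-off idea.
def train(lr=0.5, up=1.1, down=0.5, steps=50):
    w, prev_err = 5.0, float("inf")
    for _ in range(steps):
        err = (w - 2.0) ** 2
        grad = 2.0 * (w - 2.0)
        if err < prev_err:
            lr *= up      # error fell: accelerate
        else:
            lr *= down    # error rose: back off after an overshoot
        w -= lr * grad
        prev_err = err
    return w

w = train()
print(round(w, 3))
```

    Compared with a fixed learning rate, the adaptive schedule spends most steps near the largest stable step size, which is the source of the convergence-speed improvement such schemes report.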

  6. Beam-column joint shear prediction using hybridized deep learning neural network with genetic algorithm

    Science.gov (United States)

    Mundher Yaseen, Zaher; Abdulmohsin Afan, Haitham; Tran, Minh-Tung

    2018-04-01

    It has been scientifically evidenced that beam-column joints are a critical point in reinforced concrete (RC) structures under fluctuating load effects. In this study, a novel hybrid data-intelligence model is developed to predict the joint shear behavior of exterior beam-column structural frames. The hybrid model, called the genetic algorithm integrated with a deep learning neural network (GA-DLNN), uses the genetic algorithm as a prior modelling phase for input approximation, while the DLNN predictive model performs the prediction phase. To demonstrate this structural problem, experimental data defining the dimensions and specimen properties were collected from the literature. The attained findings evidenced the effectiveness of the hybrid GA-DLNN in modelling the beam-column joint shear problem. In addition, accurate prediction was achieved with fewer input variables owing to the feasibility of the evolutionary phase.
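A minimal sketch of the evolutionary input-selection phase, assuming a genetic algorithm that evolves bitmasks over candidate input variables and scores each mask by the fit error of a simple surrogate model (ordinary least squares here, standing in for the DLNN purely for illustration; all data and parameters are fabricated):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Negative residual error of a least-squares fit on the selected inputs."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return -float(np.sum((Xs @ coef - y) ** 2))

def ga_select(X, y, pop=20, gens=30, p_mut=0.1):
    """Toy genetic algorithm: each chromosome is a bitmask over input columns."""
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]  # truncation selection
        kids = parents.copy()
        for i, c in enumerate(rng.integers(1, n, size=pop // 2)):   # one-point crossover
            kids[i, c:] = parents[(i + 1) % len(parents), c:]
        kids ^= rng.random(kids.shape) < p_mut                      # bit-flip mutation
        population = np.vstack([parents, kids])
    scores = np.array([fitness(m, X, y) for m in population])
    return population[np.argmax(scores)]

# Synthetic example: y depends only on columns 0 and 2, so the GA should keep them.
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] - 3 * X[:, 2]
best = ga_select(X, y)
```

In the GA-DLNN setting, the fitness evaluation would retrain or re-validate the deep network on each candidate input subset instead of a linear fit.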

  7. A Fusion Face Recognition Approach Based on 7-Layer Deep Learning Neural Network

    Directory of Open Access Journals (Sweden)

    Jianzheng Liu

    2016-01-01

    Full Text Available This paper presents a method for recognizing human faces with facial expression. In the proposed approach, a motion history image (MHI) is employed to capture the features of an expressive face. The face can be seen as a physiological characteristic of a human, while the expressions are behavioral characteristics. We fused the 2D images of a face with MHIs generated from the same face’s image sequences with expression. The fusion features were then used to feed a 7-layer deep learning neural network. The first 6 layers of the network form an autoencoder that reduces the dimension of the fusion features; the last layer is a softmax regression, which produces the identification decision. Experimental results demonstrated that our proposed method performs favorably against several state-of-the-art methods.
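The softmax decision layer mentioned above can be sketched in isolation. The toy "fused features" and identity labels below are fabricated for illustration and are unrelated to the paper's face data.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(X, y, n_classes, lr=0.5, epochs=200):
    """Softmax regression trained by batch gradient descent on cross-entropy loss."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]               # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)   # gradient of mean cross-entropy
    return W

# Two toy "identities" separated along the first fused feature dimension.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(int)
W = train_softmax(X, y, n_classes=2)
pred = softmax(X @ W).argmax(axis=1)
```

In the paper's pipeline, `X` would be the low-dimensional codes produced by the 6-layer autoencoder rather than raw features.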

  8. Overexpression of miR‑21 promotes neural stem cell proliferation and neural differentiation via the Wnt/β‑catenin signaling pathway in vitro.

    Science.gov (United States)

    Zhang, Wei-Min; Zhang, Zhi-Ren; Yang, Xi-Tao; Zhang, Yong-Gang; Gao, Yan-Sheng

    2018-01-01

    The primary aim of the present study was to examine the effects of microRNA‑21 (miR‑21) on the proliferation and differentiation of rat primary neural stem cells (NSCs) in vitro. miR‑21 was overexpressed in NSCs by transfection with a miR‑21 mimic, and the overexpression was confirmed using reverse transcription‑quantitative polymerase chain reaction (RT‑qPCR). The effects of miR‑21 overexpression on NSC proliferation were assessed by Cell Counting Kit‑8 and 5‑ethynyl‑2'‑deoxyuridine incorporation assays, which revealed that miR‑21 overexpression increased NSC proliferation. mRNA and protein expression levels of key molecules in the Wnt/β‑catenin signaling pathway (β‑catenin, cyclin D1 and p21) were studied by RT‑qPCR and western blot analysis, which revealed that miR‑21 overexpression increased β‑catenin and cyclin D1 expression and decreased p21 expression. These results suggested that the miR‑21‑induced increase in proliferation was mediated by activation of the Wnt/β‑catenin signaling pathway. Furthermore, inhibition of the Wnt/β‑catenin pathway with FH535 attenuated the influence of miR‑21 overexpression on NSC proliferation, indicating that the factors activated by miR‑21 overexpression were inhibited by FH535 treatment. In addition, overexpression of miR‑21 enhanced the differentiation of NSCs into neurons and inhibited their differentiation into astrocytes. The present study indicated that in primary rat NSCs, overexpression of miR‑21 may promote proliferation and differentiation into neurons via the Wnt/β‑catenin signaling pathway in vitro.

  9. BET bromodomain inhibition promotes neurogenesis while inhibiting gliogenesis in neural progenitor cells

    Directory of Open Access Journals (Sweden)

    Jingjun Li

    2016-09-01

    Full Text Available Neural stem cells and progenitor cells (NPCs) are increasingly appreciated to hold great promise for regenerative medicine to treat CNS injuries and neurodegenerative diseases. However, evidence for effective small-molecule stimulation of neuronal production from endogenous or transplanted NPCs for neuron replacement remains limited. To identify novel chemical entities/targets for neurogenesis, we established an NPC phenotypic screening assay and validated it using known small-molecule neurogenesis inducers. Through screening small-molecule libraries with annotated targets, we identified BET bromodomain inhibition as a novel mechanism for enhancing neurogenesis. The BET bromodomain proteins Brd2, Brd3, and Brd4 were found to be downregulated in NPCs upon differentiation, while their levels remained unaltered in proliferating NPCs. Consistent with the pharmacological study using the bromodomain-selective inhibitor (+)-JQ-1, knockdown of each BET protein resulted in an increase in the number of neurons with a simultaneous reduction in both astrocytes and oligodendrocytes. Gene expression profiling analysis demonstrated that BET bromodomain inhibition induced a broad but specific transcription program enhancing directed differentiation of NPCs into neurons while suppressing cell cycle progression and gliogenesis. Together, these results highlight a crucial role of BET proteins as epigenetic regulators in NPC development and suggest a therapeutic potential of BET inhibitors in treating brain injuries and neurodegenerative diseases.

  10. Human Neural Precursor Cells Promote Neurologic Recovery in a Viral Model of Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Lu Chen

    2014-06-01

    Full Text Available Using a viral model of the demyelinating disease multiple sclerosis (MS), we show that intraspinal transplantation of human embryonic stem cell-derived neural precursor cells (hNPCs) results in sustained clinical recovery, although hNPCs were not detectable beyond day 8 posttransplantation. Improved motor skills were associated with a reduction in neuroinflammation, decreased demyelination, and enhanced remyelination. Evidence indicates that the reduced neuroinflammation is correlated with an increased number of CD4+CD25+FOXP3+ regulatory T cells (Tregs) within the spinal cords. Coculture of hNPCs with activated T cells resulted in reduced T cell proliferation and increased Treg numbers. The hNPCs acted, in part, through secretion of TGF-β1 and TGF-β2. These findings indicate that the transient presence of hNPCs transplanted in an animal model of MS has powerful immunomodulatory effects and mediates recovery. Further investigation of the restorative effects of hNPC transplantation may aid in the development of clinically relevant MS treatments.

  11. Antenatal dexamethasone before asphyxia promotes cystic neural injury in preterm fetal sheep by inducing hyperglycemia.

    Science.gov (United States)

    Lear, Christopher A; Davidson, Joanne O; Mackay, Georgia R; Drury, Paul P; Galinsky, Robert; Quaedackers, Josine S; Gunn, Alistair J; Bennet, Laura

    2018-04-01

    Antenatal glucocorticoid therapy significantly improves the short-term systemic outcomes of prematurely born infants, but there is limited information available on their impact on neurodevelopmental outcomes in at-risk preterm babies exposed to perinatal asphyxia. Preterm fetal sheep (0.7 of gestation) were exposed to a maternal injection of 12 mg dexamethasone or saline followed 4 h later by asphyxia induced by 25 min of complete umbilical cord occlusion. In a subsequent study, fetuses received titrated glucose infusions followed 4 h later by asphyxia to examine the hypothesis that hyperglycemia mediated the effects of dexamethasone. Post-mortems were performed 7 days after asphyxia for cerebral histology. Maternal dexamethasone before asphyxia was associated with severe, cystic brain injury compared to diffuse injury after saline injection, with increased numbers of seizures, worse recovery of brain activity, and increased arterial glucose levels before, during, and after asphyxia. Glucose infusions before asphyxia replicated these adverse outcomes, with a strong correlation between greater increases in glucose before asphyxia and greater neural injury. These findings strongly suggest that dexamethasone exposure and hyperglycemia can transform diffuse injury into cystic brain injury after asphyxia in preterm fetal sheep.

  12. The Role of Open and Distance Learning in Promoting Professional ...

    African Journals Online (AJOL)

    This paper unveils the unique role played by ODL in promoting professional training and development in Tanzania. ODL is significantly increasing in importance in most societies, if not all; this is evidenced by the increasing enrolment in ODL institutions. In order to cope with the demanding world, individuals need to ...

  13. Exploring Learning Outcomes of School-based Health Promotion

    DEFF Research Database (Denmark)

    Carlsson, Monica Susanne; Simovska, Venka

    2012-01-01

    This paper discusses the findings from a multiple case study of a European health promotion project - ‘Shape Up – a school-community approach to influencing determinants of a healthy and balanced growing up’. The project sought to develop children’s capacity to critically explore and act to improve...

  14. Validating the Use of Deep Learning Neural Networks for Correction of Large Hydrometric Datasets

    Science.gov (United States)

    Frazier, N.; Ogden, F. L.; Regina, J. A.; Cheng, Y.

    2017-12-01

    Collection and validation of Earth systems data can be time consuming and labor intensive. In particular, high-resolution hydrometric data, including rainfall and streamflow measurements, are difficult to obtain due to a multitude of complicating factors. Measurement equipment is subject to clogs, environmental disturbances, and sensor drift. Manual intervention is typically required to identify, correct, and validate these data. Weirs can become clogged, and the pressure transducer may float or drift over time. We typically employ a graphical tool called Time Series Editor to manually remove clogs and sensor drift from the data. However, this process is highly subjective and requires hydrological expertise: two different people may produce two different data sets. To use these data for scientific discovery and model validation, a more consistent method is needed to process this field data. Deep learning neural networks have proved to be excellent mechanisms for recognizing patterns in data. We explore the use of Recurrent Neural Networks (RNNs) to capture the patterns in the data over time using various gating mechanisms (LSTM and GRU), network architectures, and hyper-parameters to build an automated data correction model. We also explore the amount of manually corrected training data required to train the network to reasonable accuracy. The benefits of this approach are that the time to process a data set is significantly reduced, and the results are 100% reproducible after training is complete. Additionally, we train the RNN and calibrate a physically-based hydrological model against the same portion of data. Both the RNN and the model are applied to the remaining data using a split-sample methodology. Performance of the machine learning approach is evaluated for plausibility by comparing with the output of the hydrological model, and this analysis identifies potential periods where additional investigation is warranted.
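One of the gating mechanisms named above (the GRU) can be written out as a single recurrence step. This is a generic sketch of the standard GRU equations; the weights below are random placeholders, not a trained correction model, and the "sensor sequence" is fabricated.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One step of a minimal GRU cell: update gate z, reset gate r,
    candidate state h_tilde, then a gated blend of old and new state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate hidden state
    return (1 - z) * h + z * h_tilde

# Run the cell over a short fabricated two-channel sensor sequence.
rng = np.random.default_rng(0)
d_in, d_h = 2, 4
shapes = [(d_h, d_in), (d_h, d_h), (d_h, d_in), (d_h, d_h), (d_h, d_in), (d_h, d_h)]
params = [rng.normal(scale=0.1, size=s) for s in shapes]
h = np.zeros(d_h)
for x in rng.normal(size=(10, d_in)):
    h = gru_step(x, h, params)
```

In the data-correction setting, the hidden state would feed an output layer predicting the corrected stage or rainfall value at each time step.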

  15. Hydroponic Garden Promotes Hands-on Learning, Healthy Eating

    Science.gov (United States)

    Anderson, Melinda; Swafford, Melinda

    2011-01-01

    The Carl D. Perkins Career Technical Improvement Act of 2006 encourages integration of academic instruction to improve student learning, impact employment skills of students, and enhance problem-solving skills by using authentic real-world situations. Academic integration is accomplished by integrating concepts of English, math, science,…

  16. Promoting Inclusion, Social Connections, and Learning through Peer Support Arrangements

    Science.gov (United States)

    Carter, Erik W.; Moss, Colleen K.; Asmus, Jennifer; Fesperman, Ethan; Cooney, Molly; Brock, Matthew E.; Lyons, Gregory; Huber, Heartley B.; Vincent, Lori B.

    2015-01-01

    Ensuring students with severe disabilities access the rich relationship and learning opportunities available within general education classrooms is an important--but challenging--endeavor. Although one-to-one paraprofessionals often accompany students in inclusive classrooms and provide extensive assistance, the constant presence of an adult can…

  17. How Can Innovative Learning Environments Promote the Diffusion of Innovation?

    Science.gov (United States)

    Osborne, Mark

    2016-01-01

    Schools implementing innovative learning environments (ILEs) face many challenges, including the need to discard previously cherished practices and behaviours, adjust mindsets, and invent successful new ways of operating. Leaders can support these processes by implementing structures that: i) support ongoing, distributed, participatory innovation;…

  18. Engaging Students in Learning Science through Promoting Creative Reasoning

    Science.gov (United States)

    Waldrip, Bruce; Prain, Vaughan

    2017-01-01

    Student engagement in learning science is both a desirable goal and a long-standing teacher challenge. Moving beyond engagement understood as transient topic interest, we argue that cognitive engagement entails sustained interaction in the processes of how knowledge claims are generated, judged, and shared in this subject. In this paper, we…

  19. How Service and Learning Came Together to Promote "Cura Personalis"

    Science.gov (United States)

    Wright, Ann; Calabrese, Nicki; Henry, Julie

    2009-01-01

    "Cura personalis" (care of the individual) represents one of the core ideals of all Jesuit colleges and universities. At one urban Jesuit college, faculty members of The School of Education and Human Services and The College of Arts and Sciences initiated a service-learning project in a freshman level pedagogical core course. One goal of the…

  20. Promoting Vicarious Learning of Physics Using Deep Questions with Explanations

    Science.gov (United States)

    Craig, Scotty D.; Gholson, Barry; Brittingham, Joshua K.; Williams, Joah L.; Shubeck, Keith T.

    2012-01-01

    Two experiments explored the role of vicarious "self" explanations in facilitating student learning gains during computer-presented instruction. In Exp. 1, college students with low or high knowledge on Newton's laws were tested in four conditions: (a) monologue (M), (b) questions (Q), (c) explanation (E), and (d) question + explanation (Q + E).…

  1. Using teacher action research to promote constructivist learning ...

    African Journals Online (AJOL)

    The primary focus was to assist South African teachers to become reflective practitioners in their daily mathematics classroom teaching. The study involved a combination of quantitative and qualitative research methods. Quantitative data were collected using the Constructivist Learning Environment Survey (CLES) to ...

  2. Promoting Stakeholder Participation in a Learning-Based Monitoring ...

    African Journals Online (AJOL)

    It is result-oriented and aims to enhance control and efficiency (Morgan, 2005). However ... Outcome Mapping as a Learning-Oriented Project Cycle Management Framework .... Therefore, a qualitative case-study design was selected as .... college and colleagues admitting embarrassment [for failing to do what was agreed.

  3. Involving postgraduate's students in undergraduate small group teaching promotes active learning in both

    Science.gov (United States)

    Kalra, Ruchi; Modi, Jyoti Nath; Vyas, Rashmi

    2015-01-01

    Background: Lecture is a common traditional method for teaching, but it may not stimulate higher order thinking, and students may also be hesitant to express themselves and interact. Postgraduate (PG) students are generally less involved with undergraduate (UG) teaching. Team-based small group active learning methods can contribute to a better learning experience. Aim: To promote active learning skills among UG students using small group teaching methods, involving PG students as facilitators to impart hands-on supervised training in teaching and managerial skills. Methodology: After institutional approval, 92 UGs and 8 PGs participated, under faculty supervision, in 6 small group sessions utilizing the jigsaw technique. Feedback was collected from both. Observations: Undergraduate feedback (percentage of students who agreed): learning in small groups was a good experience as it helped in better understanding of the subject (72%), students explored multiple reading resources (79%), they were actively involved in self-learning (88%), students reported initial apprehension about performance (71%), identified their learning gaps (86%), teamwork enhanced their learning process (71%), informal learning in place of lecture was a welcome change (86%), it improved their communication skills (82%), and small group learning can be useful for future self-learning (75%). Postgraduate feedback: the majority performed facilitation for the first time, perceived their performance as good (75%), found it helpful for self-learning (100%), felt confident of managing students in small groups (100%), improved their teaching skills as facilitators, and found the exercise more useful and better identified their own learning gaps (87.5%). Conclusions: Learning in small groups adopting a team-based approach involving both UGs and PGs promoted active learning in both and enhanced the teaching skills of the PGs. PMID:26380201

  4. Promoting Cooperative Learning in the Classroom: Comparing Explicit and Implicit Training Techniques

    Directory of Open Access Journals (Sweden)

    Anne Elliott

    2003-07-01

    Full Text Available In this study, we investigated whether providing 4th and 5th-grade students with explicit instruction in prerequisite cooperative-learning skills and techniques would enhance their academic performance and promote in them positive attitudes towards cooperative learning. Overall, students who received explicit training outperformed their peers on both the unit project and test and presented more favourable attitudes towards cooperative learning. The findings of this study support the use of explicitly instructing students about the components of cooperative learning prior to engaging in collaborative activities. Implications for teacher-education are discussed.

  5. Cyclosporin A-Mediated Activation of Endogenous Neural Precursor Cells Promotes Cognitive Recovery in a Mouse Model of Stroke

    Directory of Open Access Journals (Sweden)

    Labeeba Nusrat

    2018-04-01

    Full Text Available Cognitive dysfunction following stroke significantly impacts quality of life and functional independence; yet, despite the prevalence and negative impact of cognitive deficits, post-stroke interventions almost exclusively target motor impairments. As a result, current treatment options are limited in their ability to promote post-stroke cognitive recovery. Cyclosporin A (CsA) has been previously shown to improve post-stroke functional recovery of sensorimotor deficits. Interestingly, CsA is a commonly used immunosuppressant and also acts directly on endogenous neural precursor cells (NPCs) in the neurogenic regions of the brain (the periventricular region and the dentate gyrus). The immunosuppressive and NPC activation effects are mediated by calcineurin-dependent and calcineurin-independent pathways, respectively. To develop a cognitive stroke model, focal bilateral lesions were induced in the medial prefrontal cortex (mPFC) of adult mice using endothelin-1. First, we characterized this stroke model in the acute and chronic phases, using problem-solving and memory-based cognitive tests. mPFC stroke resulted in early and persistent deficits in short-term memory, problem-solving and behavioral flexibility, without affecting anxiety. Second, we investigated the effects of acute and chronic CsA treatment on NPC activation, neuroprotection, and tissue damage. Acute CsA administration post-stroke increased the size of the NPC pool; there was no effect on neurodegeneration or lesion volume. Lastly, we examined the effects of chronic CsA treatment on cognitive recovery. Long-term CsA administration promoted NPC migration toward the lesion site and rescued cognitive deficits to control levels. This study demonstrates that CsA treatment activates the NPC population, promotes migration of NPCs to the site of injury, and leads to improved cognitive recovery following long-term treatment.

  6. Three-terminal ferroelectric synapse device with concurrent learning function for artificial neural networks

    International Nuclear Information System (INIS)

    Nishitani, Y.; Kaneko, Y.; Ueda, M.; Fujii, E.; Morie, T.

    2012-01-01

    Spike-timing-dependent synaptic plasticity (STDP) is demonstrated in a synapse device based on a ferroelectric-gate field-effect transistor (FeFET). STDP is a key of the learning functions observed in human brains, where the synaptic weight changes only depending on the spike timing of the pre- and post-neurons. The FeFET is composed of the stacked oxide materials ZnO/Pr(Zr,Ti)O3 (PZT)/SrRuO3. In the FeFET, the channel conductance can be altered depending on the density of electrons induced by the polarization of the PZT film, which can be controlled by applying the gate voltage in a non-volatile manner. Applying a pulse gate voltage enables multi-valued modulation of the conductance, which is expected to be caused by a change in PZT polarization. This variation depends on the height and the duration of the pulse gate voltage. Utilizing these characteristics, symmetric and asymmetric STDP learning functions are successfully implemented in the FeFET-based synapse device by applying the non-linear pulse gate voltage generated from a set of two pulses in a sampling circuit, in which the two pulses correspond to the spikes from the pre- and post-neurons. The three-terminal structure of the synapse device enables concurrent learning, in which the weight update can be performed without canceling signal transmission among neurons, whereas neural networks using the previously reported two-terminal synapse devices need to stop signal transmission for learning.
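The asymmetric STDP window that such devices emulate is commonly modeled with exponential decays on either side of coincidence. The constants below are generic illustrative values from the modeling literature, not parameters measured from the FeFET.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change: potentiate when the pre-spike precedes
    the post-spike (dt = t_post - t_pre > 0), depress otherwise; the magnitude
    decays exponentially with |dt| on a timescale tau (ms)."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),    # causal pair: potentiation
                    -a_minus * np.exp(dt / tau))   # anti-causal pair: depression

# A causal pair (+5 ms) potentiates; an anti-causal pair (-5 ms) depresses.
dw = stdp_dw([5.0, -5.0])
```

A symmetric STDP function, also demonstrated in the paper, would instead make the sign depend only on |dt| rather than on spike order.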

  7. Multi-Objective Reinforcement Learning-Based Deep Neural Networks for Cognitive Space Communications

    Science.gov (United States)

    Ferreria, Paulo Victor R.; Paffenroth, Randy; Wyglinski, Alexander M.; Hackett, Timothy M.; Bilen, Sven G.; Reinhart, Richard C.; Mortensen, Dale J.

    2017-01-01

    Future communication subsystems of space exploration missions can potentially benefit from software-defined radios (SDRs) controlled by machine learning algorithms. In this paper, we propose a novel hybrid radio resource allocation management control algorithm that integrates multi-objective reinforcement learning and deep artificial neural networks. The objective is to efficiently manage communications system resources by monitoring performance functions with common dependent variables that result in conflicting goals. The uncertainty in the performance of thousands of different possible combinations of radio parameters makes the trade-off between exploration and exploitation in reinforcement learning (RL) much more challenging for future critical space-based missions. Thus, the system should spend as little time as possible on exploring actions, and whenever it explores an action, it should perform at acceptable levels most of the time. The proposed approach enables on-line learning by interactions with the environment and restricts poor resource allocation performance through virtual environment exploration. Improvements in the multiobjective performance can be achieved via transmitter parameter adaptation on a packet-basis, with poorly predicted performance promptly resulting in rejected decisions. Simulations presented in this work considered the DVB-S2 standard adaptive transmitter parameters and additional ones expected to be present in future adaptive radio systems. Performance results are provided by analysis of the proposed hybrid algorithm when operating across a satellite communication channel from Earth to GEO orbit during clear sky conditions. The proposed approach constitutes part of the core cognitive engine proof-of-concept to be delivered to the NASA Glenn Research Center SCaN Testbed located onboard the International Space Station.
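The core idea of trading off conflicting communication objectives within reinforcement learning can be sketched by scalarizing an objective vector into a single reward and running a simple value update over discrete parameter choices. This is a bandit-style toy under fabricated objective values, not the authors' hybrid RL/deep-network algorithm.

```python
import numpy as np

def scalarize(objectives, weights):
    """Collapse a vector of objective values (e.g. throughput, power
    efficiency) into a single reward via a weighted sum."""
    return float(np.dot(objectives, weights))

def q_learning(reward_fn, n_actions, weights, episodes=1000, eps=0.2,
               alpha=0.1, rng=None):
    """Single-state epsilon-greedy Q-learning over transmitter parameter choices."""
    rng = rng or np.random.default_rng(0)
    Q = np.zeros(n_actions)
    for _ in range(episodes):
        if rng.random() < eps:
            a = int(rng.integers(n_actions))   # explore a random action
        else:
            a = int(np.argmax(Q))              # exploit the current best action
        r = scalarize(reward_fn(a), weights)
        Q[a] += alpha * (r - Q[a])             # stateless (bandit-style) update
    return Q

# Hypothetical parameter sets trading throughput against power efficiency.
objectives = np.array([[1.0, 0.2],   # high throughput, power-hungry
                       [0.6, 0.8],   # balanced
                       [0.2, 1.0]])  # low throughput, efficient
Q = q_learning(lambda a: objectives[a], n_actions=3, weights=[0.5, 0.5])
best = int(np.argmax(Q))
```

With equal weights, the balanced action scores highest (0.7 vs 0.6), so the learner settles on it; the paper's approach additionally uses deep networks to predict performance and reject poor decisions before transmission.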

  8. Promoting renewable energy: Lessons learned from 20 years of experimentation

    DEFF Research Database (Denmark)

    Haas, Reinhard; Meyer, Niels I; Held, Anne

    2008-01-01

    Currently, the promotion of electricity generated from Renewable Energy Sources (RES-E) has gained high priority in the energy policy strategies of many countries world-wide. Since RES-E contribute to climate protection and security of electricity supply, their market deployment has been supported...... since the first oil price shock in 1973. A wide range of strategies has been implemented during this time in different countries to increase the share of RES-E. However, the historic development of electricity generated from RES-E was characterised by large differences between the countries due to national...... of European renewable legislation. The most important conclusions of this analysis are: (i) regardless of which strategy is chosen, it is of superior relevance that there is a clear focus on the promotion of newly installed plants; (ii) currently, a well-designed (dynamic) feed-in tariff system provides...

  9. Engaging students in learning science through promoting creative reasoning

    Science.gov (United States)

    Waldrip, Bruce; Prain, Vaughan

    2017-10-01

    Student engagement in learning science is both a desirable goal and a long-standing teacher challenge. Moving beyond engagement understood as transient topic interest, we argue that cognitive engagement entails sustained interaction in the processes of how knowledge claims are generated, judged, and shared in this subject. In this paper, we particularly focus on the initial claim-building aspect of this reasoning as a crucial phase in student engagement. In reviewing the literature on student reasoning and argumentation, we note that the well-established frameworks for claim-judging are not matched by accounts of creative reasoning in claim-building. We develop an exploratory framework to characterise and enact this reasoning to enhance engagement. We then apply this framework to interpret two lessons by two science teachers where they aimed to develop students' reasoning capabilities to support learning.

  10. Nonassociative learning promotes respiratory entrainment to mechanical ventilation.

    Directory of Open Access Journals (Sweden)

    Shawna M MacDonald

    Full Text Available BACKGROUND: Patient-ventilator synchrony is a major concern in critical care and is influenced by phasic lung-volume feedback control of the respiratory rhythm. Routine clinical application of positive end-expiratory pressure (PEEP) introduces a tonic input which, if unopposed, might disrupt respiratory-ventilator entrainment through sustained activation of the vagally-mediated Hering-Breuer reflex. We suggest that this potential adverse effect may be averted by two differentiator forms of nonassociative learning (habituation and desensitization) of the Hering-Breuer reflex via pontomedullary pathways. METHODOLOGY/PRINCIPAL FINDINGS: We tested these hypotheses in 17 urethane-anesthetized adult Sprague-Dawley rats under controlled mechanical ventilation. Without PEEP, phrenic discharge was entrained 1:1 to the ventilator rhythm. Application of PEEP momentarily dampened the entrainment to higher ratios, but this effect was gradually adapted by nonassociative learning. Bilateral electrolytic lesions of the pneumotaxic center weakened the adaptation to PEEP, whereas sustained stimulation of the pneumotaxic center weakened the entrainment independent of PEEP. In all cases, entrainment was abolished after vagotomy. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate an important functional role for pneumotaxic desensitization and extra-pontine habituation of the Hering-Breuer reflex elicited by lung inflation: acting as buffers or high-pass filters against tonic vagal volume input, these differentiator forms of nonassociative learning help to restore respiratory-ventilator entrainment in the face of PEEP. Such central site-specific habituation and desensitization of the Hering-Breuer reflex provide a useful experimental model of nonassociative learning in mammals that is of particular significance in understanding respiratory rhythmogenesis and coupled-oscillator entrainment mechanisms, and in the clinical management of mechanical ventilation in

  11. Dyadic Instruction for Middle School Students: Liking Promotes Learning

    OpenAIRE

    Hartl, Amy C.; DeLay, Dawn; Laursen, Brett; Denner, Jill; Werner, Linda; Campe, Shannon; Ortiz, Eloy

    2015-01-01

    This study examines whether friendship facilitates or hinders learning in a dyadic instructional setting. Working in 80 same-sex pairs, 160 (60 girls, 100 boys) middle school students (M = 12.13 years old) were taught a new computer programming language and programmed a game. Students spent 14 to 30 (M = 22.7) hours in a programming class. At the beginning and the end of the project, each participant separately completed (a) computer programming knowledge assessments and (b) questionnaires ra...

  12. Can Creative Podcasting Promote Deep Learning? The Use of Podcasting for Learning Content in an Undergraduate Science Unit

    Science.gov (United States)

    Pegrum, Mark; Bartle, Emma; Longnecker, Nancy

    2015-01-01

    This paper examines the effect of a podcasting task on the examination performance of several hundred first-year chemistry undergraduate students. Educational researchers have established that a deep approach to learning that promotes active understanding of meaning can lead to better student outcomes, higher grades and superior retention of…

  13. Morphine Reward Promotes Cue-Sensitive Learning: Implication of Dorsal Striatal CREB Activity

    Directory of Open Access Journals (Sweden)

    Mathieu Baudonnat

    2017-05-01

    Full Text Available Different parallel neural circuits interact and may even compete to process and store information: whereas stimulus–response (S–R) learning critically depends on the dorsal striatum (DS), spatial memory relies on the hippocampus (HPC). Strikingly, despite its potential importance for our understanding of addictive behaviors, the impact of drug rewards on memory system dynamics has not been extensively studied. Here, we assessed the long-term effects of drug vs food reinforcement on the subsequent use of S–R vs spatial learning strategies and their neural substrates. Mice were trained in a Y-maze cue-guided task, during which either food or morphine injections into the ventral tegmental area (VTA) were used as rewards. Although drug- and food-reinforced mice learned the Y-maze task equally well, drug-reinforced mice exhibited a preferential use of an S–R learning strategy when tested in a water-maze competition task designed to dissociate cue-based and spatial learning. This cognitive bias was associated with a persistent increase in the phosphorylated form of cAMP response element-binding protein (pCREB) within the DS, and a decrease of pCREB expression in the HPC. Pharmacological inhibition of the striatal PKA pathway in drug-rewarded mice limited the morphine-induced increase in levels of pCREB in the DS and restored a balanced use of spatial vs cue-based learning. Our findings suggest that drug (opiate) reward biases the engagement of separate memory systems toward a predominant use of the cue-dependent system via an increase in learning-related striatal pCREB activity. A persistent functional imbalance between striatal and hippocampal activity could contribute to the persistence of addictive behaviors, or counteract the efficiency of pharmacological or psychotherapeutic treatments.

  14. Organic cation transporter-mediated ergothioneine uptake in mouse neural progenitor cells suppresses proliferation and promotes differentiation into neurons.

    Directory of Open Access Journals (Sweden)

    Takahiro Ishimoto

    Full Text Available The aim of the present study is to clarify the functional expression and physiological role in neural progenitor cells (NPCs) of the carnitine/organic cation transporter OCTN1/SLC22A4, which accepts the naturally occurring food-derived antioxidant ergothioneine (ERGO) as a substrate in vivo. Real-time PCR analysis revealed that mRNA expression of OCTN1 was much higher than that of other organic cation transporters in cultured mouse cortical NPCs. Immunocytochemical analysis showed colocalization of OCTN1 with the NPC marker nestin in cultured NPCs and in mouse embryonic carcinoma P19 cells differentiated into neural progenitor-like cells (P19-NPCs). These cells exhibited time-dependent [3H]ERGO uptake. These results demonstrate that OCTN1 is functionally expressed in murine NPCs. Cultured NPCs and P19-NPCs formed neurospheres from clusters of proliferating cells in a culture time-dependent manner. Exposure of cultured NPCs to ERGO or other antioxidants (edaravone and ascorbic acid) led to a significant decrease in the area of neurospheres with concomitant elimination of intracellular reactive oxygen species. Transfection of P19-NPCs with small interfering RNA for OCTN1 markedly promoted formation of neurospheres with a concomitant decrease of [3H]ERGO uptake. On the other hand, exposure of cultured NPCs to ERGO markedly increased the number of cells immunoreactive for the neuronal marker βIII-tubulin, but decreased the number immunoreactive for the astroglial marker glial fibrillary acidic protein (GFAP), with concomitant up-regulation of the neuronal differentiation activator gene Math1. Interestingly, edaravone and ascorbic acid did not affect such differentiation of NPCs, in contrast to the case of proliferation. Knockdown of OCTN1 increased the number of cells immunoreactive for GFAP, but decreased the number immunoreactive for βIII-tubulin, with concomitant down-regulation of Math1 in P19-NPCs. Thus, OCTN1-mediated uptake of ERGO in NPCs inhibits

  15. Where's the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network.

    Directory of Open Access Journals (Sweden)

    Christoph Hartmann

    2015-12-01

    Full Text Available Even in the absence of sensory stimulation the brain is spontaneously active. This background "noise" seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing-dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural
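
The abstract's core ingredients (deterministic threshold units, STDP, synaptic normalization, and homeostatic intrinsic plasticity) can be sketched in a few lines. The network size, learning rates, target rate, and simplified update rules below are illustrative assumptions, not the SORN paper's actual model or parameters:

```python
import random

random.seed(0)

N, K = 20, 5            # units and average incoming connections (arbitrary)
ETA_STDP = 0.01         # STDP step size
ETA_IP = 0.01           # intrinsic-plasticity step size
H_TARGET = 0.1          # target firing rate

# sparse random weights and per-unit thresholds
w = [[random.random() if random.random() < K / N else 0.0 for _ in range(N)]
     for _ in range(N)]
theta = [random.random() for _ in range(N)]

x_prev = [random.random() < H_TARGET for _ in range(N)]
for _ in range(500):
    # deterministic threshold units driven by recurrent input
    drive = [sum(w[i][j] * x_prev[j] for j in range(N)) for i in range(N)]
    x = [drive[i] > theta[i] for i in range(N)]
    for i in range(N):
        for j in range(N):
            if w[i][j] > 0.0:
                # STDP: potentiate when j fired just before i, depress the reverse
                w[i][j] += ETA_STDP * (int(x[i] and x_prev[j]) - int(x_prev[i] and x[j]))
                w[i][j] = max(w[i][j], 0.0)
        s = sum(w[i])
        if s > 0:
            w[i] = [v / s for v in w[i]]             # synaptic normalization
        theta[i] += ETA_IP * (int(x[i]) - H_TARGET)  # intrinsic plasticity
    x_prev = x

rate = sum(map(int, x_prev)) / N
print(round(rate, 2))
```

Despite having no noise source, the recurrent dynamics produce irregular, variable activity while homeostasis keeps firing rates near the target.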

  16. Towards a lifelong learning society through reading promotion: Opportunities and challenges for libraries and community learning centres in Viet Nam

    Science.gov (United States)

    Hossain, Zakir

    2016-04-01

    The government of Viet Nam has made a commitment to build a Lifelong Learning Society by 2020. A range of related initiatives have been launched, including the Southeast Asian Ministers of Education Organization Centre for Lifelong Learning (SEAMEO CELLL) and "Book Day" - a day aimed at encouraging reading and raising awareness of its importance for the development of knowledge and skills. Viet Nam also aims to implement lifelong learning (LLL) activities in libraries, museums, cultural centres and clubs. The government of Viet Nam currently operates more than 11,900 Community Learning Centres (CLCs) and is in the process of both renovating and innovating public libraries and museums throughout the country. In addition to the work undertaken by the Viet Nam government, a number of programmes have been initiated by non-governmental organisations and non-profit organisations to promote literacy and lifelong learning. This paper investigates some government initiatives focused on libraries and CLCs and their impact on reading promotion. Proposing a way forward, the paper confirms that Viet Nam's libraries and CLCs play an essential role in promoting reading and building a LLL Society.

  17. Transferring and generalizing deep-learning-based neural encoding models across subjects.

    Science.gov (United States)

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-08-01

    Recent studies have shown the value of using deep learning models for mapping and characterizing how the brain represents and organizes information for natural vision. However, modeling the relationship between deep learning models and the brain (or encoding models) requires measuring cortical responses to large and diverse sets of natural visual stimuli from single subjects. This requirement limits prior studies to a few subjects, making it difficult to generalize findings across subjects or to a population. In this study, we developed new methods to transfer and generalize encoding models across subjects. To train encoding models specific to a target subject, the models trained for other subjects were used as the prior models and were refined efficiently using Bayesian inference with a limited amount of data from the target subject. To train encoding models for a population, the models were progressively trained and updated with incremental data from different subjects. As a proof of principle, we applied these methods to functional magnetic resonance imaging (fMRI) data from three subjects watching tens of hours of naturalistic videos, while a deep residual neural network driven by image recognition was used to model visual cortical processing. Results demonstrate that the methods developed herein provide an efficient and effective strategy to establish both subject-specific and population-wide predictive models of cortical representations of high-dimensional and hierarchical visual features. Copyright © 2018 Elsevier Inc. All rights reserved.
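
In the simplest linear case, the Bayesian refinement described here reduces to shrinking the target subject's estimate toward a prior learned from other subjects. The one-dimensional sketch below (all names and numbers are hypothetical) shows the MAP estimate landing between the population prior and the data-only least-squares fit:

```python
import random

random.seed(1)

# hypothetical "population" prior learned from other subjects, and the
# target subject's true (unknown) response gain
w_prior, w_true, lam = 2.0, 2.5, 5.0

# limited data from the target subject
xs = [random.uniform(-1, 1) for _ in range(10)]
ys = [w_true * x + random.gauss(0, 0.1) for x in xs]

# MAP estimate: argmin_w  sum_i (y_i - w*x_i)^2 + lam*(w - w_prior)^2
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
w_map = (sxy + lam * w_prior) / (sxx + lam)

# ordinary least squares ignores the prior entirely
w_ols = sxy / sxx
print(round(w_map, 3), round(w_ols, 3))
```

The larger `lam` (the prior strength) is relative to the amount of target-subject data, the closer `w_map` stays to the population prior.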

  18. CAPES: Unsupervised Storage Performance Tuning Using Neural Network-Based Deep Reinforcement Learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Parameter tuning is an important task of storage performance optimization. Current practice usually involves numerous tweak-benchmark cycles that are slow and costly. To address this issue, we developed CAPES, a model-less deep reinforcement learning-based unsupervised parameter tuning system driven by a deep neural network (DNN). It is designed to find the optimal values of tunable parameters in computer systems, from a simple client-server system to a large data center, where human tuning can be costly and often cannot achieve optimal performance. CAPES takes periodic measurements of a target computer system's state, and trains a DNN which uses Q-learning to suggest changes to the system's current parameter values. CAPES is minimally intrusive, and can be deployed into a production system to collect training data and suggest tuning actions during the system's daily operation. Evaluation of a prototype on a Lustre system demonstrates an increase in I/O throughput of up to 45% at the saturation point. About the...
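
A minimal tabular sketch of the CAPES idea: Q-learning suggests moves over a single discrete knob of a simulated system, with measured throughput as the reward. The knob values, reward shape, and hyperparameters are invented for illustration; the real system trains a DNN on measurements from a production storage cluster:

```python
import random

random.seed(2)

# hypothetical tunable knob with a few discrete settings; the toy system's
# throughput peaks at 8 (the learner does not know this)
values = [1, 2, 4, 8, 16]

def throughput(v):
    return 100 - 10 * abs(v - 8) + random.gauss(0, 1)  # noisy benchmark

# state = current setting index; actions = step down, stay, step up
Q = {(s, a): 0.0 for s in range(len(values)) for a in (-1, 0, 1)}
alpha, gamma, eps = 0.3, 0.9, 0.2

s = 0
for _ in range(3000):
    if random.random() < eps:                        # explore
        a = random.choice((-1, 0, 1))
    else:                                            # exploit
        a = max((-1, 0, 1), key=lambda act: Q[(s, act)])
    s2 = min(max(s + a, 0), len(values) - 1)
    r = throughput(values[s2])                       # "periodic measurement"
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in (-1, 0, 1)) - Q[(s, a)])
    s = s2

# greedy walk from the worst setting follows the learned suggestions
s = 0
for _ in range(10):
    s = min(max(s + max((-1, 0, 1), key=lambda act: Q[(s, act)]), 0), len(values) - 1)
print(values[s])
```

The learned Q-values assign the highest state value to the best-performing setting, so greedy suggestions drift toward it.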

  19. Classification of amyotrophic lateral sclerosis disease based on convolutional neural network and reinforcement sample learning algorithm.

    Science.gov (United States)

    Sengur, Abdulkadir; Akbulut, Yaman; Guo, Yanhui; Bajaj, Varun

    2017-12-01

    Electromyogram (EMG) signals contain useful information about neuromuscular diseases such as amyotrophic lateral sclerosis (ALS). ALS is a well-known brain disease, which can progressively degenerate the motor neurons. In this paper, we propose a deep learning based method for efficient classification of ALS and normal EMG signals. Spectrogram, continuous wavelet transform (CWT), and smoothed pseudo Wigner-Ville distribution (SPWVD) have been employed for time-frequency (T-F) representation of EMG signals. A convolutional neural network (CNN) is employed to classify these features. The CNN architecture comprises two convolution layers, two pooling layers, a fully connected layer, and a loss function layer. The CNN is trained with the reinforcement sample learning strategy. The efficiency of the proposed implementation is tested on a publicly available EMG dataset. The dataset contains 89 ALS and 133 normal EMG signals with 24 kHz sampling frequency. Experimental results show 96.80% accuracy. The obtained results are also compared with other methods, which show the superiority of the proposed method.
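
The described layer stack (convolution, pooling, a dense stage) can be illustrated with a hand-rolled forward pass on a toy time-frequency "image". The kernel and input below are contrived so the arithmetic is easy to follow; nothing here reflects the paper's trained weights:

```python
# two conv+pool stages on a toy 8x8 time-frequency "image", pure Python
def conv2d(img, ker):
    n, m, s = len(img), len(img[0]), len(ker)
    return [[sum(ker[a][b] * img[i + a][j + b] for a in range(s) for b in range(s))
             for j in range(m - s + 1)] for i in range(n - s + 1)]

def relu(img):
    return [[max(0.0, v) for v in row] for row in img]

def maxpool2(img):
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, len(img[0]) - 1, 2)] for i in range(0, len(img) - 1, 2)]

# bright horizontal band at "frequency" row 3, and a horizontal-edge kernel
img = [[1.0 if i == 3 else 0.0 for _ in range(8)] for i in range(8)]
edge = [[1.0, 1.0], [-1.0, -1.0]]

h = maxpool2(relu(conv2d(img, edge)))   # conv1 + pool1 -> 3x3
h = maxpool2(relu(conv2d(h, edge)))     # conv2 + pool2 -> 1x1
score = sum(sum(row) for row in h)      # stand-in for the fully connected layer
print(score)                            # prints 4.0
```

Each stage shrinks the map while keeping the evidence of the band, which is exactly the feature-extraction behaviour the abstract attributes to the CNN.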

  20. Incipient fault detection and identification in process systems using accelerating neural network learning

    International Nuclear Information System (INIS)

    Parlos, A.G.; Muthusami, J.; Atiya, A.F.

    1994-01-01

    The objective of this paper is to present the development and numerical testing of a robust fault detection and identification (FDI) system using artificial neural networks (ANNs), for incipient (slowly developing) faults occurring in process systems. The challenge in using ANNs in FDI systems arises because of one's desire to detect faults of varying severity, faults from noisy sensors, and multiple simultaneous faults. To address these issues, it becomes essential to have a learning algorithm that ensures quick convergence to a high level of accuracy. A recently developed accelerated learning algorithm, namely a form of an adaptive back propagation (ABP) algorithm, is used for this purpose. The ABP algorithm is used for the development of an FDI system for a process composed of a direct current motor, a centrifugal pump, and the associated piping system. Simulation studies indicate that the FDI system has significantly high sensitivity to incipient fault severity, while exhibiting insensitivity to sensor noise. For multiple simultaneous faults, the FDI system detects the fault with the predominant signature. The major limitation of the developed FDI system is encountered when it is subjected to simultaneous faults with similar signatures. During such faults, the inherent limitation of pattern-recognition-based FDI methods becomes apparent. Thus, alternate, more sophisticated FDI methods become necessary to address such problems. Even though the effectiveness of pattern-recognition-based FDI methods using ANNs has been demonstrated, further testing using real-world data is necessary.
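
The record does not give the ABP update rule, so the sketch below substitutes the classic "bold driver" learning-rate adaptation (grow the rate while the error falls, halve it after an overshoot) on a toy one-sensor fault-detection task; the data and constants are invented:

```python
import random
from math import exp

random.seed(3)

# toy one-sensor "fault" data: label 1 when the reading exceeds 0.3
data = [(x, 1.0 if x > 0.3 else 0.0)
        for x in (random.uniform(-1, 1) for _ in range(40))]

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def loss(w, b):
    return sum((sigmoid(w * x + b) - y) ** 2 for x, y in data) / len(data)

w, b, lr = 0.0, 0.0, 0.5
start = prev = loss(w, b)
for _ in range(200):
    gw = gb = 0.0
    for x, y in data:                     # squared-error gradient through the sigmoid
        p = sigmoid(w * x + b)
        d = 2 * (p - y) * p * (1 - p)
        gw += d * x / len(data)
        gb += d / len(data)
    w -= lr * gw
    b -= lr * gb
    cur = loss(w, b)
    lr *= 1.05 if cur < prev else 0.5     # accelerate on progress, back off on overshoot
    prev = cur

print(round(start, 3), round(loss(w, b), 3))
```

The adaptive rate speeds up convergence relative to a fixed small step, which is the motivation the abstract gives for using an accelerated learning algorithm.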

  1. Mass detection in digital breast tomosynthesis data using convolutional neural networks and multiple instance learning.

    Science.gov (United States)

    Yousefi, Mina; Krzyżak, Adam; Suen, Ching Y

    2018-05-01

    Digital breast tomosynthesis (DBT) was developed in the field of breast cancer screening as a new tomographic technique to minimize the limitations of conventional digital mammography breast screening methods. A computer-aided detection (CAD) framework for mass detection in DBT has been developed and is described in this paper. The proposed framework operates on a set of two-dimensional (2D) slices. With plane-to-plane analysis on corresponding 2D slices from each DBT, it automatically learns complex patterns of 2D slices through a deep convolutional neural network (DCNN). It then applies multiple instance learning (MIL) with a randomized trees approach to classify DBT images based on extracted information from 2D slices. This CAD framework was developed and evaluated using 5040 2D image slices derived from 87 DBT volumes. The empirical results demonstrate that this proposed CAD framework achieves much better performance than CAD systems that use hand-crafted features and deep cardinality-restricted Boltzmann machines to detect masses in DBTs. Copyright © 2018 Elsevier Ltd. All rights reserved.
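
The MIL step can be illustrated with the common max-pooling aggregation rule, under which a DBT volume (the bag) is called positive if any slice (instance) looks suspicious. The per-slice scores below are hypothetical, and the paper's actual aggregator uses randomized trees rather than a bare max:

```python
# max-pooling MIL: a bag is positive if its most suspicious instance is
def bag_score(instance_scores):
    return max(instance_scores)

def classify_bag(instance_scores, threshold=0.5):
    return 1 if bag_score(instance_scores) > threshold else 0

# hypothetical per-slice mass probabilities from an upstream slice detector
volume_with_mass = [0.1, 0.2, 0.9, 0.3]   # one suspicious slice
volume_without = [0.1, 0.2, 0.3, 0.2]

print(classify_bag(volume_with_mass), classify_bag(volume_without))
```

This matches the MIL setting of the paper: only volume-level labels are needed for training, even though evidence lives in individual slices.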

  2. Neural robust stabilization via event-triggering mechanism and adaptive learning technique.

    Science.gov (United States)

    Wang, Ding; Liu, Derong

    2018-06-01

    The robust control synthesis of continuous-time nonlinear systems with an uncertain term is investigated via an event-triggering mechanism and the adaptive critic learning technique. We mainly focus on combining the event-triggering mechanism with adaptive critic designs, so as to solve the nonlinear robust control problem. This can not only make better use of computation and communication resources, but also conduct controller design from the view of intelligent optimization. Through theoretical analysis, the nonlinear robust stabilization can be achieved by obtaining an event-triggered optimal control law of the nominal system with a newly defined cost function and a certain triggering condition. The adaptive critic technique is employed to facilitate the event-triggered control design, where a neural network is introduced as an approximator of the learning phase. The performance of the event-triggered robust control scheme is validated via simulation studies and comparisons. The present method extends the application domain of both event-triggered control and adaptive critic control to nonlinear systems possessing dynamical uncertainties. Copyright © 2018 Elsevier Ltd. All rights reserved.
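
A minimal sketch of the event-triggering mechanism itself, on a scalar linear system: the controller reuses the last sampled state and re-samples only when the triggering condition fires, so control updates are far sparser than simulation steps. All gains and thresholds are invented for illustration and have nothing to do with the paper's adaptive critic design:

```python
# scalar plant x' = a*x + b*u with feedback u = -k*x_hat, where x_hat is the
# state sampled only when the event-triggering condition fires
a, b, k = 1.0, 1.0, 2.0             # invented plant and controller gains
dt, steps, threshold = 0.01, 1000, 0.05

x, x_hat, updates = 1.0, 1.0, 0
for _ in range(steps):
    if abs(x - x_hat) > threshold:  # triggering condition: sampling error too large
        x_hat = x                   # sample the state, update the controller
        updates += 1
    u = -k * x_hat
    x += dt * (a * x + b * u)       # Euler step of the closed loop

print(round(x, 3), updates)
```

The state is driven into a small neighbourhood of the origin while the controller communicates with the plant only a few dozen times in a thousand steps, which is the resource saving the abstract refers to.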

  3. Promoting sustainable living in the borderless world through blended learning platforms

    Directory of Open Access Journals (Sweden)

    Khar Thoe Ng

    2013-11-01

    Full Text Available Student-centred learning approaches like collaborative learning are needed to facilitate meaningful learning among self-motivated lifelong learners within educational institutions through interorganizational Open and Distance Learning (ODL) approaches. The purpose of this study is to develop blended learning platforms to promote sustainable living, building on an e-hub with sub-portals in SEARCH to facilitate activities such as "Education for Sustainable Development" (ESD), webinars, authentic learning, and the role of m-/e-learning. Survey questionnaires and a mixed-methods research approach with mixed modes of data analysis were used, including survey findings on in-service teachers' understanding of and attitudes towards ESD and three essential skills for sustainable living. Case studies were reported on the telecollaborative project "Disaster Risk Reduction Education" (DR RED) in Malaysia, Germany and the Philippines. These activities were organized internationally to facilitate communication through e-platforms among participants across national borders, using digital tools to build relationships, promote students' Higher Order Thinking (HOT) skills and nurture their innate ability to learn independently.

  4. An Attentional Goldilocks Effect: An Optimal Amount of Social Interactivity Promotes Word Learning from Video.

    Science.gov (United States)

    Nussenbaum, Kate; Amso, Dima

    2016-01-01

    Television can be a powerful education tool; however, content-makers must understand the factors that engage attention and promote learning from screen media. Prior research suggests that social engagement is critical for learning and that interactivity may enhance the educational quality of children's media. The present study examined the effects of increasing the social interactivity of television on children's visual attention and word learning. Three- to 5-year-old (M age = 4;5 years, SD = 9 months) children completed a task in which they viewed videos of an actress teaching them the Swahili label for an on-screen image. Each child viewed these video clips in four conditions that parametrically manipulated social engagement and interactivity. We then tested whether each child had successfully learned the Swahili labels. Though 5-year-old children were able to learn words in all conditions, we found that there was an optimal level of social engagement that best supported learning for all participants, defined by engaging the child but not distracting from word labeling. Our eye-tracking data indicated that children in this condition spent more time looking at the target image and less time looking at the actress's face as compared to the most interactive condition. These findings suggest that social interactivity is critical to engaging attention and promoting learning from screen media up until a certain point, after which social stimuli may draw attention away from target images and impair children's word learning.

  5. Supporting inquiry learning by promoting normative understanding of multivariable causality

    Science.gov (United States)

    Keselman, Alla

    2003-11-01

    Early adolescents may lack the cognitive and metacognitive skills necessary for effective inquiry learning. In particular, they are likely to have a nonnormative mental model of multivariable causality in which effects of individual variables are neither additive nor consistent. Described here is a software-based intervention designed to facilitate students' metalevel and performance-level inquiry skills by enhancing their understanding of multivariable causality. Relative to an exploration-only group, sixth graders who practiced predicting an outcome (earthquake risk) based on multiple factors demonstrated increased attention to evidence, improved metalevel appreciation of effective strategies, and a trend toward consistent use of a controlled comparison strategy. Sixth graders who also received explicit instruction in making predictions based on multiple factors showed additional improvement in their ability to compare multiple instances as a basis for inferences and constructed the most accurate knowledge of the system. Gains were maintained in transfer tasks. The cognitive skills and metalevel understanding examined here are essential to inquiry learning.

  6. The ageing phenome: caloric restriction and hormones promote neural cell survival, growth, and de-differentiation.

    Science.gov (United States)

    Timiras, Paola S; Yaghmaie, Farzin; Saeed, Omar; Thung, Elaine; Chinn, Garrett

    2005-01-01

    The phenome represents the observable properties of an organism that have developed under the continued influences of both genome and environmental factors. Phenotypic properties are expressed through the functions of cells, organs and body systems that operate optimally, close to equilibrium. In complex organisms, maintenance of the equilibrium is achieved by the interplay of several regulatory mechanisms. In the elderly, dynamic instability may lead to progressive loss of normal function, failure of adaptation and increased pathology. Extensive research (reported elsewhere in this journal) has demonstrated that genetic manipulations of endocrine signaling in flies, worms and mice increase longevity. Another effective strategy for prolonging the lifespan is caloric restriction: in data presented here, the persistence of estrogen-sensitive cells in the hypothalamus of caloric restricted 22-month-old female mice, may explain the persistence of reproductive function at an age, when reproductive function has long ceased in ad libitum fed controls. Still another strategy utilizes the effects of epidermal growth factor (EGF) to promote in vitro proliferation of neuroglia, astrocytes and oligodendrocytes. Their subsequent de-differentiation generates immature precursor cells potentially capable of differentiating into neuroblasts and neurons. These and other examples suggest that, in terms of functional outcomes, "the genome proposes but the phenome disposes".

  7. Evaluating the Visualization of What a Deep Neural Network Has Learned.

    Science.gov (United States)

    Samek, Wojciech; Binder, Alexander; Montavon, Gregoire; Lapuschkin, Sebastian; Muller, Klaus-Robert

    Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision, given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the "importance" of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel/input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper, we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets. Our main result is that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of the neural network performance.
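
The region-perturbation idea can be sketched directly: remove "pixels" in the order a heatmap ranks them and track how fast the classifier score drops; a faithful ranking should produce a larger mean score drop than a random one. The toy linear classifier below (for which exact relevances are available) is an assumption made purely for illustration:

```python
import random

random.seed(4)

# toy linear "classifier" over 64 pixels; for a linear model the exact
# contribution (relevance) of pixel i to the score is weights[i] * image[i]
weights = [random.uniform(-1, 1) for _ in range(64)]
image = [random.uniform(0, 1) for _ in range(64)]

def score(img):
    return sum(w * p for w, p in zip(weights, img))

relevance = [w * p for w, p in zip(weights, image)]

def mean_drop(order):
    # perturb pixels in the given order, averaging the score drop (AOPC-style)
    img = list(image)
    base = score(img)
    drops = []
    for idx in order:
        img[idx] = 0.0
        drops.append(base - score(img))
    return sum(drops) / len(drops)

by_relevance = sorted(range(64), key=lambda i: -relevance[i])
rand_order = list(range(64))
random.shuffle(rand_order)

aopc_heatmap = mean_drop(by_relevance)
aopc_random = mean_drop(rand_order)
print(aopc_heatmap > aopc_random)
```

Removing the most relevant pixels first maximizes every prefix of the cumulative drop, so a faithful heatmap always scores at least as high as a random ordering under this measure.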

  8. Using repetitive transcranial magnetic stimulation to study the underlying neural mechanisms of human motor learning and memory.

    Science.gov (United States)

    Censor, Nitzan; Cohen, Leonardo G

    2011-01-01

    In the last two decades, there has been a rapid development in the research of the physiological brain mechanisms underlying human motor learning and memory. While conventional memory research performed on animal models uses intracellular recordings, microinfusion of protein inhibitors to specific brain areas and direct induction of focal brain lesions, human research has so far utilized predominantly behavioural approaches and indirect measurements of neural activity. Repetitive transcranial magnetic stimulation (rTMS), a safe non-invasive brain stimulation technique, enables the study of the functional role of specific cortical areas by evaluating the behavioural consequences of selective modulation of activity (excitation or inhibition) on memory generation and consolidation, contributing to the understanding of the neural substrates of motor learning. Depending on the parameters of stimulation, rTMS can also facilitate learning processes, presumably through purposeful modulation of excitability in specific brain regions. rTMS has also been used to gain valuable knowledge regarding the timeline of motor memory formation, from initial encoding to stabilization and long-term retention. In this review, we summarize insights gained using rTMS on the physiological and neural mechanisms of human motor learning and memory. We conclude by suggesting possible future research directions, some with direct clinical implications.

  9. Superior Generalization Capability of Hardware-Learning Algorithm Developed for Self-Learning Neuron-MOS Neural Networks

    Science.gov (United States)

    Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro

    1995-02-01

    We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.
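
The benchmark task itself is easy to state in code: a plain symmetry check like the one below (limited to vertical and horizontal axes for brevity) defines the labels that the HBP-trained network has to learn from examples; the pixel arrays are invented for illustration:

```python
# check whether a 4x4 binary pixel array has a vertical or horizontal
# mirror-symmetry axis -- the task posed to the HBP-trained network
def has_vertical_axis(p):
    return all(p[i][j] == p[i][3 - j] for i in range(4) for j in range(4))

def has_horizontal_axis(p):
    return all(p[i][j] == p[3 - i][j] for i in range(4) for j in range(4))

symmetric = [[0, 1, 1, 0],
             [1, 0, 0, 1],
             [0, 0, 0, 0],
             [1, 1, 1, 1]]   # mirror-symmetric about the vertical axis

asymmetric = [[0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 1],
              [1, 1, 1, 1]]

print(has_vertical_axis(symmetric), has_vertical_axis(asymmetric))
```

The network, of course, never sees these rules; it must infer them from labeled 4x4 patterns, which is what makes the task a useful generalization benchmark.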

  10. Application of deep learning in determining IR precipitation occurrence: a Convolutional Neural Network model

    Science.gov (United States)

    Wang, C.; Hong, Y.

    2017-12-01

    Infrared (IR) information from geostationary satellites can be used to retrieve precipitation at high spatiotemporal resolutions. Traditional artificial intelligence (AI) methodologies, such as artificial neural networks (ANN), have been designed to build the relationship between near-surface precipitation and manually derived IR features in products including PERSIANN and PERSIANN-CCS. This study builds an automatic precipitation detection model based on IR data using a Convolutional Neural Network (CNN), implemented with the newly developed deep learning framework Caffe. The model judges whether there is rain or no rain at the pixel level. Compared with traditional ANN methods, CNN can extract features from the raw data automatically and thoroughly. In this study, IR data from GOES satellites and precipitation estimates from the next-generation QPE (Q2) over the central United States are used as inputs and labels, respectively. The whole dataset for the study period (June to August 2012) is randomly partitioned into three subsets (train, validation and test) to establish the model at a spatial resolution of 0.08°×0.08° and a temporal resolution of 1 hour. The experiments show substantial improvements of CNN in rain identification compared to the widely used IR-based precipitation product, PERSIANN-CCS. The overall gain in performance is about 30% for critical success index (CSI), 32% for probability of detection (POD) and 12% for false alarm ratio (FAR). Compared to other recent IR-based precipitation retrieval methods (e.g., PERSIANN-DL developed by the University of California, Irvine), our model is simpler with fewer parameters, but achieves equally good or even better results. CNN has been applied successfully in the computer vision domain, and our results show that the method is suitable for IR precipitation detection. Future studies can expand the application of CNN from precipitation occurrence decisions to precipitation amount retrieval.
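
The reported verification scores have standard contingency-table definitions: with hits h, misses m, and false alarms f, POD = h/(h+m), FAR = f/(h+f), and CSI = h/(h+m+f). A small sketch with made-up rain/no-rain vectors:

```python
# verification scores for rain/no-rain detection, from a contingency table
def scores(pred, obs):
    hits = sum(p and o for p, o in zip(pred, obs))
    misses = sum((not p) and o for p, o in zip(pred, obs))
    false_alarms = sum(p and (not o) for p, o in zip(pred, obs))
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    return pod, far, csi

obs  = [1, 1, 1, 1, 0, 0, 0, 0]   # made-up observed rain flags
pred = [1, 1, 1, 0, 1, 0, 0, 0]   # made-up model predictions
pod, far, csi = scores(pred, obs)
print(pod, far, csi)
```

Here hits = 3, misses = 1, and false alarms = 1, giving POD = 0.75, FAR = 0.25, CSI = 0.6; correct rejections do not enter any of the three scores.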

  11. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

    Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle stage, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...
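
A highway layer mixes a transformed path with an untouched carry path through a learned gate, y = t*H(x) + (1-t)*x with t = sigmoid(Wt*x + bt). The scalar sketch below uses invented weights to show the carry behaviour when the gate bias is very negative:

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

# highway layer: y = t * H(x) + (1 - t) * x, with transform gate t
def highway(x, w_h, b_h, w_t, b_t):
    h = max(0.0, w_h * x + b_h)     # transformed path (ReLU)
    t = sigmoid(w_t * x + b_t)      # transform gate in (0, 1)
    return t * h + (1 - t) * x      # gated mix of transform and carry

# with a very negative gate bias the layer passes its input through unchanged
y_carry = highway(0.7, w_h=2.0, b_h=0.1, w_t=0.0, b_t=-10.0)
print(round(y_carry, 3))            # prints 0.7
```

With `b_t=10.0` instead, the gate saturates the other way and the layer outputs the transformed value (here 1.5); this learnable interpolation between "copy" and "transform" is what lets highway modules pass Bi-RNN features through to the CNN stage without degradation.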

  12. Neural protein gamma-synuclein interacting with androgen receptor promotes human prostate cancer progression

    International Nuclear Information System (INIS)

    Chen, Junyi; Jiao, Li; Xu, Chuanliang; Yu, Yongwei; Zhang, Zhensheng; Chang, Zheng; Deng, Zhen; Sun, Yinghao

    2012-01-01

    Gamma-synuclein (SNCG) has previously been demonstrated to be significantly correlated with metastatic malignancies; however, an in-depth investigation of SNCG in prostate cancer is still lacking. In the present study, we evaluated the role of SNCG in prostate cancer progression and explored the underlying mechanisms. First, SNCG expression was altered in the LNCaP cell line to test its effect on cellular properties in vitro and in vivo, with and without androgen exposure. Subsequently, dual-luciferase reporter assays were performed to evaluate whether the role of SNCG in LNCaP cells is mediated through AR signaling. Finally, the association between SNCG and prostate cancer progression was assessed immunohistochemically using a series of human prostate tissues. Silencing SNCG by siRNA in LNCaP cells contributed to the inhibition of cellular proliferation, the induction of cell-cycle arrest at the G1 phase, the suppression of cellular migration and invasion in vitro, as well as a decrease of tumor growth in vivo, with the notable exception of castrated mice. Subsequently, mechanistic studies indicated that SNCG is a novel androgen receptor (AR) coactivator: it interacts with AR and promotes prostate cancer cellular growth and proliferation by activating AR transcription in an androgen-dependent manner. Finally, immunohistochemical analysis revealed that SNCG was almost undetectable in benign or androgen-independent prostate lesions. High expression of SNCG is correlated with peripheral and lymph node invasion. Our data suggest that SNCG may serve as a biomarker for predicting human prostate cancer progression and metastasis. It may also become a novel target for biomedical therapy in advanced prostate cancer

  13. The Psychiatrist as Leader-Teacher: Promoting Learning Beyond Residency.

    Science.gov (United States)

    Waits, Wendi; Brent, Elizabeth

    2015-08-01

    In today's fast-paced, data-saturated, zero-tolerance practice environment, psychiatrists and other health care providers are expected to maintain clinical, fiscal, and administrative competence. The authors present a unique type of psychiatric leader—the leader-teacher—who incorporates teaching of these elements into day-to-day practice, enhancing lifelong learning for credentialed staff and increasing their confidence in managing complex clinical and administrative issues. Particular emphasis is placed on leader-teachers working in military environments. The article discusses the primary characteristics of this type of leader, including their tendency to (1) seek clarification, (2) distill information, (3) communicate guidance, and (4) catalogue products. The authors also address the advantages and disadvantages of being a leader-teacher and present several illustrative cases.

  14. Resveratrol promotes hUC-MSCs engraftment and neural repair in a mouse model of Alzheimer's disease.

    Science.gov (United States)

    Wang, Xinxin; Ma, Shanshan; Yang, Bo; Huang, Tuanjie; Meng, Nan; Xu, Ling; Xing, Qu; Zhang, Yanting; Zhang, Kun; Li, Qinghua; Zhang, Tao; Wu, Junwei; Yang, Greta Luyuan; Guan, Fangxia; Wang, Jian

    2018-02-26

    Mesenchymal stem cell transplantation is a promising therapeutic approach for Alzheimer's disease (AD). However, poor engraftment and limited survival rates are major obstacles for its clinical application. Resveratrol, an activator of silent information regulator 2, homolog 1 (SIRT1), regulates cell destiny and is beneficial for neurodegenerative disorders. The present study was designed to explore whether resveratrol regulates the fate of human umbilical cord-derived mesenchymal stem cells (hUC-MSCs) and whether hUC-MSCs combined with resveratrol would be efficacious in the treatment of neurodegeneration in a mouse model of AD through SIRT1 signaling. Herein, we report that resveratrol facilitates hUC-MSCs engraftment in the hippocampus of AD mice and enhances the therapeutic effects of hUC-MSCs in this model, as demonstrated by improved learning and memory in the Morris water maze, enhanced neurogenesis, and alleviated neural apoptosis in the hippocampus of the AD mice. Moreover, hUC-MSCs and resveratrol jointly regulate expression of hippocampal SIRT1, PCNA, p53, ac-p53, p21, and p16. These data strongly suggest that hUC-MSCs transplantation combined with resveratrol may be an effective therapy for AD. Copyright © 2017. Published by Elsevier B.V.

  15. Teaching and learning community work online: can e-learning promote competences for future practice?

    OpenAIRE

    Larsen, Anne Karin; Visser-Rotgans, Rina; Hole, Grete Oline

    2011-01-01

    This article presents a case study of an online course in Community Work and the learning outcomes for an international group of students participating in the course. Examples from the process of, and results from the development of virtual-learning material are presented. Finally, the students' learning experience and competences achieved by the use of innovative learning material and ICT communication tools are presented.

  16. The Journal of Learning Analytics: Supporting and Promoting Learning Analytics Research

    OpenAIRE

    Siemens, George

    2014-01-01

    The paper gives a brief overview of the main activities for the development of the emerging field of learning analytics led by the Society for Learning Analytics Research (SoLAR). The place of the Journal of Learning Analytics is identified. Analytics is the most significant new initiative of SoLAR.

  17. The "Journal of Learning Analytics": Supporting and Promoting Learning Analytics Research

    Science.gov (United States)

    Siemens, George

    2014-01-01

    The paper gives a brief overview of the main activities for the development of the emerging field of learning analytics led by the Society for Learning Analytics Research (SoLAR). The place of the "Journal of Learning Analytics" is identified. Analytics is the most significant new initiative of SoLAR.

  18. Promoting self-directed learning in simulation-based discovery learning environments through intelligent support.

    NARCIS (Netherlands)

    Veermans, K.H.; de Jong, Anthonius J.M.; van Joolingen, Wouter

    2000-01-01

    Providing learners with computer-generated feedback on their learning process in simulation-based discovery environments cannot be based on a detailed model of the learning process due to the “open” character of discovery learning. This paper describes a method for generating adaptive feedback for

  19. YF22 Model With On-Board On-Line Learning Microprocessors-Based Neural Algorithms for Autopilot and Fault-Tolerant Flight Control Systems

    National Research Council Canada - National Science Library

    Napolitano, Marcello

    2002-01-01

    This project focused on investigating the potential of on-line learning 'hardware-based' neural approximators and controllers to provide fault tolerance capabilities following sensor and actuator failures...

  20. THE DISTANCE EDUCATION TO PROMOTE CONTINUOUS LEARNING OF HEALTH PROFESSIONALS: REVIEW

    Directory of Open Access Journals (Sweden)

    Lívia Lima Ferraz

    2013-03-01

    Full Text Available. The results of many articles and studies show that employment plays an important role in continuous learning. The main factors that made this continuous education possible were advances in information technology and the flexibility of distance education. The evolution of on-line continuing education helps health care professionals develop many fundamental learning skills, such as self-assessment and self-criticism. Therefore, this article's objective is to identify how public policies could promote continuous learning of health professionals through distance education (DE) and the contributions of this education format to the transformation of health activities. In conclusion, the results showed that distance education was an important strategy for permanent education, because DE develops good learning skills and breaks territorial barriers. Distance education has therefore become an effective learning format.