WorldWideScience

Sample records for hebbian learning rule

  1. Hebbian learning and predictive mirror neurons for actions, sensations and emotions

    NARCIS (Netherlands)

    Keysers, C.; Gazzola, Valeria

    2014-01-01

    Spike-timing-dependent plasticity is considered the neurophysiological basis of Hebbian learning and has been shown to be sensitive to both contingency and contiguity between pre- and postsynaptic activity. Here, we will examine how applying this Hebbian learning rule to a system of interconnected neurons in the presence of direct or indirect re-afference (e.g. seeing/hearing one's own actions) predicts the emergence of mirror neurons with predictive properties.

  2. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    Science.gov (United States)

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
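
    The rule family analysed above can be illustrated with a minimal numerical sketch: a discrete-time tanh rate network whose weights receive a Hebbian increment plus a passive-forgetting (decay) term on a slower timescale, and whose local Jacobian links the synaptic graph to the dynamics. The network size, rates and timescale separation below are illustrative choices, not those of Siri et al.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100                                            # number of rate neurons
        W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))      # random recurrent weights
        x = rng.uniform(-1.0, 1.0, N)                      # neuronal states
        eps, lam = 1e-3, 1e-2                              # Hebbian rate and passive-forgetting rate

        for t in range(10000):
            x = np.tanh(W @ x)                             # fast neuronal dynamics
            if t % 100 == 0:                               # learning acts on a slower timescale
                W += eps * np.outer(x, x) - lam * W        # generic Hebbian term plus passive forgetting

        # Jacobian of the map x -> tanh(W x); its spectrum couples graph structure and dynamics
        u = np.tanh(W @ x)
        J = (1.0 - u**2)[:, None] * W
        print("spectral radius of the Jacobian:", np.abs(np.linalg.eigvals(J)).max())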

  3. Hebbian learning and predictive mirror neurons for actions, sensations and emotions

    OpenAIRE

    Keysers, C.; Gazzola, Valeria

    2014-01-01

    Spike-timing-dependent plasticity is considered the neurophysiological basis of Hebbian learning and has been shown to be sensitive to both contingency and contiguity between pre- and postsynaptic activity. Here, we will examine how applying this Hebbian learning rule to a system of interconnected neurons in the presence of direct or indirect re-afference (e.g. seeing/hearing one's own actions) predicts the emergence of mirror neurons with predictive properties. In this framework, we analyse ...

  4. Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation.

    Science.gov (United States)

    Brito, Carlos S N; Gerstner, Wulfram

    2016-09-01

    The development of sensory receptive fields has been modeled in the past by a variety of models including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
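
    The unifying principle can be written in one line: the weight change is the presynaptic input times a nonlinear function of the postsynaptic response, dw proportional to x f(w·x), together with some form of synaptic competition. A hedged sketch follows; the Gaussian inputs (a stand-in for whitened image patches), the cubic nonlinearity and the norm constraint are illustrative choices, not the authors' exact preprocessing or f.

        import numpy as np

        rng = np.random.default_rng(1)
        n_inputs, n_samples, eta = 64, 20000, 1e-3
        X = rng.standard_normal((n_samples, n_inputs))   # stand-in for whitened natural-image patches
        w = rng.standard_normal(n_inputs)
        w /= np.linalg.norm(w)

        def f(y):
            # nonlinearity applied to the postsynaptic response (illustrative choice: cubic)
            return y**3

        for x in X:
            y = w @ x
            w += eta * f(y) * x          # nonlinear Hebbian update: dw proportional to x f(w.x)
            w /= np.linalg.norm(w)       # norm constraint standing in for synaptic competition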

  5. Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation.

    Directory of Open Access Journals (Sweden)

    Carlos S N Brito

    2016-09-01

    Full Text Available The development of sensory receptive fields has been modeled in the past by a variety of models including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.

  6. Hebbian learning and predictive mirror neurons for actions, sensations and emotions.

    Science.gov (United States)

    Keysers, Christian; Gazzola, Valeria

    2014-01-01

    Spike-timing-dependent plasticity is considered the neurophysiological basis of Hebbian learning and has been shown to be sensitive to both contingency and contiguity between pre- and postsynaptic activity. Here, we will examine how applying this Hebbian learning rule to a system of interconnected neurons in the presence of direct or indirect re-afference (e.g. seeing/hearing one's own actions) predicts the emergence of mirror neurons with predictive properties. In this framework, we analyse how mirror neurons become a dynamic system that performs active inferences about the actions of others and allows joint actions despite sensorimotor delays. We explore how this system performs a projection of the self onto others, with egocentric biases to contribute to mind-reading. Finally, we argue that Hebbian learning predicts mirror-like neurons for sensations and emotions and review evidence for the presence of such vicarious activations outside the motor system.

  7. Hebbian errors in learning: an analysis using the Oja model.

    Science.gov (United States)

    Rădulescu, Anca; Cox, Kingsley; Adams, Paul

    2009-06-21

    Recent work on long term potentiation in brain slices shows that Hebb's rule is not completely synapse-specific, probably due to intersynapse diffusion of calcium or other factors. We previously suggested that such errors in Hebbian learning might be analogous to mutations in evolution. We examine this proposal quantitatively, extending the classical Oja unsupervised model of learning by a single linear neuron to include Hebbian inspecificity. We introduce an error matrix E, which expresses possible crosstalk between updating at different connections. When there is no inspecificity, this gives the classical result of convergence to the first principal component of the input distribution (PC1). We show the modified algorithm converges to the leading eigenvector of the matrix EC, where C is the input covariance matrix. In the most biologically plausible case when there are no intrinsically privileged connections, E has diagonal elements Q and off-diagonal elements (1-Q)/(n-1), where Q, the quality, is expected to decrease with the number of inputs n and with a synaptic parameter b that reflects synapse density, calcium diffusion, etc. We study the dependence of the learning accuracy on b, n and the amount of input activity or correlation (analytically and computationally). We find that inaccuracy increases (learning becomes gradually less useful) with increases in b, particularly for intermediate (i.e., biologically realistic) correlation strength, although some useful learning always occurs up to the trivial limit Q=1/n. We discuss the relation of our results to Hebbian unsupervised learning in the brain. When the mechanism lacks specificity, the network fails to learn the expected, and typically most useful, result, especially when the input correlation is weak. Hebbian crosstalk would reflect the very high density of synapses along dendrites, and inevitably degrades learning.
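
    A hedged numerical sketch of the extended Oja model sketched above: the error matrix E has diagonal Q and off-diagonal (1-Q)/(n-1), and, consistent with the stated result, the weight vector should approach the leading eigenvector of EC rather than of C. Applying E to the Hebbian term only, and the particular Q, n and covariance used, are illustrative choices.

        import numpy as np

        rng = np.random.default_rng(2)
        n, Q, eta = 10, 0.6, 1e-3

        # crosstalk (error) matrix: diagonal Q, off-diagonal (1 - Q)/(n - 1)
        E = np.full((n, n), (1.0 - Q) / (n - 1))
        np.fill_diagonal(E, Q)

        A = rng.standard_normal((n, n))
        C = A @ A.T / n                                  # an arbitrary input covariance matrix
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)

        for x in rng.multivariate_normal(np.zeros(n), C, size=100000):
            y = w @ x
            w += eta * (E @ (y * x) - y**2 * w)          # Oja update with crosstalk on the Hebbian term

        def leading_eigvec(M):
            vals, vecs = np.linalg.eig(M)
            return np.real(vecs[:, np.argmax(np.real(vals))])

        for name, M in (("EC", E @ C), ("C", C)):
            v = leading_eigvec(M)
            print(f"|cos(w, PC1 of {name})| = {abs(w @ v) / np.linalg.norm(w):.3f}")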

  8. Associative (not Hebbian) learning and the mirror neuron system.

    Science.gov (United States)

    Cooper, Richard P; Cook, Richard; Dickinson, Anthony; Heyes, Cecilia M

    2013-04-12

    The associative sequence learning (ASL) hypothesis suggests that sensorimotor experience plays an inductive role in the development of the mirror neuron system, and that it can play this crucial role because its effects are mediated by learning that is sensitive to both contingency and contiguity. The Hebbian hypothesis proposes that sensorimotor experience plays a facilitative role, and that its effects are mediated by learning that is sensitive only to contiguity. We tested the associative and Hebbian accounts by computational modelling of automatic imitation data indicating that MNS responsivity is reduced more by contingent and signalled than by non-contingent sensorimotor training (Cook et al. [7]). Supporting the associative account, we found that the reduction in automatic imitation could be reproduced by an existing interactive activation model of imitative compatibility when augmented with Rescorla-Wagner learning, but not with Hebbian or quasi-Hebbian learning. The work argues for an associative, but against a Hebbian, account of the effect of sensorimotor training on automatic imitation. We argue, by extension, that associative learning is potentially sufficient for MNS development. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
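
    The distinction being tested can be reduced to two update rules: a Hebbian update strengthens a sensorimotor link whenever observation and execution are co-active (contiguity only), whereas a Rescorla-Wagner update is driven by the prediction error and therefore tracks contingency. A minimal sketch of that contrast (not the authors' full interactive activation model); the 50% co-activation schedule is an illustrative non-contingent training regime.

        import numpy as np

        def hebbian_update(w, pre, post, eta=0.1):
            # contiguity only: co-activation always strengthens the association
            return w + eta * pre * post

        def rescorla_wagner_update(w, pre, post, eta=0.1):
            # contingency-sensitive: learning is driven by the prediction error
            return w + eta * pre * (post - w * pre)

        rng = np.random.default_rng(3)
        w_hebb = w_rw = 0.0
        for _ in range(500):
            pre = 1.0                              # observed action present on every trial
            post = float(rng.random() < 0.5)       # executed action on only half the trials (non-contingent)
            w_hebb = hebbian_update(w_hebb, pre, post)
            w_rw = rescorla_wagner_update(w_rw, pre, post)

        print(f"Hebbian weight (grows with co-occurrence): {w_hebb:.2f}")
        print(f"Rescorla-Wagner weight (tracks contingency, ~0.5): {w_rw:.2f}")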

  9. Dynamic Hebbian Cross-Correlation Learning Resolves the Spike Timing Dependent Plasticity Conundrum

    Directory of Open Access Journals (Sweden)

    Tjeerd V. olde Scheper

    2018-01-01

    Full Text Available Spike Timing-Dependent Plasticity has been found to assume many different forms. The classic STDP curve, with one potentiating and one depressing window, is only one of many possible curves that describe synaptic learning using the STDP mechanism. It has been shown experimentally that STDP curves may contain multiple LTP and LTD windows of variable width, and even inverted windows. The underlying STDP mechanism that is capable of producing such an extensive, and apparently incompatible, range of learning curves is still under investigation. In this paper, it is shown that STDP originates from a combination of two dynamic Hebbian cross-correlations of local activity at the synapse. The correlation of the presynaptic activity with the local postsynaptic activity is a robust and reliable indicator of the discrepancy between the presynaptic neuron and the postsynaptic neuron's activity. The second correlation is between the local postsynaptic activity and dendritic activity, which is a good indicator of matching local synaptic and dendritic activity. We show that this simple time-independent learning rule can give rise to many forms of the STDP learning curve. The rule regulates synaptic strength without the need for spike matching or other supervisory learning mechanisms. Local differences in dendritic activity at the synapse greatly affect the cross-correlation difference which determines the relative contributions of different neural activity sources. Dendritic activity due to nearby synapses, action potentials, both forward and back-propagating, as well as inhibitory synapses will dynamically modify the local activity at the synapse, and the resulting STDP learning rule. The dynamic Hebbian learning rule ensures, furthermore, that the resulting synaptic strength is dynamically stable, and that interactions between synapses do not result in local instabilities. The rule clearly demonstrates that synapses function as independent localized ...

  10. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models

    Directory of Open Access Journals (Sweden)

    Alexander Hanuschkin

    2013-06-01

    Full Text Available Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: Random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, as they allow for imitating arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including necessity of sensorimotor gating and selectivity of auditory responses to bird’s own song (BOS) stimuli.

  11. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models.

    Science.gov (United States)

    Hanuschkin, A; Ganguli, S; Hahnloser, R H R

    2013-01-01

    Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including necessity of sensorimotor gating and selectivity of auditory responses to bird's own song (BOS) stimuli.
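
    A hedged sketch of the ingredients named in the abstract: sensory-to-motor weights are strengthened by a Hebbian product of re-afferent sensory activity and the (delayed) motor activity that caused it, while heterosynaptic competition bounds the summed weight onto each motor neuron. Here the eligibility trace is replaced by explicitly pairing the delayed motor command with its sensory consequence; the dimensions, delay, forward map and normalization are illustrative, not those of Hanuschkin et al.

        import numpy as np

        rng = np.random.default_rng(4)
        n_motor, n_sensory, delay, eta = 20, 30, 2, 1e-2
        F = rng.random((n_sensory, n_motor))     # fixed forward map: motor command -> sensory consequence
        W = np.zeros((n_motor, n_sensory))       # plastic sensory-to-motor weights (the inverse model)

        motor_buffer = [np.zeros(n_motor) for _ in range(delay)]
        for t in range(5000):
            m = rng.random(n_motor)              # random motor exploration (a "random motor code")
            motor_buffer.append(m)
            m_past = motor_buffer.pop(0)         # the command executed `delay` steps ago
            s = F @ m_past                       # its re-afferent sensory consequence arrives now
            W += eta * np.outer(m_past, s)       # Hebbian pairing of cause (past motor) and effect (sensory)
            W /= np.maximum(W.sum(axis=1, keepdims=True), 1.0)   # heterosynaptic competition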

  12. Long-Term Homeostatic Properties Complementary to Hebbian Rules in CuPc-Based Multifunctional Memristor

    Science.gov (United States)

    Wang, Laiyuan; Wang, Zhiyong; Lin, Jinyi; Yang, Jie; Xie, Linghai; Yi, Mingdong; Li, Wen; Ling, Haifeng; Ou, Changjin; Huang, Wei

    2016-10-01

    Most simulations of neuroplasticity in memristors, which are potentially used to develop artificial synapses, are confined to the basic biological Hebbian rules. However, these simple rules can potentially induce excessive excitation/inhibition, or even a collapse of neural activities, because they neglect the properties of long-term homeostasis involved in the frameworks of realistic neural networks. Here, we develop organic CuPc-based memristors whose excitatory and inhibitory conductivities can implement both Hebbian rules and homeostatic plasticity, complementary to Hebbian patterns and conducive to long-term homeostasis. In another adaptive situation for homeostasis, in thicker samples, the overall excitement under periodic moderate stimuli tends to decrease and is recovered under intense inputs. Interestingly, the prototypes can be equipped with bio-inspired habituation and sensitization functions outperforming the conventional simplified algorithms. These mechanisms mutually regulate each other to maintain homeostasis. Therefore, we develop a novel versatile memristor with advanced synaptic homeostasis for comprehensive neural functions.

  13. Hebbian Learning is about contingency, not contiguity, and explains the emergence of predictive mirror neurons

    NARCIS (Netherlands)

    Keysers, C.; Perrett, David I; Gazzola, Valeria

    Hebbian Learning should not be reduced to contiguity, as it detects contingency and causality. Hebbian Learning accounts of mirror neurons make predictions that differ from associative learning: Through Hebbian Learning, mirror neurons become dynamic networks that calculate predictions and prediction errors and relate to ideomotor theories.

  14. Anti-hebbian spike-timing-dependent plasticity and adaptive sensory processing.

    Science.gov (United States)

    Roberts, Patrick D; Leen, Todd K

    2010-01-01

    Adaptive sensory processing influences the central nervous system's interpretation of incoming sensory information. One of the functions of this adaptive sensory processing is to allow the nervous system to ignore predictable sensory information so that it may focus on important novel information needed to improve performance of specific tasks. The mechanism of spike-timing-dependent plasticity (STDP) has proven to be intriguing in this context because of its dual role in long-term memory and ongoing adaptation to maintain optimal tuning of neural responses. Some of the clearest links between STDP and adaptive sensory processing have come from in vitro, in vivo, and modeling studies of the electrosensory systems of weakly electric fish. Plasticity in these systems is anti-Hebbian, so that presynaptic inputs that repeatedly precede, and possibly could contribute to, a postsynaptic neuron's firing are weakened. The learning dynamics of anti-Hebbian STDP learning rules are stable if the timing relations obey strict constraints. The stability of these learning rules leads to clear predictions of how functional consequences can arise from the detailed structure of the plasticity. Here we review the connection between theoretical predictions and functional consequences of anti-Hebbian STDP, focusing on adaptive processing in the electrosensory system of weakly electric fish. After introducing electrosensory adaptive processing and the dynamics of anti-Hebbian STDP learning rules, we address issues of predictive sensory cancelation and novelty detection, descending control of plasticity, synaptic scaling, and optimal sensory tuning. We conclude with examples in other systems where these principles may apply.
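
    For orientation, an anti-Hebbian STDP window of the kind described can be sketched as a function of the pre/post spike-time difference: presynaptic spikes that precede (and could contribute to) the postsynaptic spike are depressed, while the opposite ordering is weakly potentiated. Amplitudes and time constants below are illustrative, not fitted to electrosensory data.

        import numpy as np

        def anti_hebbian_stdp(dt, a_minus=0.010, a_plus=0.005, tau=20.0):
            """Weight change as a function of dt = t_post - t_pre (ms).
            Anti-Hebbian: pre-before-post (dt > 0) is depressed, post-before-pre is weakly potentiated."""
            if dt > 0:
                return -a_minus * np.exp(-dt / tau)
            return a_plus * np.exp(dt / tau)

        for dt in (-40, -10, 5, 20, 60):
            print(f"dt = {dt:+4d} ms -> dw = {anti_hebbian_stdp(dt):+.4f}")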

  15. Anti-Hebbian Spike Timing Dependent Plasticity and Adaptive Sensory Processing

    Directory of Open Access Journals (Sweden)

    Patrick D Roberts

    2010-12-01

    Full Text Available Adaptive processing influences the central nervous system's interpretation of incoming sensory information. One of the functions of this adaptive sensory processing is to allow the nervous system to ignore predictable sensory information so that it may focus on important new information needed to improve performance of specific tasks. The mechanism of spike timing-dependent plasticity (STDP) has proven to be intriguing in this context because of its dual role in long-term memory and ongoing adaptation to maintain optimal tuning of neural responses. Some of the clearest links between STDP and adaptive sensory processing have come from in vitro, in vivo, and modeling studies of the electrosensory systems of fish. Plasticity in such systems is anti-Hebbian, i.e. presynaptic inputs that repeatedly precede and hence could contribute to a postsynaptic neuron’s firing are weakened. The learning dynamics of anti-Hebbian STDP learning rules are stable if the timing relations obey strict constraints. The stability of these learning rules leads to clear predictions of how functional consequences can arise from the detailed structure of the plasticity. Here we review the connection between theoretical predictions and functional consequences of anti-Hebbian STDP, focusing on adaptive processing in the electrosensory system of weakly electric fish. After introducing electrosensory adaptive processing and the dynamics of anti-Hebbian STDP learning rules, we address issues of predictive sensory cancellation and novelty detection, descending control of plasticity, synaptic scaling, and optimal sensory tuning. We conclude with examples in other systems where these principles may apply.

  16. Demystifying social cognition : a Hebbian perspective

    NARCIS (Netherlands)

    Keysers, C; Perrett, DI

    For humans and monkeys, understanding the actions of others is central to survival. Here we review the physiological properties of three cortical areas involved in this capacity: the STS, PF and F5. Based on the anatomical connections of these areas, and the Hebbian learning rule, we propose a ...

  17. Hebbian Learning is about contingency, not contiguity, and explains the emergence of predictive mirror neurons.

    Science.gov (United States)

    Keysers, Christian; Perrett, David I; Gazzola, Valeria

    2014-04-01

    Hebbian Learning should not be reduced to contiguity, as it detects contingency and causality. Hebbian Learning accounts of mirror neurons make predictions that differ from associative learning: Through Hebbian Learning, mirror neurons become dynamic networks that calculate predictions and prediction errors and relate to ideomotor theories. The social force of imitation is important for mirror neuron emergence and suggests canalization.

  18. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system

    Science.gov (United States)

    Born, Jannis; Stringer, Simon M.

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning

  19. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    Science.gov (United States)

    Born, Jannis; Galeazzi, Juan M; Stringer, Simon M

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning
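
    The contrast drawn above between trace learning and continuous transformation (CT) learning can be captured in two update rules: the trace rule multiplies the input by a decaying average of recent postsynaptic activity, while the CT rule uses only the current activities, so binding across retinal shifts must come from the spatial overlap of successive images rather than from a memory trace. A minimal, hedged sketch of the two updates (variable names and the trace constant are illustrative):

        import numpy as np

        def trace_rule_update(w, x, y, y_trace, eta=0.01, delta=0.8):
            # trace rule: the postsynaptic factor is an exponentially decaying average of past activity
            y_trace = delta * y_trace + (1.0 - delta) * y
            return w + eta * y_trace * x, y_trace

        def ct_rule_update(w, x, y, eta=0.01):
            # CT-style update: a plain Hebbian product of current pre- and postsynaptic activity;
            # no memory trace appears anywhere in the rule itself
            return w + eta * y * x

        w = np.zeros(5)
        x = np.array([1.0, 1.0, 1.0, 0.0, 0.0])   # toy input; a shifted version of it would overlap it
        y = 0.7                                    # current postsynaptic response
        w_trace, tr = trace_rule_update(w, x, y, y_trace=0.0)
        w_ct = ct_rule_update(w, x, y)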

  20. Hebbian Learning is about contingency, not contiguity, and explains the emergence of predictive mirror neurons

    OpenAIRE

    Keysers, C.; Perrett, D.I.; Gazzola, V.

    2014-01-01

    Hebbian Learning should not be reduced to contiguity, as it detects contingency and causality. Hebbian Learning accounts of mirror neurons make predictions that differ from associative learning: Through Hebbian Learning, mirror neurons become dynamic networks that calculate predictions and prediction errors and relate to ideomotor theories. The social force of imitation is important for mirror neuron emergence and suggests canalization.

  1. Learning to Generate Sequences with Combination of Hebbian and Non-hebbian Plasticity in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Panda, Priyadarshini; Roy, Kaushik

    2017-01-01

    Synaptic Plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine the standard spike-timing-correlation-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that inclusion of the adaptive decay of synaptic weights with standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme suppresses the chaotic activity in the recurrent model substantially, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations.
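
    The combination described can be sketched as a standard pair-based STDP update plus a non-Hebbian, activity-independent relaxation of every weight toward zero. The decay constant, STDP amplitudes and the random spike-timing stream below are illustrative, not the parameters of the paper.

        import numpy as np

        def stdp(dt, a_plus=0.010, a_minus=0.012, tau=20.0):
            # Hebbian STDP: potentiate pre-before-post (dt > 0), depress post-before-pre
            return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

        def non_hebbian_decay(w, rate=1e-3):
            # adaptive decay: the weight relaxes toward zero regardless of spike timing
            return w - rate * w

        rng = np.random.default_rng(5)
        w = 0.5
        for dt in rng.integers(-50, 50, size=1000):
            if dt != 0:
                w = np.clip(w + stdp(float(dt)), 0.0, 1.0)   # Hebbian, timing-dependent part
            w = non_hebbian_decay(w)                          # non-Hebbian part, applied every step
        print("final weight:", round(float(w), 3))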

  2. E-I balance emerges naturally from continuous Hebbian learning in autonomous neural networks.

    Science.gov (United States)

    Trapp, Philip; Echeveste, Rodrigo; Gros, Claudius

    2018-06-12

    Spontaneous brain activity is characterized in part by a balanced asynchronous chaotic state. Cortical recordings show that excitatory (E) and inhibitory (I) drivings in the E-I balanced state are substantially larger than the overall input. We show that such a state arises naturally in fully adapting networks which are deterministic, autonomously active and not subject to stochastic external or internal drivings. Temporary imbalances between excitatory and inhibitory inputs lead to large but short-lived activity bursts that stabilize irregular dynamics. We simulate autonomous networks of rate-encoding neurons for which all synaptic weights are plastic and subject to a Hebbian plasticity rule, the flux rule, that can be derived from the stationarity principle of statistical learning. Moreover, the average firing rate is regulated individually via a standard homeostatic adaptation of the bias of each neuron's input-output non-linear function. Additionally, networks with and without short-term plasticity are considered. E-I balance may arise only when the mean excitatory and inhibitory weights are themselves balanced, modulo the overall activity level. We show that synaptic weight balance, which has hitherto been taken as given, arises naturally in autonomous neural networks when the self-limiting Hebbian synaptic plasticity rule considered here is continuously active.

  3. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    Directory of Open Access Journals (Sweden)

    Jannis Born

    Full Text Available A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior ...

  4. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex

    Science.gov (United States)

    Lindsay, Grace W.

    2017-01-01

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli—and in particular, to combinations of stimuli ...

  5. Adaptive polymeric system for Hebbian type learning

    OpenAIRE

    2011-01-01

    We present the experimental realization of an adaptive polymeric system displaying a 'learning behaviour'. The system consists of a statistically organized network of memristive elements (memory-resistors) based on polyaniline. In such a network, the path followed by the current increments its conductivity, a property which makes the system able to mimic Hebbian-type learning and to have applications in hardware neural networks. After discussing the working principle of ...

  6. Reward-Modulated Hebbian Plasticity as Leverage for Partially Embodied Control in Compliant Robotics

    Science.gov (United States)

    Burms, Jeroen; Caluwaerts, Ken; Dambre, Joni

    2015-01-01

    In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended outside of the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations that are performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning. Furthermore, they demonstrate the robustness of systems trained with the learning rule. This study strengthens our belief that compliant robots should or can be seen as computational units, instead of dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple universal learning rules for both neural networks and robotics. PMID:26347645
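
    The rule referred to above can be sketched in a few lines: the weight change is a Hebbian product of the input and the exploratory output fluctuation, gated by how much a scalar reward exceeds its running baseline. The toy task (matching a hidden linear controller), dimensions and learning rates are illustrative, not the tensegrity setup of the paper.

        import numpy as np

        rng = np.random.default_rng(6)
        n_in, n_out, eta, sigma = 8, 2, 0.05, 0.1
        W = rng.normal(0.0, 0.1, (n_out, n_in))          # trainable readout / controller weights
        target = rng.normal(0.0, 1.0, (n_out, n_in))     # hidden "good" controller, used only to score trials
        baseline = 0.0

        for trial in range(5000):
            x = rng.standard_normal(n_in)                # observed state (e.g. sensor or reservoir values)
            noise = sigma * rng.standard_normal(n_out)   # exploration noise added to the output
            y = W @ x + noise
            reward = -np.mean((y - target @ x) ** 2)     # scalar evaluation of the whole trial
            W += eta * (reward - baseline) * np.outer(noise, x)   # reward-modulated Hebbian update
            baseline += 0.05 * (reward - baseline)       # running reward baseline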

  7. Delta Learning Rule for the Active Sites Model

    OpenAIRE

    Lingashetty, Krishna Chaithanya

    2010-01-01

    This paper reports the results on methods of comparing the memory retrieval capacity of the Hebbian neural network which implements the B-Matrix approach, by using the Widrow-Hoff rule of learning. We then extend the recently proposed Active Sites model by developing a delta rule to increase memory capacity. Also, this paper extends the binary neural network to a multi-level (non-binary) neural network.
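
    For reference, the Widrow-Hoff (delta) rule mentioned here changes each weight in proportion to the error between the desired and actual output, in contrast to a plain Hebbian product of activities. A minimal sketch, not tied to the B-Matrix or Active Sites formulation:

        import numpy as np

        def delta_rule_step(w, x, target, eta=0.1):
            # Widrow-Hoff / delta rule: dw = eta * (target - y) * x, with y = w.x
            y = w @ x
            return w + eta * (target - y) * x

        rng = np.random.default_rng(7)
        w_true = rng.standard_normal(5)                  # weights generating the desired outputs
        w = np.zeros(5)
        for _ in range(2000):
            x = rng.standard_normal(5)
            w = delta_rule_step(w, x, target=w_true @ x)
        print("recovered the target weights:", bool(np.allclose(w, w_true, atol=1e-2)))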

  8. Real-time modeling of primitive environments through wavelet sensors and Hebbian learning

    Science.gov (United States)

    Vaccaro, James M.; Yaworsky, Paul S.

    1999-06-01

    Modeling the world through sensory input necessarily provides a unique perspective for the observer. Given a limited perspective, objects and events cannot always be encoded precisely but must involve crude, quick approximations to deal with sensory information in a real-time manner. As an example, when avoiding an oncoming car, a pedestrian needs to identify the fact that a car is approaching before ascertaining the model or color of the vehicle. In our methodology, we use wavelet-based sensors with self-organized learning to encode basic sensory information in real-time. The wavelet-based sensors provide necessary transformations while a rank-based Hebbian learning scheme encodes a self-organized environment through translation, scale and orientation invariant sensors. Such a self-organized environment is made possible by combining wavelet sets which are orthonormal, log-scale with linear orientation and have automatically generated membership functions. In earlier work we used Gabor wavelet filters, rank-based Hebbian learning and an exponential modulation function to encode textural information from images. Many different types of modulation are possible, but based on biological findings the exponential modulation function provided a good approximation of first spike coding of `integrate and fire' neurons. These types of Hebbian encoding schemes (e.g., exponential modulation, etc.) are useful for quick response and learning, provide several advantages over contemporary neural network learning approaches, and have been found to quantize data nonlinearly. By combining wavelets with Hebbian learning we can provide a real-time front-end for modeling an intelligent process, such as the autonomous control of agents in a simulated environment.

  9. Non-Hebbian learning implementation in light-controlled resistive memory devices.

    Science.gov (United States)

    Ungureanu, Mariana; Stoliar, Pablo; Llopis, Roger; Casanova, Fèlix; Hueso, Luis E

    2012-01-01

    Non-Hebbian learning is often encountered in different bio-organisms. In these processes, the strength of a synapse connecting two neurons is controlled not only by the signals exchanged between the neurons, but also by an additional factor external to the synaptic structure. Here we show the implementation of non-Hebbian learning in a single solid-state resistive memory device. The output of our device is controlled not only by the applied voltages, but also by the illumination conditions under which it operates. We demonstrate that our metal/oxide/semiconductor device learns more efficiently at higher applied voltages but also when light, an external parameter, is present during the information writing steps. Conversely, memory erasing is more efficient at higher applied voltages and in the dark. Translating neuronal activity into simple solid-state devices could provide a deeper understanding of complex brain processes and give insight into non-binary computing possibilities.

  10. Behavioral analysis of differential Hebbian learning in closed-loop systems

    DEFF Research Database (Denmark)

    Kulvicius, Tomas; Kolodziejski, Christoph; Tamosiunaite, Minija

    2010-01-01

    Understanding closed loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed loop systems in terms of information theory date back to the 1950s; however, there have been only a few attempts which take into account learning, mostly ... measuring information of inputs. In this study we analyze a specific type of closed loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found...

  11. On Rationality of Decision Models Incorporating Emotion-Related Valuing and Hebbian Learning

    NARCIS (Netherlands)

    Treur, J.; Umair, M.

    2011-01-01

    In this paper an adaptive decision model based on predictive loops through feeling states is analysed from the perspective of rationality. Four different variations of Hebbian learning are considered for different types of connections in the decision model. To assess the extent of rationality, a ...

  12. Neuromodulated Spike-Timing-Dependent Plasticity and Theory of Three-Factor Learning Rules

    Directory of Open Access Journals (Sweden)

    Wulfram Gerstner

    2016-01-01

    Full Text Available Classical Hebbian learning puts the emphasis on joint pre- and postsynaptic activity, but neglects the potential role of neuromodulators. Since neuromodulators convey information about novelty or reward, the influence of neuromodulators on synaptic plasticity is useful not just for action learning in classical conditioning, but also to decide 'when' to create new memories in response to a flow of sensory stimuli. In this review, we focus on timing requirements for pre- and postsynaptic activity in conjunction with one or several phasic neuromodulatory signals. While the emphasis of the text is on conceptual models and mathematical theories, we also discuss some experimental evidence for neuromodulation of Spike-Timing-Dependent Plasticity. We highlight the importance of synaptic mechanisms in bridging the temporal gap between sensory stimulation and neuromodulatory signals, and develop a framework for a class of neo-Hebbian three-factor learning rules that depend on presynaptic activity, postsynaptic variables as well as the influence of neuromodulators.
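
    The class of rules discussed above can be summarised as: weight change = third factor (neuromodulator) x eligibility trace of the Hebbian pre/post coincidence, where the slowly decaying trace bridges the gap between the coincidence and the later phasic signal. A hedged sketch with illustrative spike statistics and time constants:

        import numpy as np

        rng = np.random.default_rng(8)
        tau_e, dt, eta = 500.0, 1.0, 0.01    # eligibility time constant (ms), time step, learning rate
        w, e = 0.2, 0.0

        for t in range(2000):
            pre = float(rng.random() < 0.05)         # presynaptic spike in this time step
            post = float(rng.random() < 0.05)        # postsynaptic spike in this time step
            e += dt * (-e / tau_e) + pre * post      # eligibility trace: decays, accumulates coincidences
            neuromod = 1.0 if t == 1500 else 0.0     # phasic third factor (e.g. reward or novelty), arriving late
            w += eta * neuromod * e                  # the weight changes only when the third factor is present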

  13. Thermodynamic efficiency of learning a rule in neural networks

    Science.gov (United States)

    Goldt, Sebastian; Seifert, Udo

    2017-11-01

    Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.
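
    For orientation, the three online rules analysed in that setting differ only in how strongly an example updates the student weights, given the teacher label sigma = sign(teacher · x). A hedged sketch of their standard forms, up to normalisation conventions:

        import numpy as np

        def hebbian(w, x, sigma, eta=1.0):
            # Hebbian: update on every example, regardless of correctness
            return w + eta * sigma * x

        def perceptron(w, x, sigma, eta=1.0):
            # Perceptron: update only on misclassified examples
            return w + eta * sigma * x if sigma * (w @ x) <= 0 else w

        def adatron(w, x, sigma, eta=1.0):
            # AdaTron: on misclassified examples the update strength is proportional to |w.x|
            h = w @ x
            return w + eta * abs(h) * sigma * x / (x @ x) if sigma * h <= 0 else w

        rng = np.random.default_rng(9)
        N = 50
        teacher = rng.standard_normal(N)
        w0 = 0.1 * rng.standard_normal(N)
        rules = {"Hebbian": hebbian, "Perceptron": perceptron, "AdaTron": adatron}
        students = {name: w0.copy() for name in rules}

        for _ in range(5000):
            x = rng.standard_normal(N)
            sigma = np.sign(teacher @ x)             # label provided by the teacher
            for name, rule in rules.items():
                students[name] = rule(students[name], x, sigma)

        for name, w in students.items():
            R = (w @ teacher) / (np.linalg.norm(w) * np.linalg.norm(teacher))
            print(f"{name:10s} teacher overlap R = {R:.3f}")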

  14. Integrating Hebbian and homeostatic plasticity: introduction.

    Science.gov (United States)

    Fox, Kevin; Stryker, Michael

    2017-03-05

    Hebbian plasticity is widely considered to be the mechanism by which information can be coded and retained in neurons in the brain. Homeostatic plasticity moves the neuron back towards its original state following a perturbation, including perturbations produced by Hebbian plasticity. How then does homeostatic plasticity avoid erasing the Hebbian coded information? Understanding how plasticity works in the brain, and therefore understanding learning, memory, sensory adaptation, development and recovery from injury, requires the development of a theory of plasticity that integrates both forms of plasticity into a whole. In April 2016, a group of computational and experimental neuroscientists met in London at a discussion meeting hosted by the Royal Society to identify the critical questions in the field and to frame the research agenda for the next steps. Here, we provide a brief introduction to the papers arising from the meeting and highlight some of the themes to have emerged from the discussions. This article is part of the themed issue 'Integrating Hebbian and homeostatic plasticity'. © 2017 The Author(s).

  15. Logarithmic distributions prove that intrinsic learning is Hebbian [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Gabriele Scheler

    2017-10-01

    Full Text Available In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA (striatum) or glutamate (cortex)) or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights, but also intrinsic gains, need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.

  16. Effects of arousal on cognitive control: empirical tests of the conflict-modulated Hebbian-learning hypothesis.

    Science.gov (United States)

    Brown, Stephen B R E; van Steenbergen, Henk; Kedar, Tomer; Nieuwenhuis, Sander

    2014-01-01

    An increasing number of empirical phenomena that were previously interpreted as a result of cognitive control turn out to reflect (in part) simple associative-learning effects. A prime example is the proportion congruency effect, the finding that interference effects (such as the Stroop effect) decrease as the proportion of incongruent stimuli increases. While this was previously regarded as strong evidence for a global conflict monitoring-cognitive control loop, recent evidence has shown that the proportion congruency effect is largely item-specific and hence must be due to associative learning. The goal of our research was to test a recent hypothesis about the mechanism underlying such associative-learning effects, the conflict-modulated Hebbian-learning hypothesis, which proposes that the effect of conflict on associative learning is mediated by phasic arousal responses. In Experiment 1, we examined in detail the relationship between the item-specific proportion congruency effect and an autonomic measure of phasic arousal: task-evoked pupillary responses. In Experiment 2, we used a task-irrelevant phasic arousal manipulation and examined the effect on item-specific learning of incongruent stimulus-response associations. The results provide little evidence for the conflict-modulated Hebbian-learning hypothesis, which requires additional empirical support to remain tenable.

  17. Effects of arousal on cognitive control: Empirical tests of the conflict-modulated Hebbian-learning hypothesis

    Directory of Open Access Journals (Sweden)

    Stephen B.R.E. Brown

    2014-01-01

    Full Text Available An increasing number of empirical phenomena that were previously interpreted as a result of cognitive control turn out to reflect (in part) simple associative-learning effects. A prime example is the proportion congruency effect, the finding that interference effects (such as the Stroop effect) decrease as the proportion of incongruent stimuli increases. While this was previously regarded as strong evidence for a global conflict monitoring-cognitive control loop, recent evidence has shown that the proportion congruency effect is largely item-specific and hence must be due to associative learning. The goal of our research was to test a recent hypothesis about the mechanism underlying such associative-learning effects, the conflict-modulated Hebbian-learning hypothesis, which proposes that the effect of conflict on associative learning is mediated by phasic arousal responses. In Experiment 1, we examined in detail the relationship between the item-specific proportion congruency effect and an autonomic measure of phasic arousal: task-evoked pupillary responses. In Experiment 2, we used a task-irrelevant phasic arousal manipulation and examined the effect on item-specific learning of incongruent stimulus-response associations. The results provide little evidence for the conflict-modulated Hebbian-learning hypothesis, which requires additional empirical support to remain tenable.

  18. Recruitment and Consolidation of Cell Assemblies for Words by Way of Hebbian Learning and Competition in a Multi-Layer Neural Network.

    Science.gov (United States)

    Garagnani, Max; Wennekers, Thomas; Pulvermüller, Friedemann

    2009-06-01

    Current cognitive theories postulate either localist representations of knowledge or fully overlapping, distributed ones. We use a connectionist model that closely replicates known anatomical properties of the cerebral cortex and neurophysiological principles to show that Hebbian learning in a multi-layer neural network leads to memory traces (cell assemblies) that are both distributed and anatomically distinct. Taking the example of word learning based on action-perception correlation, we document mechanisms underlying the emergence of these assemblies, especially (i) the recruitment of neurons and consolidation of connections defining the kernel of the assembly along with (ii) the pruning of the cell assembly's halo (consisting of very weakly connected cells). We found that, whereas a learning rule mapping covariance led to significant overlap and merging of assemblies, a neurobiologically grounded synaptic plasticity rule with fixed LTP/LTD thresholds produced minimal overlap and prevented merging, exhibiting competitive learning behaviour. Our results are discussed in light of current theories of language and memory. As simulations with neurobiologically realistic neural networks demonstrate here spontaneous emergence of lexical representations that are both cortically dispersed and anatomically distinct, both localist and distributed cognitive accounts receive partial support.
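
    The two plasticity rules compared in those simulations can be caricatured as follows: a covariance rule changes each weight in proportion to the product of pre- and postsynaptic deviations from their mean rates, whereas the alternative applies fixed LTP/LTD steps depending on whether the activities cross fixed thresholds. A hedged sketch (thresholds and step sizes are illustrative, and the full multi-layer cortical model is not reproduced here):

        import numpy as np

        def covariance_update(W, pre, post, pre_mean, post_mean, eta=0.01):
            # covariance rule: correlated deviations from the mean activities drive the change
            return W + eta * np.outer(post - post_mean, pre - pre_mean)

        def fixed_threshold_update(W, pre, post, theta_pre=0.5, theta_post=0.5, ltp=0.010, ltd=0.005):
            # fixed LTP/LTD thresholds: potentiate when both sides are strongly active,
            # depress when the presynaptic side is active but the postsynaptic side is not
            ltp_mask = np.outer(post > theta_post, pre > theta_pre)
            ltd_mask = np.outer(post <= theta_post, pre > theta_pre)
            return np.clip(W + ltp * ltp_mask - ltd * ltd_mask, 0.0, 1.0)

        rng = np.random.default_rng(10)
        pre, post = rng.random(6), rng.random(4)
        W_thr = fixed_threshold_update(np.zeros((4, 6)), pre, post)
        W_cov = covariance_update(np.zeros((4, 6)), pre, post, pre_mean=0.5, post_mean=0.5)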

  19. Towards autonomous neuroprosthetic control using Hebbian reinforcement learning.

    Science.gov (United States)

    Mahmoudi, Babak; Pohlmeyer, Eric A; Prins, Noeline W; Geng, Shijia; Sanchez, Justin C

    2013-12-01

    Our goal was to design an adaptive neuroprosthetic controller that could learn the mapping from neural states to prosthetic actions and automatically adjust adaptation using only a binary evaluative feedback as a measure of desirability/undesirability of performance. Hebbian reinforcement learning (HRL) in a connectionist network was used for the design of the adaptive controller. The method combines the efficiency of supervised learning with the generality of reinforcement learning. The convergence properties of this approach were studied using both closed-loop control simulations and open-loop simulations that used primate neural data from robot-assisted reaching tasks. The HRL controller was able to perform classification and regression tasks using its episodic and sequential learning modes, respectively. In our experiments, the HRL controller quickly achieved convergence to an effective control policy, followed by robust performance. The controller also automatically stopped adapting the parameters after converging to a satisfactory control policy. Additionally, when the input neural vector was reorganized, the controller resumed adaptation to maintain performance. By estimating an evaluative feedback directly from the user, the HRL control algorithm may provide an efficient method for autonomous adaptation of neuroprosthetic systems. This method may enable the user to teach the controller the desired behavior using only a simple feedback signal.

  20. Criterion learning in rule-based categorization: simulation of neural mechanism and new data.

    Science.gov (United States)

    Helie, Sebastien; Ell, Shawn W; Filoteo, J Vincent; Maddox, W Todd

    2015-04-01

    In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define 'long' and 'short'). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing the rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) changing the relevant rule dimension and learning a new criterion is more difficult, but is also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL's implications for future research on rule learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Cocaine Promotes Coincidence Detection and Lowers Induction Threshold during Hebbian Associative Synaptic Potentiation in Prefrontal Cortex.

    Science.gov (United States)

    Ruan, Hongyu; Yao, Wei-Dong

    2017-01-25

    Addictive drugs usurp neural plasticity mechanisms that normally serve reward-related learning and memory, primarily by evoking changes in glutamatergic synaptic strength in the mesocorticolimbic dopamine circuitry. Here, we show that repeated cocaine exposure in vivo does not alter synaptic strength in the mouse prefrontal cortex during an early period of withdrawal, but instead modifies a Hebbian quantitative synaptic learning rule by broadening the temporal window and lowering the induction threshold for spike-timing-dependent LTP (t-LTP). After repeated, but not single, daily cocaine injections, t-LTP in layer V pyramidal neurons is induced at +30 ms, a normally ineffective timing interval for t-LTP induction in saline-exposed mice. This cocaine-induced, extended-timing t-LTP lasts for ∼1 week after terminating cocaine and is accompanied by an increased susceptibility to potentiation by fewer pre-post spike pairs, indicating a reduced t-LTP induction threshold. Basal synaptic strength and the maximal attainable t-LTP magnitude remain unchanged after cocaine exposure. We further show that the cocaine facilitation of t-LTP induction is caused by sensitized D1-cAMP/protein kinase A dopamine signaling in pyramidal neurons, which then pathologically recruits voltage-gated L-type Ca2+ channels that synergize with GluN2A-containing NMDA receptors to drive t-LTP at extended timing. Our results illustrate a mechanism by which cocaine, acting on a key neuromodulation pathway, modifies the coincidence detection window during Hebbian plasticity to facilitate associative synaptic potentiation in prefrontal excitatory circuits. By modifying rules that govern activity-dependent synaptic plasticity, addictive drugs can derail the experience-driven neural circuit remodeling process important for executive control of reward and addiction. It is believed that addictive drugs often render an addict's brain reward system hypersensitive, leaving the individual more susceptible to ...

  2. A global bioheat model with self-tuning optimal regulation of body temperature using Hebbian feedback covariance learning.

    Science.gov (United States)

    Ong, M L; Ng, E Y K

    2005-12-01

    In the lower brain, body temperature is continually being regulated almost flawlessly despite huge fluctuations in ambient and physiological conditions that constantly threaten the well-being of the body. The underlying control problem defining thermal homeostasis is one of enormous complexity: many systems and sub-systems are involved in temperature regulation, and physiological processes are intrinsically complex and intertwined. Thus the defining control system has to take into account the complications of nonlinearities, system uncertainties, and delayed feedback loops, as well as internal and external disturbances. In this paper, we propose a self-tuning adaptive thermal controller based upon Hebbian feedback covariance learning, in which the system is regulated continually to best suit its environment. This hypothesis is supported in part by the postulated presence of adaptive optimization behavior in organisms that face limited resources vital for survival. We demonstrate the use of Hebbian feedback covariance learning as a possible self-adaptive controller in body temperature regulation. The model postulates an important role of Hebbian covariance adaptation as a means of reinforcement learning in the thermal controller. The passive system is based on a simplified 2-node core and shell representation of the body, where global responses are captured. Model predictions are consistent with observed thermoregulatory responses to conditions of exercise and rest, and heat and cold stress. An important implication of the model is that optimal physiological behaviors arising from self-tuning adaptive regulation in the thermal controller may be responsible for the departure from homeostasis in abnormal states, e.g., fever. This was previously unexplained by conventional "set-point" control theory.
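
    The controller itself is not specified in this record; the sketch below shows only the generic Hebbian feedback covariance rule the abstract builds on, with an illustrative learning rate and smoothing constant (our assumptions):

    import numpy as np

    def covariance_update(w, x, y, x_mean, y_mean, lr=0.01):
        # Hebbian covariance rule: the weight grows when pre- and postsynaptic
        # activities fluctuate above (or below) their means together, and
        # shrinks when they fluctuate in opposite directions.
        return w + lr * (x - x_mean) * (y - y_mean)

    # toy usage with running means tracked by exponential smoothing
    rng = np.random.default_rng(0)
    w, x_mean, y_mean = 0.0, 0.0, 0.0
    for _ in range(1000):
        x = rng.normal()
        y = 0.5 * x + 0.1 * rng.normal()      # correlated "feedback" signal
        x_mean += 0.05 * (x - x_mean)
        y_mean += 0.05 * (y - y_mean)
        w = covariance_update(w, x, y, x_mean, y_mean)
    print(round(w, 3))                        # drifts positive: x and y covary positively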

  3. Emotions as a Vehicle for Rationality: Rational Decision Making Models Based on Emotion-Related Valuing and Hebbian Learning

    NARCIS (Netherlands)

    Treur, J.; Umair, M.

    2015-01-01

    In this paper an adaptive decision model based on predictive loops through feeling states is analysed from the perspective of rationality. Hebbian learning is considered for different types of connections in the decision model. To assess the extent of rationality, a measure is introduced reflecting

  4. Learning-induced pattern classification in a chaotic neural network

    International Nuclear Information System (INIS)

    Li, Yang; Zhu, Ping; Xie, Xiaoping; He, Guoguang; Aihara, Kazuyuki

    2012-01-01

    In this Letter, we propose a Hebbian learning rule with passive forgetting (HLRPF) for use in a chaotic neural network (CNN). We then define indices based on the Euclidean distance to investigate the evolution of the weights in a simplified way. Numerical simulations demonstrate that, under suitable external stimulations, the CNN with the proposed HLRPF acts as a fuzzy-like pattern classifier that performs much better than an ordinary CNN. The results imply a relationship between learning and recognition. -- Highlights: ► Proposing a Hebbian learning rule with passive forgetting (HLRPF). ► Defining indices to investigate the evolution of the weights in a simplified way. ► The chaotic neural network with HLRPF acts as a fuzzy-like pattern classifier. ► The pattern classification ability of the network is much improved.
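
    A minimal sketch of a Hebbian rule with passive forgetting of the general kind named above (the exact HLRPF formulation and parameter values in the Letter may differ):

    import numpy as np

    def hlrpf_step(W, x, forget=0.02, lr=0.1):
        # One step of a Hebbian rule with passive forgetting: weights decay
        # toward zero every step and are reinforced by the outer product of
        # the current activity pattern.
        return (1.0 - forget) * W + lr * np.outer(x, x)

    x = np.array([1.0, -1.0, 1.0])
    W = np.zeros((3, 3))
    for _ in range(500):
        W = hlrpf_step(W, x)
    print(np.round(W, 2))   # approaches the fixed point (lr / forget) * outer(x, x)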

  5. Adaptive WTA with an analog VLSI neuromorphic learning chip.

    Science.gov (United States)

    Häfliger, Philipp

    2007-03-01

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.

  6. Anti-Hebbian long-term potentiation in the hippocampal feedback inhibitory circuit.

    Science.gov (United States)

    Lamsa, Karri P; Heeroma, Joost H; Somogyi, Peter; Rusakov, Dmitri A; Kullmann, Dimitri M

    2007-03-02

    Long-term potentiation (LTP), which approximates Hebb's postulate of associative learning, typically requires depolarization-dependent glutamate receptors of the NMDA (N-methyl-D-aspartate) subtype. However, in some neurons, LTP depends instead on calcium-permeable AMPA-type receptors. This is paradoxical because intracellular polyamines block such receptors during depolarization. We report that LTP at synapses on hippocampal interneurons mediating feedback inhibition is "anti-Hebbian": It is induced by presynaptic activity but prevented by postsynaptic depolarization. Anti-Hebbian LTP may occur in interneurons that are silent during periods of intense pyramidal cell firing, such as sharp waves, and lead to their altered activation during theta activity.

  7. A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation

    Science.gov (United States)

    Fiebig, Florian

    2017-01-01

    A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain WM memory items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism. SIGNIFICANCE STATEMENT Working memory (WM) is a key component of cognition. Hypotheses about the neural mechanism behind WM are currently under revision. Reflecting recent findings of fast Hebbian synaptic plasticity in cortex, we test whether a cortical spiking neural network model with such a mechanism can learn a multi-item WM task (word list learning). We show that our model can reproduce human cognitive phenomena and achieve comparable memory performance in both free and cued recall while being simultaneously compatible with experimental data on structure, connectivity, and

  8. A re-examination of Hebbian-covariance rules and spike timing-dependent plasticity in cat visual cortex in vivo

    Directory of Open Access Journals (Sweden)

    Yves Frégnac

    2010-12-01

    Full Text Available Spike-Timing-Dependent Plasticity (STDP) is considered a ubiquitous rule for associative plasticity in cortical networks in vitro. However, limited supporting evidence for its functional role has been provided in vivo. In particular, there are very few studies demonstrating the co-occurrence of synaptic efficiency changes and alteration of sensory responses in adult cortex during Hebbian or STDP protocols. We addressed this issue by reviewing and comparing the functional effects of two types of cellular conditioning in cat visual cortex. The first one, referred to as the covariance protocol, obeys a generalized Hebbian framework, by imposing, for different stimuli, supervised positive and negative changes in covariance between postsynaptic and presynaptic activity rates. The second protocol, based on intracellular recordings, replicated in vivo variants of the theta-burst paradigm (TBS), proven successful in inducing long-term potentiation (LTP) in vitro. Since it was shown to impose a precise correlation delay between the electrically activated thalamic input and the TBS-induced postsynaptic spike, this protocol can be seen as a probe of causal (pre-before-post) STDP. By choosing a thalamic region where the visual field representation was in retinotopic overlap with the intracellularly recorded cortical receptive field as the afferent site for supervised electrical stimulation, this protocol allowed us to look for possible correlates between STDP and functional reorganization of the conditioned cortical receptive field. The rate-based covariance protocol induced significant and large-amplitude changes in receptive field properties, in both kitten and adult V1 cortex. The TBS STDP-like protocol produced in the adult significant changes in the synaptic gain of the electrically activated thalamic pathway, but the statistical significance of the functional correlates was detectable mostly at the population level. Comparison of our observations with the

  9. A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation.

    Science.gov (United States)

    Fiebig, Florian; Lansner, Anders

    2017-01-04

    A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain WM memory items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism. Working memory (WM) is a key component of cognition. Hypotheses about the neural mechanism behind WM are currently under revision. Reflecting recent findings of fast Hebbian synaptic plasticity in cortex, we test whether a cortical spiking neural network model with such a mechanism can learn a multi-item WM task (word list learning). We show that our model can reproduce human cognitive phenomena and achieve comparable memory performance in both free and cued recall while being simultaneously compatible with experimental data on structure, connectivity, and neurophysiology of the underlying

  10. Syntactic sequencing in Hebbian cell assemblies.

    Science.gov (United States)

    Wennekers, Thomas; Palm, Günther

    2009-12-01

    Hebbian cell assemblies provide a theoretical framework for the modeling of cognitive processes that grounds them in the underlying physiological neural circuits. Recently we have presented an extension of cell assemblies by operational components, which allows us to model aspects of language, rules, and complex behaviour. In the present work we study the generation of syntactic sequences using operational cell assemblies timed by unspecific trigger signals. Syntactic patterns are implemented in terms of hetero-associative transition graphs in attractor networks which cause a directed flow of activity through the neural state space. We provide regimes for parameters that enable an unspecific excitatory control signal to switch reliably between attractors in accordance with the implemented syntactic rules. If several target attractors are possible in a given state, noise in the system in conjunction with a winner-takes-all mechanism can randomly choose a target. Disambiguation can also be guided by context signals or specific additional external signals. Given a permanently elevated level of external excitation, the model can enter an autonomous mode, where it generates temporal grammatical patterns continuously.

  11. A Reinforcement Learning Framework for Spiking Networks with Dynamic Synapses

    Directory of Open Access Journals (Sweden)

    Karim El-Laithy

    2011-01-01

    Full Text Available An integration of both the Hebbian-based and reinforcement learning (RL) rules is presented for dynamic synapses. The proposed framework permits the Hebbian rule to update the hidden synaptic model parameters regulating the synaptic response rather than the synaptic weights. This is performed using both the value and the sign of the temporal difference in the reward signal after each trial. Applying this framework, a spiking network with spike-timing-dependent synapses is tested to learn the exclusive-OR computation on a temporally coded basis. Reward values are calculated from the distance between the output spike train of the network and a reference target train. Results show that the network is able to capture the required dynamics and that the proposed framework can indeed yield an integrated version of Hebbian and RL learning. The proposed framework is tractable and less computationally expensive. The framework is applicable to a wide class of synaptic models and is not restricted to the neural representation used here. This generality, along with the reported results, supports adopting the introduced approach to benefit from biologically plausible synaptic models in a wide range of intuitive signal processing.

  12. Analysis of ensemble learning using simple perceptrons based on online learning theory

    Science.gov (United States)

    Miyoshi, Seiji; Hara, Kazuyuki; Okada, Masato

    2005-03-01

    Ensemble learning of K nonlinear perceptrons, which determine their outputs by sign functions, is discussed within the framework of online learning and statistical mechanics. One purpose of statistical learning theory is to theoretically obtain the generalization error. This paper shows that ensemble generalization error can be calculated by using two order parameters, that is, the similarity between a teacher and a student, and the similarity among students. The differential equations that describe the dynamical behaviors of these order parameters are derived in the case of general learning rules. The concrete forms of these differential equations are derived analytically in the cases of three well-known rules: Hebbian learning, perceptron learning, and AdaTron (adaptive perceptron) learning. Ensemble generalization errors of these three rules are calculated by using the results determined by solving their differential equations. As a result, these three rules show different characteristics in their affinity for ensemble learning, that is “maintaining variety among students.” Results show that AdaTron learning is superior to the other two rules with respect to that affinity.
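
    For reference, the three on-line rules compared above can be sketched as single-example updates in a teacher-student setup; the constants and normalizations below are schematic, not those used in the statistical-mechanics analysis:

    import numpy as np

    def online_update(w, x, teacher_label, rule, lr=0.1):
        # Single on-line update of a simple perceptron under three classic rules.
        u = w @ x                                  # student's local field
        s = np.sign(u) if u != 0 else 1.0          # student output
        if rule == "hebbian":                      # learn on every example
            return w + lr * teacher_label * x
        if rule == "perceptron":                   # learn only on errors
            return w + lr * teacher_label * x if s != teacher_label else w
        if rule == "adatron":                      # error-driven, scaled by |u|
            return w + lr * abs(u) * teacher_label * x if s != teacher_label else w
        raise ValueError(rule)

    rng = np.random.default_rng(1)
    w_teacher = rng.normal(size=50)
    w_student = np.zeros(50)
    for _ in range(2000):
        x = rng.normal(size=50) / np.sqrt(50)
        w_student = online_update(w_student, x, np.sign(w_teacher @ x), rule="adatron")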

  13. q-state Potts-glass neural network based on pseudoinverse rule

    International Nuclear Information System (INIS)

    Xiong Daxing; Zhao Hong

    2010-01-01

    We study the q-state Potts-glass neural network with the pseudoinverse (PI) rule. Its performance is investigated and compared with that of the counterpart network with the Hebbian rule instead. We find that there exists a critical point of q, i.e., q_cr = 14, below which the storage capacity and the retrieval quality can be greatly improved by introducing the PI rule. We show that the dynamics of the neural networks constructed with the two learning rules are quite different; however, regardless of the learning rule, in q-state Potts-glass neural networks with q ≥ 3 there is a common novel dynamical phase in which the spurious memories are completely suppressed. This property has never been noticed in symmetric feedback neural networks. Being free from spurious memories implies that multistate Potts-glass neural networks would not be trapped in metastable states, which is a favorable property for their applications.
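
    The abstract compares the pseudoinverse and Hebbian prescriptions for a q-state Potts network; the sketch below shows the two rules in the more familiar binary (Hopfield-like) setting, which is our simplification, not the Potts formulation used in the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 100, 20
    patterns = rng.choice([-1.0, 1.0], size=(P, N))    # rows are stored patterns

    # Hebbian rule: sum of outer products of the stored patterns
    W_hebb = patterns.T @ patterns / N

    # Pseudoinverse (projection) rule: removes cross-talk between patterns
    C = patterns @ patterns.T / N                      # pattern overlap matrix
    W_pi = patterns.T @ np.linalg.inv(C) @ patterns / N

    # Each stored pattern is exactly a fixed point of sign(W_pi @ pattern)
    print(np.mean(np.sign(W_pi @ patterns[0]) == patterns[0]))   # 1.0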

  14. Hebbian Plasticity Guides Maturation of Glutamate Receptor Fields In Vivo

    Directory of Open Access Journals (Sweden)

    Dmitrij Ljaschenko

    2013-05-01

    Full Text Available Synaptic plasticity shapes the development of functional neural circuits and provides a basis for cellular models of learning and memory. Hebbian plasticity describes an activity-dependent change in synaptic strength that is input-specific and depends on correlated pre- and postsynaptic activity. Although it is recognized that synaptic activity and synapse development are intimately linked, our mechanistic understanding of the coupling is far from complete. Using Channelrhodopsin-2 to evoke activity in vivo, we investigated synaptic plasticity at the glutamatergic Drosophila neuromuscular junction. Remarkably, correlated pre- and postsynaptic stimulation increased postsynaptic sensitivity by promoting synapse-specific recruitment of GluR-IIA-type glutamate receptor subunits into postsynaptic receptor fields. Conversely, GluR-IIA was rapidly removed from synapses whose activity failed to evoke substantial postsynaptic depolarization. Uniting these results with developmental GluR-IIA dynamics provides a comprehensive physiological concept of how Hebbian plasticity guides synaptic maturation and sparse transmitter release controls the stabilization of the molecular composition of individual synapses.

  15. Spike-Based Bayesian-Hebbian Learning of Temporal Sequences

    DEFF Research Database (Denmark)

    Tully, Philip J; Lindén, Henrik; Hennig, Matthias H

    2016-01-01

    Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed...... in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods...

  16. Sleep: The hebbian reinforcement of the local inhibitory synapses.

    Science.gov (United States)

    Touzet, Claude

    2015-09-01

    Sleep is ubiquitous among the animal realm, and represents about 30% of our lives. Despite numerous efforts, the reason behind our need for sleep is still unknown. The Theory of neuronal Cognition (TnC) proposes that sleep is the period of time during which the local inhibitory synapses (in particular the cortical ones) are replenished. Indeed, as long as the active brain stays awake, Hebbian learning guarantees that efficient inhibitory synapses lose their efficiency – just because they are efficient at avoiding the activation of the targeted neurons. Since Hebbian learning is the only known mechanism of synapse modification, it follows that to replenish the inhibitory synapses' efficiency, source and targeted neurons must be activated together. This is achieved by a local depolarization that may travel (wave). The period of time during which such slow waves are experienced has been named "slow-wave sleep" (SWS). It is cut into several pieces by shorter periods of paradoxical sleep (REM), whose activity resembles that of the awake state. Indeed, SWS – because it only allows local neural activation – decreases the strength of excitatory long-distance connections. To avoid losing the associations built during the awake state, these long-distance activations are played again during REM sleep. REM and SWS sleep act together to guarantee that when the subject awakes again, his inhibitory synaptic efficiency is restored and his (excitatory) long-distance associations are still there. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Learning with three factors: modulating Hebbian plasticity with errors.

    Science.gov (United States)

    Kuśmierz, Łukasz; Isomura, Takuya; Toyoizumi, Taro

    2017-10-01

    Synaptic plasticity is a central theme in neuroscience. A framework of three-factor learning rules provides a powerful abstraction, helping to navigate through the abundance of models of synaptic plasticity. It is well-known that the dopamine modulation of learning is related to reward, but theoretical models predict other functional roles of the modulatory third factor; it may encode errors for supervised learning, summary statistics of the population activity for unsupervised learning, or attentional feedback. Specialized structures may be needed in order to generate and propagate third factors in the neural network. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
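
    The generic form of a three-factor rule discussed above can be written as a Hebbian coincidence term gated by a modulatory signal; a minimal sketch (the symbols and learning rate are illustrative):

    def three_factor_update(w, pre, post, third_factor, lr=0.01):
        # Generic three-factor rule: a Hebbian coincidence term (pre * post)
        # gated by a modulatory third factor (reward, error, or attention).
        return w + lr * third_factor * pre * post

    w = 0.0
    w = three_factor_update(w, pre=1.0, post=1.0, third_factor=0.0)   # no modulation: no change
    w = three_factor_update(w, pre=1.0, post=1.0, third_factor=1.0)   # modulated: potentiation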

  18. Hebbian learning in a model with dynamic rate-coded neurons: an alternative to the generative model approach for learning receptive fields from natural scenes.

    Science.gov (United States)

    Hamker, Fred H; Wiltschut, Jan

    2007-09-01

    Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. We here explore an alternative model of feedback. It is derived from studies of attention and is thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar to those found in V1. Due to presynaptic inhibition the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.

  19. The dependence of neuronal encoding efficiency on Hebbian plasticity and homeostatic regulation of neurotransmitter release

    Science.gov (United States)

    Faghihi, Faramarz; Moustafa, Ahmed A.

    2015-01-01

    Synapses act as information filters by different molecular mechanisms, including retrograde messengers that affect neuronal spiking activity. One of the well-known effects of retrograde messengers in presynaptic neurons is a change in the probability of neurotransmitter release. Hebbian learning describes a strengthening of the synapse from a presynaptic input onto a postsynaptic neuron when both pre- and postsynaptic neurons are coactive. In this work, a theory of homeostatic regulation of neurotransmitter release by retrograde messengers and Hebbian plasticity in neuronal encoding is presented. Encoding efficiency was measured for different synaptic conditions. In order to gain high encoding efficiency, the spiking pattern of a neuron should be dependent on the intensity of the input and show low levels of noise. In this work, we represent spiking trains as zeros and ones (corresponding to non-spike or spike in a time bin, respectively) as words with length equal to three. Then the frequency of each word (here eight words) is measured using spiking trains. These frequencies are used to measure neuronal efficiency in different conditions and for different parameter values. Results show that neurons that have synapses acting as band-pass filters show the highest efficiency to encode their input when both the Hebbian mechanism and homeostatic regulation of neurotransmitter release exist in synapses. Specifically, the integration of homeostatic regulation of feedback inhibition with the Hebbian mechanism and homeostatic regulation of neurotransmitter release in the synapses leads to even higher efficiency when high stimulus intensity is presented to the neurons. However, neurons with synapses acting as high-pass filters show no remarkable increase in encoding efficiency for all simulated synaptic plasticity mechanisms. This study demonstrates the importance of cooperation of the Hebbian mechanism with regulation of neurotransmitter release induced by rapid diffused retrograde
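
    A small sketch of the word-frequency bookkeeping described above (length-3 binary words over a binned spike train); the entropy-based efficiency shown here is our stand-in, not necessarily the paper's exact measure:

    import numpy as np
    from collections import Counter

    def word_frequencies(spike_train, word_len=3):
        # Frequencies of all length-3 binary "words" (000 ... 111) in a binned
        # spike train, as described above.
        words = ["".join(map(str, spike_train[i:i + word_len]))
                 for i in range(len(spike_train) - word_len + 1)]
        counts = Counter(words)
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    def word_entropy(freqs):
        # Simple entropy over the word distribution (a stand-in efficiency measure).
        p = np.array(list(freqs.values()))
        return float(-(p * np.log2(p)).sum())

    train = np.random.default_rng(0).integers(0, 2, size=1000)
    print(word_entropy(word_frequencies(train)))    # close to 3 bits for random input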

  20. The dependence of neuronal encoding efficiency on Hebbian plasticity and homeostatic regulation of neurotransmitter release

    Directory of Open Access Journals (Sweden)

    Faramarz eFaghihi

    2015-04-01

    Full Text Available Synapses act as information filters by different molecular mechanisms, including retrograde messengers that affect neuronal spiking activity. One of the well-known effects of retrograde messengers in presynaptic neurons is a change in the probability of neurotransmitter release. Hebbian learning describes a strengthening of the synapse from a presynaptic input onto a postsynaptic neuron when both pre- and postsynaptic neurons are coactive. In this work, a theory of homeostatic regulation of neurotransmitter release by retrograde messengers and Hebbian plasticity in neuronal encoding is presented. Encoding efficiency was measured for different synaptic conditions. In order to gain high encoding efficiency, the spiking pattern of a neuron should be dependent on the intensity of the input and show low levels of noise. In this work, we represent spiking trains as zeros and ones (corresponding to non-spike or spike in a time bin, respectively) as words with length equal to three. Then the frequency of each word (here eight words) is measured using spiking trains. These frequencies are used to measure neuronal efficiency in different conditions and for different parameter values. Results show that neurons that have synapses acting as band-pass filters show the highest efficiency to encode their input when both the Hebbian mechanism and homeostatic regulation of neurotransmitter release exist in synapses. Specifically, the integration of homeostatic regulation of feedback inhibition with the Hebbian mechanism and homeostatic regulation of neurotransmitter release in the synapses leads to even higher efficiency when high stimulus intensity is presented to the neurons. However, neurons with synapses acting as high-pass filters show no remarkable increase in encoding efficiency for all simulated synaptic plasticity mechanisms.

  1. Competitive STDP Learning of Overlapping Spatial Patterns.

    Science.gov (United States)

    Krunglevicius, Dalius

    2015-08-01

    Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited for learning spatiotemporal patterns. When multiple neurons are organized in a simple competitive spiking neural network, this network is capable of learning multiple distinct patterns. If patterns overlap significantly (i.e., patterns are mutually inclusive), however, competition would not preclude a trained neuron from responding to a new pattern and adjusting its synaptic weights accordingly. This letter presents a simple neural network that combines vertical inhibition and a Euclidean distance-dependent synaptic strength factor. This approach helps to solve the problem of pattern size-dependent parameter optimality and significantly reduces the probability of a neuron's forgetting an already learned pattern. For demonstration purposes, the network was trained on the first ten letters of the Braille alphabet.
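
    For context, the pair-based STDP update underlying rules of this kind can be sketched as follows; the amplitudes and time constants are typical textbook values, not the parameters tuned in the letter:

    import numpy as np

    def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        # Pair-based STDP: potentiation when the presynaptic spike precedes the
        # postsynaptic spike (delta_t > 0), depression otherwise.
        if delta_t_ms > 0:
            return a_plus * np.exp(-delta_t_ms / tau_plus)
        return -a_minus * np.exp(delta_t_ms / tau_minus)

    print(stdp_dw(+10.0))   # pre before post -> potentiation
    print(stdp_dw(-10.0))   # post before pre -> depression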

  2. The effect of STDP temporal kernel structure on the learning dynamics of single excitatory and inhibitory synapses.

    Directory of Open Access Journals (Sweden)

    Yotam Luz

    Full Text Available Spike-Timing Dependent Plasticity (STDP) is characterized by a wide range of temporal kernels. However, much of the theoretical work has focused on a specific kernel - the "temporally asymmetric Hebbian" learning rules. Previous studies linked excitatory STDP to positive feedback that can account for the emergence of response selectivity. Inhibitory plasticity was associated with negative feedback that can balance the excitatory and inhibitory inputs. Here we study the possible computational role of the temporal structure of the STDP. We represent the STDP as a superposition of two processes: potentiation and depression. This allows us to model a wide range of experimentally observed STDP kernels, from Hebbian to anti-Hebbian, by varying a single parameter. We investigate STDP dynamics of a single excitatory or inhibitory synapse in a purely feed-forward architecture. We derive mean-field Fokker-Planck dynamics for the synaptic weight and analyze the effect of STDP structure on the fixed points of the mean-field dynamics. We find a phase transition along the Hebbian to anti-Hebbian parameter from a phase that is characterized by a unimodal distribution of the synaptic weight, in which the STDP dynamics is governed by negative feedback, to a phase with positive feedback characterized by a bimodal distribution. The critical point of this transition depends on general properties of the STDP dynamics and not on the fine details. Namely, the dynamics is affected by the pre-post correlations only via a single number that quantifies its overlap with the STDP kernel. We find that by manipulating the STDP temporal kernel, negative feedback can be induced in excitatory synapses and positive feedback in inhibitory. Moreover, there is an exact symmetry between inhibitory and excitatory plasticity, i.e., for every STDP rule of an inhibitory synapse there exists an STDP rule for an excitatory synapse, such that their dynamics is identical.
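
    A one-parameter sketch of the idea of writing the STDP kernel as a superposition of potentiation and depression, interpolating from Hebbian to anti-Hebbian; the exact parameterization used in the paper may differ:

    import numpy as np

    def stdp_kernel(delta_t_ms, alpha, a=0.01, tau=20.0):
        # Temporally asymmetric kernel scaled by a single parameter alpha:
        # alpha = +1 gives the Hebbian kernel, alpha = -1 the anti-Hebbian one,
        # and intermediate values interpolate between them.
        hebbian = np.where(delta_t_ms > 0,
                           a * np.exp(-delta_t_ms / tau),
                           -a * np.exp(delta_t_ms / tau))
        return alpha * hebbian

    dts = np.linspace(-50.0, 50.0, 5)
    print(np.round(stdp_kernel(dts, alpha=+1.0), 4))   # Hebbian signs
    print(np.round(stdp_kernel(dts, alpha=-1.0), 4))   # anti-Hebbian (signs flipped)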

  3. Logic Learning in Hopfield Networks

    OpenAIRE

    Sathasivam, Saratha; Abdullah, Wan Ahmad Tajuddin Wan

    2008-01-01

    Synaptic weights for neurons in logic programming can be calculated either by using Hebbian learning or by Wan Abdullah's method. In other words, Hebbian learning for governing events corresponding to some respective program clauses is equivalent to learning using Wan Abdullah's method for the same respective program clauses. In this paper we evaluate experimentally the equivalence between these two types of learning through computer simulations.
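
    Only the Hebbian side of the comparison is easy to sketch generically; Wan Abdullah's logic-based weight derivation is not reproduced here. A minimal Hebbian (outer-product) weight prescription for a Hopfield network:

    import numpy as np

    def hebbian_hopfield_weights(patterns):
        # Hebbian prescription for a Hopfield network: sum of outer products of
        # the stored bipolar patterns, with self-connections set to zero.
        P, N = patterns.shape
        W = patterns.T @ patterns / N
        np.fill_diagonal(W, 0.0)
        return W

    pats = np.array([[1, -1, 1, -1], [1, 1, -1, -1]], dtype=float)
    W = hebbian_hopfield_weights(pats)
    print(np.sign(W @ pats[0]))   # the stored pattern is a stable state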

  4. Spike-Based Bayesian-Hebbian Learning of Temporal Sequences.

    Directory of Open Access Journals (Sweden)

    Philip J Tully

    2016-05-01

    Full Text Available Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.

  5. On-line learning through simple perceptron learning with a margin.

    Science.gov (United States)

    Hara, Kazuyuki; Okada, Masato

    2004-03-01

    We analyze a learning method that uses a margin kappa a la Gardner for simple perceptron learning. This method corresponds to perceptron learning when kappa = 0 and to Hebbian learning when kappa = infinity. Nevertheless, we found that the generalization ability of the method was superior to that of the perceptron and the Hebbian methods at an early stage of learning. We analyzed the asymptotic property of the learning curve of this method through computer simulation and found that it was the same as for perceptron learning. We also investigated an adaptive margin control method.
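
    A minimal sketch of perceptron learning with a margin kappa as described above (the learning rate and normalization are our choices): the update fires whenever the aligned, normalized local field does not exceed kappa, so kappa = 0 recovers perceptron learning and large kappa approaches Hebbian learning.

    import numpy as np

    def margin_perceptron_update(w, x, label, kappa, lr=0.1):
        # Update whenever the normalized aligned local field does not exceed kappa.
        stability = label * (w @ x) / (np.linalg.norm(w) + 1e-12)
        if stability <= kappa:
            w = w + lr * label * x
        return w

    rng = np.random.default_rng(0)
    w_teacher = rng.normal(size=100)
    w_student = rng.normal(size=100) * 1e-3
    for _ in range(5000):
        x = rng.normal(size=100) / 10.0
        w_student = margin_perceptron_update(w_student, x, np.sign(w_teacher @ x), kappa=1.0)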

  6. Circuit mechanisms of sensorimotor learning

    Science.gov (United States)

    Makino, Hiroshi; Hwang, Eun Jung; Hedrick, Nathan G.; Komiyama, Takaki

    2016-01-01

    The relationship between the brain and the environment is flexible, forming the foundation for our ability to learn. Here we review the current state of our understanding of the modifications in the sensorimotor pathway related to sensorimotor learning. We divide the process into three hierarchical levels with distinct goals: 1) sensory perceptual learning, 2) sensorimotor associative learning, and 3) motor skill learning. Perceptual learning optimizes the representations of important sensory stimuli. Associative learning and the initial phase of motor skill learning are ensured by feedback-based mechanisms that permit trial-and-error learning. The later phase of motor skill learning may primarily involve feedback-independent mechanisms operating under the classic Hebbian rule. With these changes under distinct constraints and mechanisms, sensorimotor learning establishes dedicated circuitry for the reproduction of stereotyped neural activity patterns and behavior. PMID:27883902

  7. Evidence from a rare case-study for Hebbian-like changes in structural connectivity induced by long-term deep brain stimulation

    Directory of Open Access Journals (Sweden)

    Tim J Van Hartevelt

    2015-06-01

    Full Text Available It is unclear whether Hebbian-like learning occurs at the level of long-range white matter connections in humans, i.e. where measurable changes in structural connectivity are correlated with changes in functional connectivity. However, the behavioral changes observed after deep brain stimulation (DBS) suggest the existence of such Hebbian-like mechanisms occurring at the structural level with functional consequences. In this rare case study, we obtained the full network of white matter connections of one patient with Parkinson's disease before and after long-term DBS and combined it with a computational model of ongoing activity to investigate the effects of DBS-induced long-term structural changes. The results show that the long-term effects of DBS on resting-state functional connectivity are best reproduced in the computational model by changing the structural weights from the subthalamic nucleus to the putamen and the thalamus in a Hebbian-like manner. Moreover, long-term DBS also significantly changed the structural connectivity towards normality in terms of model-based measures of segregation and integration of information processing, two key concepts of brain organization. This novel approach of using computational models to capture the effects of Hebbian-like changes in structural connectivity allowed us to causally identify the possible underlying neural mechanisms of long-term DBS using rare case-study data. In time, this could help predict the efficacy of individual DBS targeting and identify novel DBS targets.

  8. Optical implementations of associative networks with versatile adaptive learning capabilities.

    Science.gov (United States)

    Fisher, A D; Lippincott, W L; Lee, J N

    1987-12-01

    Optical associative, parallel-processing architectures are being developed using a multimodule approach, where a number of smaller, adaptive, associative modules are nonlinearly interconnected and cascaded under the guidance of a variety of organizational principles to structure larger architectures for solving specific problems. A number of novel optical implementations with versatile adaptive learning capabilities are presented for the individual associative modules, including holographic configurations and five specific electrooptic configurations. The practical issues involved in real optical architectures are analyzed, and actual laboratory optical implementations of associative modules based on Hebbian and Widrow-Hoff learning rules are discussed, including successful experimental demonstrations of their operation.

  9. On-line learning through simple perceptron with a margin

    OpenAIRE

    Hara, Kazuyuki; Okada, Masato

    2003-01-01

    We analyze a learning method that uses a margin $\\kappa$ {\\it a la} Gardner for simple perceptron learning. This method corresponds to the perceptron learning when $\\kappa=0$, and to the Hebbian learning when $\\kappa \\to \\infty$. Nevertheless, we found that the generalization ability of the method was superior to that of the perceptron and the Hebbian methods at an early stage of learning. We analyzed the asymptotic property of the learning curve of this method through computer simulation and...

  10. Homeostatic role of heterosynaptic plasticity: Models and experiments

    Directory of Open Access Journals (Sweden)

    Marina eChistiakova

    2015-07-01

    Full Text Available Homosynaptic Hebbian-type plasticity provides a cellular mechanism of learning and refinement of connectivity during development in a variety of biological systems. In this review we argue that a complementary form of plasticity - heterosynaptic plasticity - represents a necessary cellular component for homeostatic regulation of synaptic weights and neuronal activity. The required properties of a homeostatic mechanism which acutely constrains the runaway dynamics imposed by Hebbian associative plasticity have been well-articulated by theoretical and modeling studies. Such mechanism(s) should robustly support the stability of operation of neuronal networks and synaptic competition, include changes at non-active synapses, and operate on a similar time scale to Hebbian-type plasticity. The experimentally observed properties of heterosynaptic plasticity have introduced it as a strong candidate to fulfill this homeostatic role. Subsequent modeling studies which incorporate heterosynaptic plasticity into model neurons with Hebbian synapses (utilizing an STDP learning rule) have confirmed its ability to robustly provide stability and competition. In contrast, properties of homeostatic synaptic scaling, which is triggered by extreme and long-lasting (hours and days) changes of neuronal activity, do not fit two crucial requirements for a hypothetical homeostatic mechanism needed to provide stability of operation in the face of on-going synaptic changes driven by Hebbian-type learning rules. Both the trigger and the time scale of homeostatic synaptic scaling are fundamentally different from those of Hebbian-type plasticity. We conclude that heterosynaptic plasticity, which is triggered by the same episodes of strong postsynaptic activity and operates on the same time scale as Hebbian-type associative plasticity, is ideally suited to serve a homeostatic role during on-going synaptic plasticity.
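
    A schematic sketch of the combination argued for above: homosynaptic STDP at the active synapse plus a heterosynaptic adjustment of all synapses triggered by strong postsynaptic activity; the specific heterosynaptic rule and constants are our assumptions, not those of the reviewed models:

    import numpy as np

    def hetero_plus_stdp_step(w, active_idx, stdp_dw, post_rate,
                              hetero_rate=0.01, target_sum=1.0, theta=0.8):
        # The active synapse gets the usual Hebbian/STDP change; when strong
        # postsynaptic activity occurs, ALL synapses (including non-active
        # ones) are nudged so the total weight stays near a target value.
        w = w.copy()
        w[active_idx] += stdp_dw                       # homosynaptic, Hebbian part
        if post_rate > theta:                          # strong postsynaptic episode
            w += hetero_rate * (target_sum - w.sum()) / w.size
        return np.clip(w, 0.0, None)

    w = np.full(10, 0.1)
    w = hetero_plus_stdp_step(w, active_idx=3, stdp_dw=0.05, post_rate=1.0)
    print(round(float(w.sum()), 4))   # total weight pulled back toward target_sum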

  11. View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation

    Science.gov (United States)

    Leibo, Joel Z.; Liao, Qianli; Freiwald, Winrich A.; Anselmi, Fabio; Poggio, Tomaso

    2017-01-01

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations like depth-rotations [1, 2]. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple- and complex-cell operations [3, 4, 5, 6]. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here we demonstrate that one specific biologically-plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli like faces at intermediate levels of the architecture and show why it does so. Thus the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. PMID:27916522

  12. Binding and segmentation via a neural mass model trained with Hebbian and anti-Hebbian mechanisms.

    Science.gov (United States)

    Cona, Filippo; Zavaglia, Melissa; Ursino, Mauro

    2012-04-01

    Synchronization of neural activity in the gamma band, modulated by a slower theta rhythm, is assumed to play a significant role in binding and segmentation of multiple objects. In the present work, a recent neural mass model of a single cortical column is used to analyze the synaptic mechanisms which can warrant synchronization and desynchronization of cortical columns during an autoassociation memory task. The model considers two distinct layers communicating via feedforward connections. The first layer receives the external input and works as an autoassociative network in the theta band, to recover a previously memorized object from incomplete information. The second realizes segmentation of different objects in the gamma band. To this end, units within both layers are connected with synapses trained on the basis of previous experience to store objects. The main model assumptions are: (i) recovery of incomplete objects is realized by excitatory synapses from pyramidal to pyramidal neurons in the same object; (ii) binding in the gamma range is realized by excitatory synapses from pyramidal neurons to fast inhibitory interneurons in the same object. These synapses (both at points i and ii) have dynamics of a few milliseconds and are trained with a Hebbian mechanism. (iii) Segmentation is realized with faster AMPA synapses, with rise times smaller than 1 ms, trained with an anti-Hebbian mechanism. Results show that the model, with the previous assumptions, can correctly reconstruct and segment three simultaneous objects, starting from incomplete knowledge. Segmentation of more objects is possible but requires an increased ratio between the theta and gamma periods.

  13. Learning with incomplete information in the committee machine.

    Science.gov (United States)

    Bergmann, Urs M; Kühn, Reimer; Stamatescu, Ion-Olimpiu

    2009-12-01

    We study the problem of learning with incomplete information in a student-teacher setup for the committee machine. The learning algorithm combines unsupervised Hebbian learning of a series of associations with a delayed reinforcement step, in which the set of previously learnt associations is partly and indiscriminately unlearnt, to an extent that depends on the success rate of the student on these previously learnt associations. The relevant learning parameter lambda represents the strength of Hebbian learning. A coarse-grained analysis of the system yields a set of differential equations for overlaps of student and teacher weight vectors, whose solutions provide a complete description of the learning behavior. It reveals complicated dynamics showing that perfect generalization can be obtained if the learning parameter exceeds a threshold lambda_c, and if the initial value of the overlap between student and teacher weights is non-zero. In case of convergence, the generalization error exhibits a power law decay as a function of the number of examples used in training, with an exponent that depends on the parameter lambda. An investigation of the system flow in a subspace with broken permutation symmetry between hidden units reveals a bifurcation point lambda* above which perfect generalization does not depend on initial conditions. Finally, we demonstrate that cases of a complexity mismatch between student and teacher are optimally resolved in the sense that an over-complex student can emulate a less complex teacher rule, while an under-complex student reaches a state which realizes the minimal generalization error compatible with the complexity mismatch.

  14. Learning in an Oscillatory Cortical Model

    Science.gov (United States)

    Scarpetta, Silvia; Li, Zhaoping; Hertz, John

    We study a model of generalized-Hebbian learning in asymmetric oscillatory neural networks modeling cortical areas such as hippocampus and olfactory cortex. The learning rule is based on the synaptic plasticity observed experimentally, in particular long-term potentiation and long-term depression of the synaptic efficacies depending on the relative timing of the pre- and postsynaptic activities during learning. The learned memory or representational states can be encoded by both the amplitude and the phase patterns of the oscillating neural populations, enabling more efficient and robust information coding than in conventional models of associative memory or input representation. Depending on the class of nonlinearity of the activation function, the model can function as an associative memory for oscillatory patterns (nonlinearity of class II) or can generalize from or interpolate between the learned states, appropriate for the function of input representation (nonlinearity of class I). In the former case, simulations of the model exhibit a first-order transition between the "disordered" state and the "ordered" memory state.

  15. Rule based systems for big data a machine learning approach

    CERN Document Server

    Liu, Han; Cocea, Mihaela

    2016-01-01

    The ideas introduced in this book explore the relationships among rule based systems, machine learning and big data. Rule based systems are seen as a special type of expert systems, which can be built by using expert knowledge or learning from real data. The book focuses on the development and evaluation of rule based systems in terms of accuracy, efficiency and interpretability. In particular, a unified framework for building rule based systems, which consists of the operations of rule generation, rule simplification and rule representation, is presented. Each of these operations is detailed using specific methods or techniques. In addition, this book also presents some ensemble learning frameworks for building ensemble rule based systems.

  16. Students’ Learning Obstacles and Alternative Solution in Counting Rules Learning Levels Senior High School

    Directory of Open Access Journals (Sweden)

    M A Jatmiko

    2017-12-01

    Full Text Available Counting rules are a topic in senior high school mathematics. In the learning process, teachers often find students who have difficulties in learning this topic. Knowing the characteristics of students' learning difficulties and analyzing their causes is important for the teacher, both as a way to reflect on the learning process and as a reference for constructing alternative learning solutions that anticipate students' learning obstacles. This study uses qualitative methods and involves 70 students of class XII as research subjects. The data collection techniques used in this study are a diagnostic test instrument on learning difficulties in counting rules, observation, and interviews. The data are used to identify the learning difficulties experienced by students and their causes, and to develop alternative learning solutions. From the analysis of the diagnostic test results, the researchers found several obstacles faced by students, such as confusion in describing definitions and difficulties in understanding the procedure for solving multiplication-rule problems. Based on these problems, the researchers analyzed the causes of the difficulties and constructed a hypothetical learning trajectory as an alternative solution for learning counting rules.

  17. Learning invariance from natural images inspired by observations in the primary visual cortex.

    Science.gov (United States)

    Teichmann, Michael; Wiltschut, Jan; Hamker, Fred

    2012-05-01

    The human visual system has the remarkable ability to largely recognize objects invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates signal processing of the visual cortex. In part, this is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies cover the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulations of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex to learn so-called complex cells, based on a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.

  18. Recommendation System Based On Association Rules For Distributed E-Learning Management Systems

    Science.gov (United States)

    Mihai, Gabroveanu

    2015-09-01

    Traditional Learning Management Systems are installed on a single server where learning materials and user data are kept. To increase performance, the Learning Management System can be installed on multiple servers; learning materials and user data can then be distributed across these servers, yielding a Distributed Learning Management System. In this paper, a prototype of a recommendation system based on association rules for a Distributed Learning Management System is proposed. Information from LMS databases is analyzed using distributed data mining algorithms in order to extract association rules. The extracted rules are then used as inference rules to provide personalized recommendations. The quality of the provided recommendations is improved because the rules used to make the inferences are more accurate, since they aggregate knowledge from all e-Learning systems included in the Distributed Learning Management System.

  19. Concurrence of rule- and similarity-based mechanisms in artificial grammar learning.

    Science.gov (United States)

    Opitz, Bertram; Hofmann, Juliane

    2015-03-01

    A current theoretical debate regards whether rule-based or similarity-based learning prevails during artificial grammar learning (AGL). Although the majority of findings are consistent with a similarity-based account of AGL it has been argued that these results were obtained only after limited exposure to study exemplars, and performance on subsequent grammaticality judgment tests has often been barely above chance level. In three experiments the conditions were investigated under which rule- and similarity-based learning could be applied. Participants were exposed to exemplars of an artificial grammar under different (implicit and explicit) learning instructions. The analysis of receiver operating characteristics (ROC) during a final grammaticality judgment test revealed that explicit but not implicit learning led to rule knowledge. It also demonstrated that this knowledge base is built up gradually while similarity knowledge governed the initial state of learning. Together these results indicate that rule- and similarity-based mechanisms concur during AGL. Moreover, it could be speculated that two different rule processes might operate in parallel; bottom-up learning via gradual rule extraction and top-down learning via rule testing. Crucially, the latter is facilitated by performance feedback that encourages explicit hypothesis testing. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Learning general phonological rules from distributional information: a computational model.

    Science.gov (United States)

    Calamaro, Shira; Jarosz, Gaja

    2015-04-01

    Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony (Peperkamp, Le Calvez, Nadal, & Dupoux, 2006). This paper extends the model to account for learning of a broader set of phonological alternations and the formalization of these alternations as general rules. In Experiment 1, we apply the original model to new data in Dutch and demonstrate its limitations in learning nonallophonic rules. In Experiment 2, we extend the model to allow it to learn general rules for alternations that apply to a class of segments. In Experiment 3, the model is further extended to allow for generalization by context; we argue that this generalization must be constrained by linguistic principles. Copyright © 2014 Cognitive Science Society, Inc.

  1. How synapses can enhance sensibility of a neural network

    Science.gov (United States)

    Protachevicz, P. R.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Baptista, M. S.; Viana, R. L.; Lameu, E. L.; Macau, E. E. N.; Batista, A. M.

    2018-02-01

    In this work, we study the dynamic range in a neural network modelled by a cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses. Chemical synapses have an intrinsic time delay and are susceptible to parameter variations guided by Hebbian learning rules. The learning rules are related to neuroplasticity, which describes changes to the neural connections in the brain. Our results show that chemical synapses can abruptly enhance the sensibility of the neural network, a manifestation that can become even more pronounced if learning rules governing synaptic evolution are applied to the chemical synapses.

  2. Rule learning in autism: the role of reward type and social context.

    Science.gov (United States)

    Jones, E J H; Webb, S J; Estes, A; Dawson, G

    2013-01-01

    Learning abstract rules is central to social and cognitive development. Across two experiments, we used Delayed Non-Matching to Sample tasks to characterize the longitudinal development and nature of rule-learning impairments in children with Autism Spectrum Disorder (ASD). Results showed that children with ASD consistently experienced more difficulty learning an abstract rule from a discrete physical reward than children with DD. Rule learning was facilitated by the provision of more concrete reinforcement, suggesting an underlying difficulty in forming conceptual connections. Learning abstract rules about social stimuli remained challenging through late childhood, indicating the importance of testing executive functions in both social and non-social contexts.

  3. Rule Learning in Autism: The Role of Reward Type and Social Context

    OpenAIRE

    Jones, E. J. H.; Webb, S. J.; Estes, A.; Dawson, G.

    2013-01-01

    Learning abstract rules is central to social and cognitive development. Across two experiments, we used Delayed Non-Matching to Sample tasks to characterize the longitudinal development and nature of rule-learning impairments in children with Autism Spectrum Disorder (ASD). Results showed that children with ASD consistently experienced more difficulty learning an abstract rule from a discrete physical reward than children with DD. Rule learning was facilitated by the provision of more concret...

  4. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    Science.gov (United States)

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows one to steer the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks can be obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions, without a separate reward system.
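
    The sketch below is a minimal caricature of the idea, not the authors' model: a single plastic synapse between two leaky neurons is shaped by pair-based STDP while the external drive to the presynaptic cell is switched off for a short period whenever the postsynaptic ("desired") response occurs. All parameters are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, steps = 1.0, 20000                     # ms per step, number of steps
        tau_m, v_th = 20.0, 1.0                    # membrane time constant, firing threshold
        a_plus, a_minus, tau_stdp = 0.01, 0.012, 20.0
        w = 0.3                                    # plastic synapse: neuron 0 -> neuron 1
        v = np.zeros(2)
        last_spike = np.full(2, -1e9)
        stim_off_until = -1.0

        for t in range(steps):
            now = t * dt
            # External drive to neuron 0, paused after the desired response ("stimulation avoidance").
            stim = 0.0 if now < stim_off_until else (2.0 if rng.random() < 0.05 else 0.0)
            v += dt / tau_m * (-v)
            v[0] += stim
            if v[0] >= v_th:                       # presynaptic spike
                v[0] = 0.0
                v[1] += w
                last_spike[0] = now
                w -= a_minus * np.exp(-(now - last_spike[1]) / tau_stdp)   # post-before-pre: depression
            if v[1] >= v_th:                       # postsynaptic (desired) spike
                v[1] = 0.0
                last_spike[1] = now
                w += a_plus * np.exp(-(now - last_spike[0]) / tau_stdp)    # pre-before-post: potentiation
                stim_off_until = now + 50.0        # the response silences the external stimulation
            w = float(np.clip(w, 0.0, 1.0))

        print("synaptic weight after training:", round(w, 3))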

  5. A self-learning rule base for command following in dynamical systems

    Science.gov (United States)

    Tsai, Wei K.; Lee, Hon-Mun; Parlos, Alexander

    1992-01-01

    In this paper, a self-learning Rule Base for command following in dynamical systems is presented. The learning is accomplished through reinforcement learning using an associative memory called SAM. The main advantage of SAM is that it is a function approximator with explicit storage of training samples. A learning algorithm patterned after dynamic programming is proposed. Two artificially created, unstable dynamical systems are used for testing, and the Rule Base was used to generate a feedback control to improve the command-following ability of the otherwise uncontrolled systems. The numerical results are very encouraging. The controlled systems exhibit a more stable behavior and a better capability to follow reference commands. The rules resulting from the reinforcement learning are explicitly stored and they can be modified or augmented by human experts. Due to the overlapping storage scheme of SAM, the stored rules are similar to fuzzy rules.

  6. Genetic attack on neural cryptography.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido

    2006-03-01

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
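
    For readers unfamiliar with the setting, the sketch below synchronizes two tree parity machines with the Hebbian learning rule referred to here: weights move only on the hidden units that agree with the total output, and only when both parties' outputs agree. The network sizes and random seed are arbitrary, and the genetic attack itself is not implemented.

        import numpy as np

        K, N, L = 3, 100, 5                # hidden units, inputs per hidden unit, synaptic depth
        rng = np.random.default_rng(7)

        def tpm_output(w, x):
            sigma = np.sign(np.sum(w * x, axis=1))
            sigma[sigma == 0] = -1
            return sigma, int(np.prod(sigma))

        def hebbian_update(w, x, sigma, tau):
            # Hebbian rule: update only the hidden units that agree with the total output.
            for k in range(K):
                if sigma[k] == tau:
                    w[k] = np.clip(w[k] + tau * x[k], -L, L)

        wA = rng.integers(-L, L + 1, size=(K, N))
        wB = rng.integers(-L, L + 1, size=(K, N))
        steps = 0
        while not np.array_equal(wA, wB):
            steps += 1
            x = rng.choice([-1, 1], size=(K, N))     # common, publicly exchanged input
            sA, tauA = tpm_output(wA, x)
            sB, tauB = tpm_output(wB, x)
            if tauA == tauB:                          # mutual learning only on agreement
                hebbian_update(wA, x, sA, tauA)
                hebbian_update(wB, x, sB, tauB)
        print("weights synchronised after", steps, "exchanged inputs")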

  7. Genetic attack on neural cryptography

    International Nuclear Information System (INIS)

    Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido

    2006-01-01

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size

  9. Habit learning and brain-machine interfaces (BMI): a tribute to Valentino Braitenberg's "Vehicles".

    Science.gov (United States)

    Birbaumer, Niels; Hummel, Friedhelm C

    2014-10-01

    Brain-Machine Interfaces (BMI) allow manipulation of external devices and computers directly with brain activity, without involvement of overt motor actions. The neurophysiological principles of such robotic brain devices and BMIs follow Hebbian learning rules as described and realized by Valentino Braitenberg in his book "Vehicles," in the concept of a "thought pump" residing in subcortical basal ganglia structures. We describe here the application of BMIs for brain communication in totally locked-in patients and argue that the thought pump may extinguish, at least partially, in these patients because of the extinction of instrumentally learned cognitive and brain responses. We show that Pavlovian semantic conditioning may allow brain communication even in completely paralyzed patients who do not show response-effect contingencies. Principles of skill learning and habit acquisition as formulated by Braitenberg are the building blocks of BMIs and neuroprostheses.

  10. A rule-learning program in high energy physics event classification

    International Nuclear Information System (INIS)

    Clearwater, S.H.; Stern, E.G.

    1991-01-01

    We have applied a rule-learning program to the problem of event classification in high energy physics. The program searches for event classifications, i.e. rules, and effectively allows an exploration of many more possible classifications than is practical by a physicist. The program, RL4, is particularly useful because it can easily explore multi-dimensional rules as well as rules that may seem non-intuitive at first to the physicist. RL4 is also contrasted with other learning programs. (orig.)

  11. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach

  12. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach. © 2011 Springer-Verlag.

  13. Rule induction performance in amnestic mild cognitive impairment and Alzheimer's dementia: examining the role of simple and biconditional rule learning processes.

    Science.gov (United States)

    Oosterman, Joukje M; Heringa, Sophie M; Kessels, Roy P C; Biessels, Geert Jan; Koek, Huiberdina L; Maes, Joseph H R; van den Berg, Esther

    2017-04-01

    Rule induction tests such as the Wisconsin Card Sorting Test require executive control processes, but also the learning and memorization of simple stimulus-response rules. In this study, we examined the contribution of diminished learning and memorization of simple rules to complex rule induction test performance in patients with amnestic mild cognitive impairment (aMCI) or Alzheimer's dementia (AD). Twenty-six aMCI patients, 39 AD patients, and 32 control participants were included. A task was used in which the memory load and the complexity of the rules were independently manipulated. This task consisted of three conditions: a simple two-rule learning condition (Condition 1), a simple four-rule learning condition (inducing an increase in memory load, Condition 2), and a complex biconditional four-rule learning condition-inducing an increase in complexity and, hence, executive control load (Condition 3). Performance of AD patients declined disproportionately when the number of simple rules that had to be memorized increased (from Condition 1 to 2). An additional increment in complexity (from Condition 2 to 3) did not, however, disproportionately affect performance of the patients. Performance of the aMCI patients did not differ from that of the control participants. In the patient group, correlation analysis showed that memory performance correlated with Condition 1 performance, whereas executive task performance correlated with Condition 2 performance. These results indicate that the reduced learning and memorization of underlying task rules explains a significant part of the diminished complex rule induction performance commonly reported in AD, although results from the correlation analysis suggest involvement of executive control functions as well. Taken together, these findings suggest that care is needed when interpreting rule induction task performance in terms of executive function deficits in these patients.

  14. Learning a New Selection Rule in Visual and Frontal Cortex.

    Science.gov (United States)

    van der Togt, Chris; Stănişor, Liviu; Pooresmaeili, Arezoo; Albantakis, Larissa; Deco, Gustavo; Roelfsema, Pieter R

    2016-08-01

    How do you make a decision if you do not know the rules of the game? Models of sensory decision-making suggest that choices are slow if evidence is weak, but they may only apply if the subject knows the task rules. Here, we asked how the learning of a new rule influences neuronal activity in the visual (area V1) and frontal cortex (area FEF) of monkeys. We devised a new icon-selection task. On each day, the monkeys saw 2 new icons (small pictures) and learned which one was relevant. We rewarded eye movements to a saccade target connected to the relevant icon with a curve. Neurons in visual and frontal cortex coded the monkey's choice, because the representation of the selected curve was enhanced. Learning delayed the neuronal selection signals and we uncovered the cause of this delay in V1, where learning to select the relevant icon caused an early suppression of surrounding image elements. These results demonstrate that the learning of a new rule causes a transition from fast and random decisions to a more considerate strategy that takes additional time and they reveal the contribution of visual and frontal cortex to the learning process. © The Author 2016. Published by Oxford University Press.

  15. Theta coordinated error-driven learning in the hippocampus.

    Directory of Open Access Journals (Sweden)

    Nicholas Ketz

    Full Text Available The learning mechanism in the hippocampus has almost universally been assumed to be Hebbian in nature, where individual neurons in an engram join together with synaptic weight increases to support facilitated recall of memories later. However, it is also widely known that Hebbian learning mechanisms impose significant capacity constraints, and are generally less computationally powerful than learning mechanisms that take advantage of error signals. We show that the differential phase relationships of hippocampal subfields within the overall theta rhythm enable a powerful form of error-driven learning, which results in significantly greater capacity, as shown in computer simulations. In one phase of the theta cycle, the bidirectional connectivity between CA1 and entorhinal cortex can be trained in an error-driven fashion to learn to effectively encode the cortical inputs in a compact and sparse form over CA1. In a subsequent portion of the theta cycle, the system attempts to recall an existing memory, via the pathway from entorhinal cortex to CA3 and CA1. Finally the full theta cycle completes when a strong target encoding representation of the current input is imposed onto the CA1 via direct projections from entorhinal cortex. The difference between this target encoding and the attempted recall of the same representation on CA1 constitutes an error signal that can drive the learning of CA3 to CA1 synapses. This CA3 to CA1 pathway is critical for enabling full reinstatement of recalled hippocampal memories out in cortex. Taken together, these new learning dynamics enable a much more robust, high-capacity model of hippocampal learning than was available previously under the classical Hebbian model.
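
    A heavily simplified, rate-based caricature of the error-driven component described here (not the full model) can be written as a delta rule on the CA3-to-CA1 pathway: the "target" CA1 pattern imposed by entorhinal cortex at the end of the theta cycle is compared with the recall produced via CA3. The layer sizes, activation function and learning rate below are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        n_ec, n_ca3, n_ca1 = 50, 80, 60
        W_ec_ca3  = rng.normal(0, 0.1, size=(n_ec, n_ca3))    # fixed in this sketch
        W_ec_ca1  = rng.normal(0, 0.1, size=(n_ec, n_ca1))    # "target" encoding pathway (fixed)
        W_ca3_ca1 = rng.normal(0, 0.1, size=(n_ca3, n_ca1))   # plastic pathway trained here
        lr = 0.05
        act = np.tanh

        patterns = rng.choice([0.0, 1.0], size=(20, n_ec), p=[0.8, 0.2])   # sparse EC input patterns

        for epoch in range(50):
            err = 0.0
            for ec in patterns:
                ca3 = act(ec @ W_ec_ca3)
                recall = act(ca3 @ W_ca3_ca1)            # recall phase: EC -> CA3 -> CA1
                target = act(ec @ W_ec_ca1)              # encoding phase: EC drives CA1 directly
                delta = target - recall                  # error available at the end of the theta cycle
                W_ca3_ca1 += lr * np.outer(ca3, delta)   # error-driven (delta-rule) update
                err += float(np.mean(delta ** 2))
        print("mean squared recall error after training:", round(err / len(patterns), 5))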

  16. Ontology-based concept map learning path reasoning system using SWRL rules

    Energy Technology Data Exchange (ETDEWEB)

    Chu, K.-K.; Lee, C.-I. [National Univ. of Tainan, Taiwan (China). Dept. of Computer Science and Information Learning Technology

    2010-08-13

    Concept maps are graphical representations of knowledge. Concept mapping may reduce students' cognitive load and extend simple memory function. The purpose of this study was the diagnosis of students' concept map learning abilities and the provision of personally constructive advice dependent on their learning path and progress. Ontology is a useful method with which to represent and store concept map information. Semantic web rule language (SWRL) rules are easy to understand and to use as specific reasoning services. This paper discussed the selection of a grade 7 lakes and rivers curriculum for which to devise a concept map learning path reasoning service. The paper defined a concept map e-learning ontology and two SWRL semantic rules, and collected users' concept map learning path data to infer implicit knowledge and to recommend the next learning path for users. It was concluded that the designs devised in this study were feasible and advanced, and that the ontology kept the domain knowledge preserved. SWRL rules identified an abstraction model for inferred properties. Since the ontology and the SWRL rules were separate systems, they did not interfere with each other while either was maintained, ensuring persistent system extensibility and robustness. 15 refs., 1 tab., 8 figs.

  17. An Efficient Inductive Genetic Learning Algorithm for Fuzzy Relational Rules

    Directory of Open Access Journals (Sweden)

    Antonio

    2012-04-01

    Full Text Available Fuzzy modelling research has traditionally focused on certain types of fuzzy rules. However, the use of alternative rule models could improve the ability of fuzzy systems to represent a specific problem. In this proposal, an extended fuzzy rule model that can include relations between variables in the antecedent of rules is presented. Furthermore, a learning algorithm based on the iterative genetic approach, which is able to represent the knowledge using this model, is proposed as well. On the other hand, potential relations among the initial variables imply an exponential growth of the feasible rule search space. Consequently, two filters for detecting relevant potential relations are added to the learning algorithm. These filters decrease the search space complexity and increase the algorithm's efficiency. Finally, we also present an experimental study to demonstrate the benefits of using fuzzy relational rules.

  18. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    Science.gov (United States)

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions in Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and the search for rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes minimum production rules required for parsing positive samples. Instead of inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called ``bridging,'' which bridges the lacked part of the derivation tree for the positive string. The improved version also employs a novel search strategy, called serial search in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that by the minimum set search, and the system can find no appropriate grammar for some CFL by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.

  19. Students Learn by Doing: Teaching about Rules of Thumb.

    Science.gov (United States)

    Cude, Brenda J.

    1990-01-01

    Identifies situation in which consumers are likely to substitute rules of thumb for research, reviews rules of thumb often used as substitutes, and identifies teaching activities to help students learn when substitution is appropriate. (JOW)

  20. The neuroscience of learning: beyond the Hebbian synapse.

    Science.gov (United States)

    Gallistel, C R; Matzel, Louis D

    2013-01-01

    From the traditional perspective of associative learning theory, the hypothesis linking modifications of synaptic transmission to learning and memory is plausible. It is less so from an information-processing perspective, in which learning is mediated by computations that make implicit commitments to physical and mathematical principles governing the domains where domain-specific cognitive mechanisms operate. We compare the properties of associative learning and memory to the properties of long-term potentiation, concluding that the properties of the latter do not explain the fundamental properties of the former. We briefly review the neuroscience of reinforcement learning, emphasizing the representational implications of the neuroscientific findings. We then review more extensively findings that confirm the existence of complex computations in three information-processing domains: probabilistic inference, the representation of uncertainty, and the representation of space. We argue for a change in the conceptual framework within which neuroscientists approach the study of learning mechanisms in the brain.

  1. Code-specific learning rules improve action selection by populations of spiking neurons.

    Science.gov (United States)

    Friedrich, Johannes; Urbanczik, Robert; Senn, Walter

    2014-08-01

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike count and spike latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning both for the discrete classification and the continuous regression tasks. The suggested learning rules also speed up with increasing population size as opposed to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation as opposed to the classical weight- or node-perturbation as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning as compared to exploration in the neuron or weight space.

  2. The evolution of social learning rules: payoff-biased and frequency-dependent biased transmission.

    Science.gov (United States)

    Kendal, Jeremy; Giraldeau, Luc-Alain; Laland, Kevin

    2009-09-21

    Humans and other animals do not use social learning indiscriminately, rather, natural selection has favoured the evolution of social learning rules that make selective use of social learning to acquire relevant information in a changing environment. We present a gene-culture coevolutionary analysis of a small selection of such rules (unbiased social learning, payoff-biased social learning and frequency-dependent biased social learning, including conformism and anti-conformism) in a population of asocial learners where the environment is subject to a constant probability of change to a novel state. We define conditions under which each rule evolves to a genetically polymorphic equilibrium. We find that payoff-biased social learning may evolve under high levels of environmental variation if the fitness benefit associated with the acquired behaviour is either high or low but not of intermediate value. In contrast, both conformist and anti-conformist biases can become fixed when environment variation is low, whereupon the mean fitness in the population is higher than for a population of asocial learners. Our examination of the population dynamics reveals stable limit cycles under conformist and anti-conformist biases and some highly complex dynamics including chaos. Anti-conformists can out-compete conformists when conditions favour a low equilibrium frequency of the learned behaviour. We conclude that evolution, punctuated by the repeated successful invasion of different social learning rules, should continuously favour a reduction in the equilibrium frequency of asocial learning, and propose that, among competing social learning rules, the dominant rule will be the one that can persist with the lowest frequency of asocial learning.

  3. Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning.

    Science.gov (United States)

    van Ginneken, Bram

    2017-03-01

    Half a century ago, the term "computer-aided diagnosis" (CAD) was introduced in the scientific literature. Pulmonary imaging, with chest radiography and computed tomography, has always been one of the focus areas in this field. In this study, I describe how machine learning became the dominant technology for tackling CAD in the lungs, generally producing better results than do classical rule-based approaches, and how the field is now rapidly changing: in the last few years, we have seen how even better results can be obtained with deep learning. The key differences among rule-based processing, machine learning, and deep learning are summarized and illustrated for various applications of CAD in the chest.

  4. Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.

    Science.gov (United States)

    Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei

    2016-05-01

    Extracting general rules from specific examples is important, as we must face the same challenge displayed in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio-visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning is not only dependent on better statistical probability and redundant sensory information, but also the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. © 2015 John Wiley & Sons Ltd.

  5. On-line learning of non-monotonic rules by simple perceptron

    OpenAIRE

    Inoue, Jun-ichi; Nishimori, Hidetoshi; Kabashima, Yoshiyuki

    1997-01-01

    We study the generalization ability of a simple perceptron which learns unlearnable rules. The rules are generated by a teacher perceptron with a non-monotonic transfer function. The student is trained in on-line mode. The asymptotic behaviour of the generalization error is estimated under various conditions. Several learning strategies are proposed and improved to obtain the theoretical lower bound of the generalization error.
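
    The sketch below illustrates the setting numerically under the common assumption of a "reversed-wedge" teacher (output +1 for local fields in (-a, 0) or (a, infinity)): a student perceptron trained on-line with the standard perceptron rule cannot drive the generalization error to zero, which is the sense in which the rule is unlearnable. The dimension, wedge parameter and training schedule are arbitrary choices for the example.

        import numpy as np

        rng = np.random.default_rng(11)
        N, a = 200, 1.0                                   # input dimension, wedge parameter
        teacher = rng.normal(size=N)

        def teacher_label(x):
            h = teacher @ x / np.sqrt(N)
            lab = np.sign(h * (h - a) * (h + a))          # non-monotonic (reversed-wedge) rule
            return lab if lab != 0 else 1.0

        def generalisation_error(w, n_test=5000):
            X = rng.normal(size=(n_test, N))
            y = np.array([teacher_label(x) for x in X])
            s = np.sign(X @ w)
            s[s == 0] = 1
            return float(np.mean(s != y))

        student = np.zeros(N)
        for _ in range(20000):                            # on-line perceptron training
            x = rng.normal(size=N)
            y = teacher_label(x)
            if np.sign(student @ x) * y <= 0:
                student += y * x / np.sqrt(N)
        print("generalisation error of the simple perceptron:", round(generalisation_error(student), 3))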

  6. Symbol manipulation and rule learning in spiking neuronal networks.

    Science.gov (United States)

    Fernando, Chrisantha

    2011-04-21

    It has been claimed that the productivity, systematicity and compositionality of human language and thought necessitate the existence of a physical symbol system (PSS) in the brain. Recent discoveries about temporal coding suggest a novel type of neuronal implementation of a physical symbol system. Furthermore, learning classifier systems provide a plausible algorithmic basis by which symbol re-write rules could be trained to undertake behaviors exhibiting systematicity and compositionality, using a kind of natural selection of re-write rules in the brain. We show how the core operation of a learning classifier system, namely the replication with variation of symbol re-write rules, can be implemented using spike-timing-dependent plasticity based supervised learning. As a whole, the aim of this paper is to integrate an algorithmic and an implementation level description of a neuronal symbol system capable of sustaining systematic and compositional behaviors. Previously proposed neuronal implementations of symbolic representations are compared with this new proposal. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    Science.gov (United States)

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun

    2009-07-01

    Mobile-Learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Currently, Web-based Mobile-Learning Systems have created many new ways and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem that causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a Web-based Mobile-Learning System collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners and so on. Therefore, this paper focuses on a new data-mining algorithm that combines the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a parallel genetic algorithm and simulated annealing algorithm designed specifically for discovering association rules. Moreover, analyses and experiments show that the proposed method is superior to the Apriori algorithm in this Mobile-Learning system.
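
    The core ingredients of such an approach can be sketched as follows: candidate rules are scored by support and confidence over the learner-profile data, and a simulated-annealing acceptance step decides whether a mutated rule replaces the current one (a full genetic population and the parallelism are omitted). The data, attribute indices and fitness combination below are invented.

        import numpy as np

        rng = np.random.default_rng(5)
        n_learners, n_items = 500, 8
        # Hypothetical binary learner-profile data (rows = learners, columns = attributes/events).
        data = rng.random((n_learners, n_items)) < 0.3

        def support_confidence(antecedent, consequent):
            lhs = data[:, antecedent].all(axis=1)
            both = lhs & data[:, consequent].all(axis=1)
            support = both.mean()
            confidence = both.sum() / max(int(lhs.sum()), 1)
            return float(support), float(confidence)

        def fitness(rule):
            sup, conf = support_confidence(*rule)
            return sup * conf                        # one simple way to combine the two measures

        def mutate(rule):
            ant = list(rule[0])
            ant[rng.integers(len(ant))] = int(rng.integers(n_items))   # swap one antecedent item
            return (ant, rule[1])

        rule, temp = ([0, 1], [2]), 1.0
        for _ in range(2000):                        # simulated-annealing acceptance of mutated rules
            cand = mutate(rule)
            delta = fitness(cand) - fitness(rule)
            if delta > 0 or rng.random() < np.exp(delta / temp):
                rule = cand
            temp *= 0.995
        print("rule found:", rule, "fitness:", round(fitness(rule), 4))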

  8. The statistical mechanics of learning a rule

    International Nuclear Information System (INIS)

    Watkin, T.L.H.; Rau, A.; Biehl, M.

    1993-01-01

    A summary is presented of the statistical mechanical theory of learning a rule with a neural network, a rapidly advancing area which is closely related to other inverse problems frequently encountered by physicists. By emphasizing the relationship between neural networks and strongly interacting physical systems, such as spin glasses, the authors show how learning theory has provided a workshop in which to develop new, exact analytical techniques
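
    A central quantity in this theory is the generalization error of a student network as a function of its overlap with the teacher. For the standard case of a linearly separable rule and isotropically distributed inputs it takes the well-known form

        \[
        \varepsilon_g \;=\; \frac{1}{\pi}\arccos R,
        \qquad
        R \;=\; \frac{\mathbf{J}\cdot\mathbf{T}}{\lVert\mathbf{J}\rVert\,\lVert\mathbf{T}\rVert},
        \]

    where J is the student and T the teacher coupling vector; the statistical mechanics programme summarized here computes how R, and hence the generalization error, evolves with the number of examples per weight.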

  9. Enhanced detection threshold for in vivo cortical stimulation produced by Hebbian conditioning

    Science.gov (United States)

    Rebesco, James M.; Miller, Lee E.

    2011-02-01

    Normal brain function requires constant adaptation, as an organism learns to associate important sensory stimuli with the appropriate motor actions. Neurological disorders may disrupt these learned associations and require the nervous system to reorganize itself. As a consequence, neural plasticity is a crucial component of normal brain function and a critical mechanism for recovery from injury. Associative, or Hebbian, pairing of pre- and post-synaptic activity has been shown to alter stimulus-evoked responses in vivo; however, to date, such protocols have not been shown to affect the animal's subsequent behavior. We paired stimulus trains separated by a brief time delay to two electrodes in rat sensorimotor cortex, which changed the statistical pattern of spikes during subsequent behavior. These changes were consistent with strengthened functional connections from the leading electrode to the lagging electrode. We then trained rats to respond to a microstimulation cue, and repeated the paradigm using the cue electrode as the leading electrode. This pairing lowered the rat's ICMS-detection threshold, with the same dependence on intra-electrode time lag that we found for the functional connectivity changes. The timecourse of the behavioral effects was very similar to that of the connectivity changes. We propose that the behavioral changes were a consequence of strengthened functional connections from the cue electrode to other regions of sensorimotor cortex. Such paradigms might be used to augment recovery from a stroke, or to promote adaptation in a bidirectional brain-machine interface.

  10. Learning a New Selection Rule in Visual and Frontal Cortex

    NARCIS (Netherlands)

    van der Togt, Chris; Stănişor, Liviu; Pooresmaeili, Arezoo; Albantakis, Larissa; Deco, Gustavo; Roelfsema, Pieter R

    2016-01-01

    How do you make a decision if you do not know the rules of the game? Models of sensory decision-making suggest that choices are slow if evidence is weak, but they may only apply if the subject knows the task rules. Here, we asked how the learning of a new rule influences neuronal activity in the visual (area V1) and frontal cortex (area FEF) of monkeys.

  11. Autonomous learning in gesture recognition by using lobe component analysis

    Science.gov (United States)

    Lu, Jian; Weng, Juyang

    2007-02-01

    Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, 1) feature selection (or model establishment) and 2) training from samples largely determine the performance of gesture recognition. For 1), a simple model with 6 feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered; these restrictions reduce misrecognition and are not unreasonable. For 2), a new biological network method, called lobe component analysis (LCA), is used for unsupervised learning. Lobe components, corresponding to high concentrations in the probability distribution of the neuronal input, are orientation-selective cells that follow a Hebbian rule with lateral inhibition. Owing to the advantage of the LCA method for balanced learning between global and local features, large numbers of samples can be used for learning efficiently.
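
    The flavour of such learning can be conveyed by a generic winner-take-all Hebbian update, in which lateral inhibition is caricatured by letting only the best-matching component move toward the input with a decreasing, component-specific learning rate. This is only in the spirit of lobe components; Weng's exact amnesic-average schedule and the gesture feature extraction are not reproduced, and the toy data are invented.

        import numpy as np

        rng = np.random.default_rng(9)
        n_features, n_lobes = 12, 5                  # e.g. flattened (x, y) coordinates of 6 arm points
        W = rng.normal(size=(n_lobes, n_features))
        W /= np.linalg.norm(W, axis=1, keepdims=True)
        counts = np.zeros(n_lobes)

        def update(x):
            responses = W @ x
            winner = int(np.argmax(responses))       # lateral inhibition as winner-take-all
            counts[winner] += 1
            lr = 1.0 / counts[winner]                # decreasing, component-specific learning rate
            W[winner] = (1 - lr) * W[winner] + lr * responses[winner] * x   # Hebbian move toward input
            W[winner] /= np.linalg.norm(W[winner])

        centres = rng.normal(size=(n_lobes, n_features))          # toy "gesture" clusters
        for _ in range(3000):
            c = int(rng.integers(n_lobes))
            update(centres[c] + 0.3 * rng.normal(size=n_features))
        print("winning lobe per cluster centre:", [int(np.argmax(W @ c)) for c in centres])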

  12. Strategies for adding adaptive learning mechanisms to rule-based diagnostic expert systems

    Science.gov (United States)

    Stclair, D. C.; Sabharwal, C. L.; Bond, W. E.; Hacke, Keith

    1988-01-01

    Rule-based diagnostic expert systems can be used to perform many of the diagnostic chores necessary in today's complex space systems. These expert systems typically take a set of symptoms as input and produce diagnostic advice as output. The primary objective of such expert systems is to provide accurate and comprehensive advice which can be used to help return the space system in question to nominal operation. The development and maintenance of diagnostic expert systems is time and labor intensive since the services of both knowledge engineer(s) and domain expert(s) are required. The use of adaptive learning mechanisms to incrementally evaluate and refine rules promises to reduce both the time and labor costs associated with such systems. This paper describes the basic adaptive learning mechanisms of strengthening, weakening, generalization, discrimination, and discovery. Next, basic strategies are discussed for adding these learning mechanisms to rule-based diagnostic expert systems. These strategies support the incremental evaluation and refinement of rules in the knowledge base by comparing the set of advice given by the expert system (A) with the correct diagnosis (C). Techniques are described for selecting those rules in the knowledge base which should participate in adaptive learning. The strategies presented may be used with a wide variety of learning algorithms. Further, these strategies are applicable to a large number of rule-based diagnostic expert systems. They may be used to provide either immediate or deferred updating of the knowledge base.
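
    The strengthening/weakening idea can be made concrete with a toy update that compares the advice set A with the correct diagnosis C after each episode. The rule names, strengths, advice strings and step size below are all hypothetical; they do not come from the paper.

        # Hypothetical rule base: each rule carries a strength and the advice it produces.
        rules = {
            "low_pressure_implies_valve_leak":  {"strength": 0.5, "advice": "valve_leak"},
            "low_pressure_implies_sensor_fail": {"strength": 0.5, "advice": "sensor_fail"},
        }

        def adapt(fired_rules, advice_given, correct_diagnosis, step=0.1):
            """Strengthen rules whose advice appears in the correct diagnosis C;
            weaken rules whose advice was given (in A) but is not in C."""
            for name in fired_rules:
                rule = rules[name]
                if rule["advice"] in correct_diagnosis:
                    rule["strength"] = min(1.0, rule["strength"] + step)
                elif rule["advice"] in advice_given:
                    rule["strength"] = max(0.0, rule["strength"] - step)

        # One diagnostic episode: both rules fired, the system advised A, the confirmed diagnosis was C.
        A = {"valve_leak", "sensor_fail"}
        C = {"valve_leak"}
        adapt(rules.keys(), A, C)
        print({name: r["strength"] for name, r in rules.items()})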

  13. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    Science.gov (United States)

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.

  14. Domain-specific and domain-general constraints on word and sequence learning.

    Science.gov (United States)

    Archibald, Lisa M D; Joanisse, Marc F

    2013-02-01

    The relative influences of language-related and memory-related constraints on the learning of novel words and sequences were examined by comparing individual differences in performance of children with and without specific deficits in either language or working memory. Children recalled lists of words in a Hebbian learning protocol in which occasional lists repeated, yielding improved recall over the course of the task on the repeated lists. The task involved presentation of pictures of common nouns followed immediately by equivalent presentations of the spoken names. The same participants also completed a paired-associate learning task involving word-picture and nonword-picture pairs. Hebbian learning was observed for all groups. Domain-general working memory constrained immediate recall, whereas language abilities impacted recall in the auditory modality only. In addition, working memory constrained paired-associate learning generally, whereas language abilities disproportionately impacted novel word learning. Overall, all of the learning tasks were highly correlated with domain-general working memory. The learning of nonwords was additionally related to general intelligence, phonological short-term memory, language abilities, and implicit learning. The results suggest that distinct associations between language- and memory-related mechanisms support learning of familiar and unfamiliar phonological forms and sequences.

  15. Role of Prefrontal Cortex in Learning and Generalizing Hierarchical Rules in 8-Month-Old Infants.

    Science.gov (United States)

    Werchan, Denise M; Collins, Anne G E; Frank, Michael J; Amso, Dima

    2016-10-05

    Recent research indicates that adults and infants spontaneously create and generalize hierarchical rule sets during incidental learning. Computational models and empirical data suggest that, in adults, this process is supported by circuits linking prefrontal cortex (PFC) with striatum and their modulation by dopamine, but the neural circuits supporting this form of learning in infants are largely unknown. We used near-infrared spectroscopy to record PFC activity in 8-month-old human infants during a simple audiovisual hierarchical-rule-learning task. Behavioral results confirmed that infants adopted hierarchical rule sets to learn and generalize spoken object-label mappings across different speaker contexts. Infants had increased activity over right dorsal lateral PFC when rule sets switched from one trial to the next, a neural marker related to updating rule sets into working memory in the adult literature. Infants' eye blink rate, a possible physiological correlate of striatal dopamine activity, also increased when rule sets switched from one trial to the next. Moreover, the increase in right dorsolateral PFC activity in conjunction with eye blink rate also predicted infants' generalization ability, providing exploratory evidence for frontostriatal involvement during learning. These findings provide evidence that PFC is involved in rudimentary hierarchical rule learning in 8-month-old infants, an ability that was previously thought to emerge later in life in concert with PFC maturation. Hierarchical rule learning is a powerful learning mechanism that allows rules to be selected in a context-appropriate fashion and transferred or reused in novel contexts. Data from computational models and adults suggests that this learning mechanism is supported by dopamine-innervated interactions between prefrontal cortex (PFC) and striatum. Here, we provide evidence that PFC also supports hierarchical rule learning during infancy, challenging the current dogma that PFC is an

  16. Learning and innovative elements of strategy adoption rules expand cooperative network topologies.

    Science.gov (United States)

    Wang, Shijun; Szalay, Máté S; Zhang, Changshui; Csermely, Peter

    2008-04-09

    Cooperation plays a key role in the evolution of complex systems. However, the level of cooperation extensively varies with the topology of agent networks in the widely used models of repeated games. Here we show that cooperation remains rather stable by applying the reinforcement learning strategy adoption rule, Q-learning, on a variety of random, regular, small-world, scale-free and modular network models in repeated, multi-agent Prisoner's Dilemma and Hawk-Dove games. Furthermore, we found that using the above model systems other long-term learning strategy adoption rules also promote cooperation, while introducing a low level of noise (as a model of innovation) to the strategy adoption rules makes the level of cooperation less dependent on the actual network topology. Our results demonstrate that long-term learning and random elements in the strategy adoption rules, when acting together, extend the range of network topologies enabling the development of cooperation at a wider range of costs and temptations. These results suggest that a balanced duo of learning and innovation may help to preserve cooperation during the re-organization of real-world networks, and may play a prominent role in the evolution of self-organizing, complex systems.
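
    A stripped-down version of the Q-learning strategy adoption rule on a regular (ring) network can be sketched as below: each agent keeps stateless Q-values for cooperate/defect, chooses epsilon-greedily, and receives the average Prisoner's Dilemma payoff against its neighbours. This is an illustrative caricature, not the exact model or parameterization of the study.

        import numpy as np

        rng = np.random.default_rng(2)
        n_agents, rounds = 50, 3000
        eps, alpha, gamma = 0.05, 0.1, 0.9
        R, S, T, P = 3.0, 0.0, 5.0, 1.0            # Prisoner's Dilemma payoffs
        neighbours = [((i - 1) % n_agents, (i + 1) % n_agents) for i in range(n_agents)]   # ring lattice
        Q = np.zeros((n_agents, 2))                # stateless Q-values: action 0 = defect, 1 = cooperate

        def payoff(me, other):
            return [[P, T], [S, R]][me][other]

        for _ in range(rounds):
            greedy = Q.argmax(axis=1)
            explore = rng.random(n_agents) < eps
            actions = np.where(explore, rng.integers(0, 2, n_agents), greedy)
            for i in range(n_agents):
                reward = np.mean([payoff(int(actions[i]), int(actions[j])) for j in neighbours[i]])
                a = int(actions[i])
                Q[i, a] += alpha * (reward + gamma * Q[i].max() - Q[i, a])   # Q-learning update
        print("final fraction of cooperators:", float(np.mean(Q.argmax(axis=1))))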

  17. Category learning strategies in younger and older adults: Rule abstraction and memorization.

    Science.gov (United States)

    Wahlheim, Christopher N; McDaniel, Mark A; Little, Jeri L

    2016-06-01

    Despite the fundamental role of category learning in cognition, few studies have examined how this ability differs between younger and older adults. The present experiment examined possible age differences in category learning strategies and their effects on learning. Participants were trained on a category determined by a disjunctive rule applied to relational features. The utilization of rule- and exemplar-based strategies was indexed by self-reports and transfer performance. Based on self-reported strategies, the frequencies of rule- and exemplar-based learners were not significantly different between age groups, but there was a significantly higher frequency of intermediate learners (i.e., learners not identifying with a reliance on either rule- or exemplar-based strategies) in the older than younger adult group. Training performance was higher for younger than older adults regardless of the strategy utilized, showing that older adults were impaired in their ability to learn the correct rule or to remember exemplar-label associations. Transfer performance converged with strategy reports in showing higher fidelity category representations for younger adults. Younger adults with high working memory capacity were more likely to use an exemplar-based strategy, and older adults with high working memory capacity showed better training performance. Age groups did not differ in their self-reported memory beliefs, and these beliefs did not predict training strategies or performance. Overall, the present results contradict earlier findings that older adults prefer rule- to exemplar-based learning strategies, presumably to compensate for memory deficits. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Ensemble learning with trees and rules: supervised, semi-supervised, unsupervised

    Science.gov (United States)

    In this article, we propose several new approaches for post processing a large ensemble of conjunctive rules for supervised and semi-supervised learning problems. We show with various examples that for high dimensional regression problems the models constructed by the post processing the rules with ...

  19. A supervised learning rule for classification of spatiotemporal spike patterns.

    Science.gov (United States)

    Lilin Guo; Zhenzhong Wang; Adjouadi, Malek

    2016-08-01

    This study introduces a novel supervised algorithm for spiking neurons that takes into consideration synapse delays and axonal delays associated with weights. It can be utilized for both classification and association and uses several biologically influenced properties, such as axonal and synaptic delays. This algorithm also takes into consideration spike-timing-dependent plasticity, as in the Remote Supervised Method (ReSuMe). This paper focuses on the classification aspect alone. Spiking neurons trained according to the proposed learning rule are capable of classifying different categories by the associated sequences of precisely timed spikes. Simulation results have shown that the proposed learning method greatly improves classification accuracy when compared to the Spike Pattern Association Neuron (SPAN) and the Tempotron learning rule.

  20. RuleML-Based Learning Object Interoperability on the Semantic Web

    Science.gov (United States)

    Biletskiy, Yevgen; Boley, Harold; Ranganathan, Girish R.

    2008-01-01

    Purpose: The present paper aims to describe an approach for building the Semantic Web rules for interoperation between heterogeneous learning objects, namely course outlines from different universities, and one of the rule uses: identifying (in)compatibilities between course descriptions. Design/methodology/approach: As proof of concept, a rule…

  1. RULE-BASE METHOD FOR ANALYSIS OF QUALITY E-LEARNING IN HIGHER EDUCATION

    Directory of Open Access Journals (Sweden)

    darsih darsih darsih

    2016-04-01

    Full Text Available Assessing the quality of e-learning courses to measure the success of e-learning systems in online learning is essential, and the results can be used to improve education. The study analyzes the quality of e-learning courses on the website www.kulon.undip.ac.id using a questionnaire with questions based on the variables of ISO 9126; ratings were collected on a Likert scale through a web application. A rule-based reasoning method is used to assess the quality of each e-learning course. A case study was conducted on four e-learning courses with 133 respondents as users of the courses. The results indicate good quality scores for each of the e-learning courses tested. In addition, each e-learning course has different strengths depending on certain variables. Keywords: E-Learning, Rule-Base, Questionnaire, Likert, Measuring.

  2. Learning a Transferable Change Rule from a Recurrent Neural Network for Land Cover Change Detection

    Directory of Open Access Journals (Sweden)

    Haobo Lyu

    2016-06-01

    Full Text Available When exploited in remote sensing analysis, a reliable change rule with transfer ability can detect changes accurately and be applied widely. However, in practice, the complexity of land cover changes makes it difficult to use only one change rule or change feature learned from a given multi-temporal dataset to detect any other new target images without applying other learning processes. In this study, we consider the design of an efficient change rule having transferability to detect both binary and multi-class changes. The proposed method relies on an improved Long Short-Term Memory (LSTM) model to acquire and record the change information of long-term sequence remote sensing data. In particular, a core memory cell is utilized to learn the change rule from the information concerning binary changes or multi-class changes. Three gates are utilized to control the input, output and update of the LSTM model for optimization. In addition, the learned rule can be applied to detect changes and transfer the change rule from one learned image to another new target multi-temporal image. In this study, binary experiments, transfer experiments and multi-class change experiments are exploited to demonstrate the superiority of our method. Three contributions of this work can be summarized as follows: (1) the proposed method can learn an effective change rule to provide reliable change information for multi-temporal images; (2) the learned change rule has good transferability for detecting changes in new target images without any extra learning process, and the new target images should have a multi-spectral distribution similar to that of the training images; and (3) to the authors' best knowledge, this is the first time that deep learning in recurrent neural networks is exploited for change detection. In addition, under the framework of the proposed method, changes can be detected under both binary detection and multi-class change detection.

  3. Compensatory Processing During Rule-Based Category Learning in Older Adults

    Science.gov (United States)

    Bharani, Krishna L.; Paller, Ken A.; Reber, Paul J.; Weintraub, Sandra; Yanar, Jorge; Morrison, Robert G.

    2016-01-01

    Healthy older adults typically perform worse than younger adults at rule-based category learning, but better than patients with Alzheimer's or Parkinson's disease. To further investigate aging's effect on rule-based category learning, we monitored event-related potentials (ERPs) while younger and neuropsychologically typical older adults performed a visual category-learning task with a rule-based category structure and trial-by-trial feedback. Using these procedures, we previously identified ERPs sensitive to categorization strategy and accuracy in young participants. In addition, previous studies have demonstrated the importance of neural processing in the prefrontal cortex and the medial temporal lobe for this task. In this study, older adults showed lower accuracy and longer response times than younger adults, but there were two distinct subgroups of older adults. One subgroup showed near-chance performance throughout the procedure, never categorizing accurately. The other subgroup reached asymptotic accuracy that was equivalent to that in younger adults, although they categorized more slowly. These two subgroups were further distinguished via ERPs. Consistent with the compensation theory of cognitive aging, older adults who successfully learned showed larger frontal ERPs when compared with younger adults. Recruitment of prefrontal resources may have improved performance while slowing response times. Additionally, correlations of feedback-locked P300 amplitudes with category-learning accuracy differentiated successful younger and older adults. Overall, the results suggest that the ability to adapt one's behavior in response to feedback during learning varies across older individuals, and that the failure of some to adapt their behavior may reflect inadequate engagement of prefrontal cortex. PMID:26422522

  4. Rule-based category learning in children: the role of age and executive functioning.

    Directory of Open Access Journals (Sweden)

    Rahel Rabi

    Full Text Available Rule-based category learning was examined in 4-11 year-olds and adults. Participants were asked to learn a set of novel perceptual categories in a classification learning task. Categorization performance improved with age, with younger children showing the strongest rule-based deficit relative to older children and adults. Model-based analyses provided insight regarding the type of strategy being used to solve the categorization task, demonstrating that the use of the task appropriate strategy increased with age. When children and adults who identified the correct categorization rule were compared, the performance deficit was no longer evident. Executive functions were also measured. While both working memory and inhibitory control were related to rule-based categorization and improved with age, working memory specifically was found to marginally mediate the age-related improvements in categorization. When analyses focused only on the sample of children, results showed that working memory ability and inhibitory control were associated with categorization performance and strategy use. The current findings track changes in categorization performance across childhood, demonstrating at which points performance begins to mature and resemble that of adults. Additionally, findings highlight the potential role that working memory and inhibitory control may play in rule-based category learning.

  5. Learning of Alignment Rules between Concept Hierarchies

    Science.gov (United States)

    Ichise, Ryutaro; Takeda, Hideaki; Honiden, Shinichi

    With the rapid advances of information technology, we are acquiring much more information than ever before. As a result, we need tools for organizing this data. Concept hierarchies such as ontologies and information categorizations are powerful and convenient methods for accomplishing this goal, and they have gained widespread acceptance. Although each concept hierarchy is useful, it is difficult to employ multiple concept hierarchies at the same time because it is hard to align their conceptual structures. This paper proposes a rule learning method that takes information instances from a source concept hierarchy and finds suitable locations for them in a target hierarchy. The key idea is to find the most similar categories in each hierarchy, where similarity is measured by the κ (kappa) statistic, which counts instances belonging to both categories. In order to evaluate our method, we conducted experiments using two internet directories: Yahoo! and LYCOS. We map information instances from the source directory into the target directory, and show that our learned rules agree with a human-generated assignment 76% of the time.
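
    The κ-based similarity can be computed directly from category memberships over a shared pool of instances, as in the sketch below; the directory names and instance sets are hypothetical and only illustrate the computation.

        import numpy as np

        def kappa(cat_a, cat_b, instances):
            """Cohen's kappa between two categories, treating 'instance belongs to the
            category' as a binary judgement over a shared pool of instances."""
            a = np.array([i in cat_a for i in instances])
            b = np.array([i in cat_b for i in instances])
            p_o = float(np.mean(a == b))                                   # observed agreement
            p_e = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())    # chance agreement
            return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

        # Hypothetical data: page identifiers assigned to a category in each directory.
        pool = list(range(100))
        yahoo_sports = set(range(0, 30))
        lycos_sports = set(range(5, 32))
        print("kappa(Yahoo!:Sports, LYCOS:Sports):", round(kappa(yahoo_sports, lycos_sports, pool), 3))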

  6. Unsupervised learning in neural networks with short range synapses

    Science.gov (United States)

    Brunnet, L. G.; Agnes, E. J.; Mizusaki, B. E. P.; Erichsen, R., Jr.

    2013-01-01

    Different areas of the brain are involved in specific aspects of the information being processed, both in learning and in memory formation. For example, the hippocampus is important in the consolidation of information from short-term memory to long-term memory, while emotional memory seems to be dealt with by the amygdala. On the microscopic scale the underlying structures in these areas differ in the kind of neurons involved, in their connectivity, or in their clustering degree but, at this level, learning and memory are attributed to neuronal synapses mediated by long-term potentiation and long-term depression. In this work we explore the properties of a short-range synaptic connection network, a nearest-neighbor lattice composed mostly of excitatory neurons and a fraction of inhibitory ones. The mechanism of synaptic modification responsible for the emergence of memory is Spike-Timing-Dependent Plasticity (STDP), a Hebbian-like rule in which potentiation or depression occurs when causal or non-causal spike pairs happen at a synapse connecting two neurons. The system is intended to store and recognize memories associated with spatial external inputs presented as simple geometrical forms. The synaptic modifications are continuously applied to excitatory connections, including a homeostasis rule and STDP. In this work we explore the different scenarios under which a network with short-range connections can accomplish the task of storing and recognizing simple connected patterns.
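
    For reference, the pair-based form of the STDP rule invoked here can be written as a weight change that depends on the sign and size of the pre/post spike-time difference; the amplitudes, time constants and example spike pairings below are generic choices, and the homeostasis rule of the paper is omitted.

        import numpy as np

        a_plus, a_minus = 0.01, 0.012          # potentiation / depression amplitudes
        tau_plus = tau_minus = 20.0            # ms, widths of the STDP window

        def stdp_dw(t_pre, t_post):
            """Pair-based STDP: causal order (pre before post) potentiates,
            anti-causal order depresses."""
            dt = t_post - t_pre
            if dt > 0:
                return a_plus * np.exp(-dt / tau_plus)
            return -a_minus * np.exp(dt / tau_minus)

        w = 0.5
        for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 71.0)]:   # example spike pairings (ms)
            w = float(np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0))
        print("excitatory weight after three pairings:", round(w, 4))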

  7. Learning of Rule Ensembles for Multiple Attribute Ranking Problems

    Science.gov (United States)

    Dembczyński, Krzysztof; Kotłowski, Wojciech; Słowiński, Roman; Szeląg, Marcin

    In this paper, we consider the multiple attribute ranking problem from a Machine Learning perspective. We propose two approaches to statistical learning of an ensemble of decision rules from decision examples provided by the Decision Maker in terms of pairwise comparisons of some objects. The first approach consists in learning a preference function defining a binary preference relation for a pair of objects. The result of application of this function on all pairs of objects to be ranked is then exploited using the Net Flow Score procedure, giving a linear ranking of objects. The second approach consists in learning a utility function for single objects. The utility function also gives a linear ranking of objects. In both approaches, the learning is based on the boosting technique. The presented approaches to Preference Learning share good properties of the decision rule preference model and perform well in massive-data learning problems. As Preference Learning and Multiple Attribute Decision Aiding share many concepts and methodological issues, in the introduction, we review some aspects bridging these two fields. To illustrate the two approaches proposed in this paper, we use them to solve a toy example concerning the ranking of a set of cars evaluated by multiple attributes. Then, we perform a large-scale experiment on real data sets. The first data set concerns credit rating. Since recent research in the field of Preference Learning is motivated by the increasing role of modeling preferences in recommender systems and information retrieval, we chose two other massive data sets from this area - one comes from the movie recommender system MovieLens, and the other concerns ranking of text documents from the 20 Newsgroups data set.
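
    The Net Flow Score exploitation step mentioned above can be illustrated with a short sketch: given some learned pairwise preference function (here a hypothetical toy function over made-up car data, standing in for the rule ensemble), each object is scored by how often it is preferred to others minus how often others are preferred to it, and objects are ranked by that score.

```python
from itertools import permutations

# Sketch of the Net Flow Score exploitation step. The preference function below
# is a hypothetical stand-in for the learned rule ensemble described in the paper.

cars = {"A": {"price": 18, "speed": 210}, "B": {"price": 25, "speed": 240}, "C": {"price": 22, "speed": 200}}

def prefers(x, y):
    """Toy preference: x is preferred to y if it wins on at least one attribute and loses on none."""
    better = (cars[x]["price"] < cars[y]["price"]) + (cars[x]["speed"] > cars[y]["speed"])
    worse = (cars[x]["price"] > cars[y]["price"]) + (cars[x]["speed"] < cars[y]["speed"])
    return better > 0 and worse == 0

net_flow = {c: 0 for c in cars}
for x, y in permutations(cars, 2):
    if prefers(x, y):
        net_flow[x] += 1   # x dominates y
        net_flow[y] -= 1   # y is dominated by x

ranking = sorted(cars, key=lambda c: net_flow[c], reverse=True)
print("ranking:", ranking, "scores:", net_flow)
```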

  8. Differential impact of relevant and irrelevant dimension primes on rule-based and information-integration category learning.

    Science.gov (United States)

    Grimm, Lisa R; Maddox, W Todd

    2013-11-01

    Research has identified multiple category-learning systems with each being "tuned" for learning categories with different task demands and each governed by different neurobiological systems. Rule-based (RB) classification involves testing verbalizable rules for category membership while information-integration (II) classification requires the implicit learning of stimulus-response mappings. In the first study to directly test rule priming with RB and II category learning, we investigated the influence of the availability of information presented at the beginning of the task. Participants viewed lines that varied in length, orientation, and position on the screen, and were primed to focus on stimulus dimensions that were relevant or irrelevant to the correct classification rule. In Experiment 1, we used an RB category structure, and in Experiment 2, we used an II category structure. Accuracy and model-based analyses suggested that a focus on relevant dimensions improves RB task performance later in learning while a focus on an irrelevant dimension improves II task performance early in learning. © 2013.

  9. Robotic Assistance for Training Finger Movement Using a Hebbian Model: A Randomized Controlled Trial.

    Science.gov (United States)

    Rowe, Justin B; Chan, Vicky; Ingemanson, Morgan L; Cramer, Steven C; Wolbrecht, Eric T; Reinkensmeyer, David J

    2017-08-01

    Robots that physically assist movement are increasingly used in rehabilitation therapy after stroke, yet some studies suggest robotic assistance discourages effort and reduces motor learning. Our objective was to determine the therapeutic effects of high and low levels of robotic assistance during finger training. We designed a protocol that varied the amount of robotic assistance while controlling the number, amplitude, and exerted effort of training movements. Participants (n = 30) with a chronic stroke and moderate hemiparesis (average Box and Blocks Test 32 ± 18 and upper extremity Fugl-Meyer score 46 ± 12) actively moved their index and middle fingers to targets to play a musical game similar to GuitarHero, 3 h/wk for 3 weeks. The participants were randomized to receive high assistance (causing 82% success at hitting targets) or low assistance (55% success). Participants performed ~8000 movements during 9 training sessions. Both groups improved significantly at the 1-month follow-up on functional and impairment-based motor outcomes, on depression scores, and on self-efficacy of hand function, with no difference between groups in the primary endpoint (change in Box and Blocks). High assistance boosted motivation, as well as secondary motor outcomes (Fugl-Meyer and Lateral Pinch Strength), particularly for individuals with more severe finger motor deficits. Individuals with impaired finger proprioception at baseline benefited less from the training. Robot-assisted training can promote key psychological outcomes known to modulate motor learning and retention. Furthermore, the therapeutic effectiveness of robotic assistance appears to derive at least in part from proprioceptive stimulation, consistent with a Hebbian plasticity model.

  10. Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity.

    Directory of Open Access Journals (Sweden)

    Christian Albers

    Full Text Available Precise spatio-temporal patterns of neuronal action potentials underlie, e.g., sensory representations and control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike timing dependent plasticity for inhibitory synapses as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of the MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns.

  11. Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity.

    Science.gov (United States)

    Albers, Christian; Westkott, Maren; Pawelzik, Klaus

    2016-01-01

    Precise spatio-temporal patterns of neuronal action potentials underlie, e.g., sensory representations and control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike timing dependent plasticity for inhibitory synapses as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of the MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns.

  12. Topic categorisation of statements in suicide notes with integrated rules and machine learning.

    Science.gov (United States)

    Kovačević, Aleksandar; Dehghan, Azad; Keane, John A; Nenadic, Goran

    2012-01-01

    We describe and evaluate an automated approach used as part of the i2b2 2011 challenge to identify and categorise statements in suicide notes into one of 15 topics, including Love, Guilt, Thankfulness, Hopelessness and Instructions. The approach combines a set of lexico-syntactic rules with a set of models derived by machine learning from a training dataset. The machine learning models rely on named entities, lexical, lexico-semantic and presentation features, as well as the rules that are applicable to a given statement. On a testing set of 300 suicide notes, the approach showed the overall best micro F-measure of up to 53.36%. The best precision achieved was 67.17% when only rules are used, whereas best recall of 50.57% was with integrated rules and machine learning. While some topics (eg, Sorrow, Anger, Blame) prove challenging, the performance for relatively frequent (eg, Love) and well-scoped categories (eg, Thankfulness) was comparatively higher (precision between 68% and 79%), suggesting that automated text mining approaches can be effective in topic categorisation of suicide notes.
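
    A minimal sketch of the general rules-plus-machine-learning idea (with hypothetical trigger words, labels, and toy sentences, not the i2b2 2011 data) appends binary rule-firing indicators to bag-of-words features before training a classifier:

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Sketch of combining lexical rules with a statistical classifier, in the spirit
# of the hybrid approach described above. Rules, labels and sentences are toy data.

rules = {
    "Love": ["love", "dear"],
    "Instructions": ["please", "call", "give"],
    "Thankfulness": ["thank"],
}

def rule_features(texts):
    """One binary feature per rule: does any of its trigger words appear?"""
    feats = np.zeros((len(texts), len(rules)))
    for i, t in enumerate(texts):
        for j, words in enumerate(rules.values()):
            feats[i, j] = float(any(w in t.lower() for w in words))
    return csr_matrix(feats)

train_texts = ["I love you all so much", "Please call my sister", "Thank you for everything"]
train_labels = ["Love", "Instructions", "Thankfulness"]

vec = TfidfVectorizer()
X = hstack([vec.fit_transform(train_texts), rule_features(train_texts)])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

test = ["please give the keys to John"]
X_test = hstack([vec.transform(test), rule_features(test)])
print(clf.predict(X_test))
```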

  13. LTD windows of the STDP learning rule and synaptic connections having a large transmission delay enable robust sequence learning amid background noise.

    Science.gov (United States)

    Hayashi, Hatsuo; Igarashi, Jun

    2009-06-01

    Spike-timing-dependent synaptic plasticity (STDP) is a simple and effective learning rule for sequence learning. However, synapses subject to STDP rules are readily influenced in noisy circumstances because synaptic conductances are modified by pre- and postsynaptic spikes elicited within a few tens of milliseconds, regardless of whether those spikes convey information or not. Noisy firing existing everywhere in the brain may induce irrelevant enhancement of synaptic connections through STDP rules and would result in uncertain memory encoding and obscure memory patterns. Here we show that the LTD windows of the STDP rules enable robust sequence learning amid background noise in cooperation with a large signal transmission delay between neurons and a theta rhythm, using a network model of the entorhinal cortex layer II with entorhinal-hippocampal loop connections. The important element of the present model for robust sequence learning amid background noise is the symmetric STDP rule having LTD windows on both sides of the LTP window, in addition to the loop connections having a large signal transmission delay and the theta rhythm pacing activities of stellate cells. Above all, the LTD window in the range of positive spike-timing is important to prevent influences of noise with the progress of sequence learning.
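
    A schematic picture of such a symmetric STDP rule, with a central LTP window flanked by LTD at both negative and positive spike timings, can be sketched as follows; the window shape and constants are illustrative, not fitted to the model above.

```python
import numpy as np

# Illustrative shape of a symmetric STDP window with LTD regions on both sides of
# the central LTP window (schematic parameters only; not taken from the paper).

def symmetric_stdp_window(dt, a_ltp=1.0, sigma_ltp=10.0, a_ltd=0.4, sigma_ltd=40.0):
    """Weight change as a function of spike timing dt = t_post - t_pre (ms).
    A narrow positive Gaussian (LTP) sits on top of a broad negative Gaussian (LTD),
    so large |dt| -- including positive spike timings -- produces depression."""
    ltp = a_ltp * np.exp(-dt**2 / (2 * sigma_ltp**2))
    ltd = -a_ltd * np.exp(-dt**2 / (2 * sigma_ltd**2))
    return ltp + ltd

for dt in (-60, -20, 0, 20, 60):
    print(f"dt = {dt:+4d} ms -> dw ~ {symmetric_stdp_window(dt):+.3f}")
```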

  14. Adaptive Learning Rule for Hardware-based Deep Neural Networks Using Electronic Synapse Devices

    OpenAIRE

    Lim, Suhwan; Bae, Jong-Ho; Eum, Jai-Ho; Lee, Sungtae; Kim, Chul-Heung; Kwon, Dongseok; Park, Byung-Gook; Lee, Jong-Ho

    2017-01-01

    In this paper, we propose a learning rule based on a back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network (HW-DNN) using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward, backward propagation, as well as weight updates in hardware, is helpful during the implementation of power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron net...

  15. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.

    Science.gov (United States)

    Auer, Peter; Burgsteiner, Harald; Maass, Wolfgang

    2008-06-01

    One may argue that the simplest type of neural networks beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if one views the majority of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with values in [0,1] if one views the fraction of perceptrons that output 1 as the analog output of the parallel perceptron. Note that in contrast to the familiar model of a "multi-layer perceptron" the parallel perceptron that we consider here has just binary values as outputs of gates on the hidden layer. For a long time it was thought that there exists no competitive learning algorithm for these extremely simple neural networks, which also came to be known as committee machines. It is commonly assumed that one has to replace the hard threshold gates on the hidden layer by sigmoidal gates (or RBF-gates) and that one has to tune the weights on at least two successive layers in order to achieve satisfactory learning results for any class of neural networks that yield universal approximators. We show that this assumption is not true, by exhibiting a simple learning algorithm for parallel perceptrons - the parallel delta rule (p-delta rule). In contrast to backprop for multi-layer perceptrons, the p-delta rule only has to tune a single layer of weights, and it does not require the computation and communication of analog values with high precision. Reduced communication also distinguishes our new learning rule from other learning rules for parallel perceptrons such as MADALINE. Obviously these features make the p-delta rule attractive as a biologically more realistic alternative to backprop in biological neural circuits, but also for implementations in special purpose hardware. We show that the p-delta rule also implements gradient descent with regard to a suitable error measure
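
    As a loose, simplified sketch of the parallel-perceptron setting (majority vote of threshold gates, with only the single hidden layer of weights being adjusted), the code below trains a small committee of perceptrons on toy data; it omits the margin and clearing terms of the full p-delta rule and uses illustrative parameters.

```python
import numpy as np

# Simplified committee-machine sketch in the spirit of the parallel perceptron.
# Only the hidden-layer weights are adjusted; the output is a majority vote.
# This is not the full p-delta rule (its margin term is omitted for brevity).

rng = np.random.default_rng(0)
n_perceptrons, dim, eta = 11, 2, 0.05
W = rng.normal(size=(n_perceptrons, dim + 1))        # one bias weight per perceptron

def outputs(x):
    x1 = np.append(x, 1.0)                           # append constant bias input
    return np.where(W @ x1 >= 0, 1, -1), x1

def predict(x):
    o, _ = outputs(x)
    return 1 if o.sum() >= 0 else -1                 # majority vote of the gates

# Toy linearly separable data: label = sign of (x0 + x1)
X = rng.uniform(-1, 1, size=(200, dim))
y = np.where(X.sum(axis=1) >= 0, 1, -1)

for _ in range(50):                                  # training epochs
    for x, target in zip(X, y):
        o, x1 = outputs(x)
        if predict(x) != target:
            # nudge only the gates that currently vote against the target
            for i in range(n_perceptrons):
                if o[i] != target:
                    W[i] += eta * target * x1

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```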

  16. ERP evidence for conflict in contingency learning.

    Science.gov (United States)

    Whitehead, Peter S; Brewer, Gene A; Blais, Chris

    2017-07-01

    The proportion congruency effect refers to the observation that the magnitude of the Stroop effect increases as the proportion of congruent trials in a block increases. Contemporary work shows that proportion effects can be driven by both context and individual items, and are referred to as context-specific proportion congruency (CSPC) and item-specific proportion congruency (ISPC) effects, respectively. The conflict-modulated Hebbian learning account posits that these effects manifest from the same mechanism, while the parallel episodic processing model posits that the ISPC can occur by simple associative learning. Our prior work showed that the neural correlate of the CSPC is an N2 over frontocentral electrode sites approximately 300 ms after stimulus onset that predicts behavioral performance. There is strong consensus in the field that this N2 signal is associated with conflict detection in the medial frontal cortex. The experiment reported here assesses whether the same qualitative electrophysiological pattern of results holds for the ISPC. We find that the spatial topography of the N2 is similar but slightly delayed with a peak onset of approximately 300 ms after stimulus onset. We argue that this provides strong evidence that a single common mechanism, conflict-modulated Hebbian learning, drives both the ISPC and CSPC. © 2017 Society for Psychophysiological Research.

  17. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2016-09-14

    The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods (autoregressive integrated moving average), feedforward neural networks (time delay neural network and online sequential extreme learning machine), and recurrent neural networks (long short-term memory and echo-state networks), on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.

  18. Finding Influential Users in Social Media Using Association Rule Learning

    Directory of Open Access Journals (Sweden)

    Fredrik Erlandsson

    2016-04-01

    Full Text Available Influential users play an important role in online social networks since users tend to have an impact on one another. Therefore, the proposed work analyzes users and their behavior in order to identify influential users and predict user participation. Normally, the success of a social media site is dependent on the activity level of the participating users. For both online social networking sites and individual users, it is of interest to find out if a topic will be interesting or not. In this article, we propose association rule learning to detect relationships between users. In order to verify the findings, several experiments were executed based on social network analysis, in which the most influential users identified from association rule learning were compared to the results from Degree Centrality and Page Rank Centrality. The results clearly indicate that it is possible to identify the most influential users using association rule learning. In addition, the results also indicate a lower execution time compared to state-of-the-art methods.
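
    The core mining step can be illustrated with a small sketch that extracts pairwise rules of the form "user A active in a post, therefore user B active" from toy post-level transactions and keeps those above hypothetical support and confidence thresholds:

```python
from itertools import permutations
from collections import Counter

# Sketch of mining simple pairwise association rules "user A active => user B active"
# from post-level transactions, and using them to flag potentially influential users.
# The transactions below are hypothetical; real data would come from crawled posts.

posts = [
    {"alice", "bob", "carol"},
    {"alice", "bob"},
    {"alice", "dave"},
    {"bob", "carol"},
    {"alice", "bob", "dave"},
]
MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.6

item_counts = Counter(u for post in posts for u in post)
pair_counts = Counter()
for post in posts:
    for a, b in permutations(sorted(post), 2):
        pair_counts[(a, b)] += 1

n = len(posts)
rules = []
for (a, b), c in pair_counts.items():
    support, confidence = c / n, c / item_counts[a]
    if support >= MIN_SUPPORT and confidence >= MIN_CONFIDENCE:
        rules.append((a, b, support, confidence))

# Users appearing most often on the left-hand side of strong rules.
influence = Counter(a for a, _, _, _ in rules)
print("rules:", rules)
print("most influential (by rule count):", influence.most_common())
```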

  19. Comparison of Seven Methods for Boolean Factor Analysis and Their Evaluation by Information Gain

    Czech Academy of Sciences Publication Activity Database

    Frolov, A.; Húsek, Dušan; Polyakov, P.Y.

    2016-01-01

    Roč. 27, č. 3 (2016), s. 538-550 ISSN 2162-237X R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:67985807 Keywords : associative memory * bars problem (BP) * Boolean factor analysis (BFA) * data mining * dimension reduction * Hebbian learning rule * information gain * likelihood maximization (LM) * neural network application * recurrent neural network * statistics Subject RIV: IN - Informatics, Computer Science Impact factor: 6.108, year: 2016

  20. Fuzzy OLAP association rules mining-based modular reinforcement learning approach for multiagent systems.

    Science.gov (United States)

    Kaya, Mehmet; Alhajj, Reda

    2005-04-01

    Multiagent systems and data mining have recently attracted considerable attention in the field of computing. Reinforcement learning is the most commonly used learning process for multiagent systems. However, it still has some drawbacks, including modeling other learning agents present in the domain as part of the state of the environment, and some states are experienced much less than others, or some state-action pairs are never visited during the learning phase. Further, before completing the learning process, an agent cannot exhibit a certain behavior in some states that may be experienced sufficiently. In this study, we propose a novel multiagent learning approach to handle these problems. Our approach is based on utilizing the mining process for modular cooperative learning systems. It incorporates fuzziness and online analytical processing (OLAP) based mining to effectively process the information reported by agents. First, we describe a fuzzy data cube OLAP architecture which facilitates effective storage and processing of the state information reported by agents. This way, the action of the other agent, even one not in the visual environment of the agent under consideration, can simply be predicted by extracting online association rules, a well-known data mining technique, from the constructed data cube. Second, we present a new action selection model, which is also based on association rules mining. Third, we generalize insufficiently experienced states by mining multilevel association rules from the proposed fuzzy data cube. Experimental results obtained on two different versions of a well-known pursuit domain show the robustness and effectiveness of the proposed fuzzy OLAP mining based modular learning approach. Finally, we tested the scalability of the approach presented in this paper and compared it with our previous work on modular-fuzzy Q-learning and ordinary Q-learning.

  1. Genetic learning in rule-based and neural systems

    Science.gov (United States)

    Smith, Robert E.

    1993-01-01

    The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GA's) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GA's have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GA's. GA's are then related to a class of rule-based machine learning systems called learning classifier systems (LCS's). An LCS implements a low-level production-system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GA's in ecological search problems that arise in neural and fuzzy systems.

  2. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition.

    Directory of Open Access Journals (Sweden)

    Johannes Bill

    Full Text Available During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.

  3. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition

    Science.gov (United States)

    Bill, Johannes; Buesing, Lars; Habenschuss, Stefan; Nessler, Bernhard; Maass, Wolfgang; Legenstein, Robert

    2015-01-01

    During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input. PMID:26284370

  4. Learning Dispatching Rules for Scheduling: A Synergistic View Comprising Decision Trees, Tabu Search and Simulation

    Directory of Open Access Journals (Sweden)

    Atif Shahzad

    2016-02-01

    Full Text Available A promising approach for effective shop scheduling that synergizes the benefits of combinatorial optimization, supervised learning and discrete-event simulation is presented. Though dispatching rules are widely used by shop scheduling practitioners, only rules of ordinary performance are known; hence, dynamic generation of dispatching rules is desired to make them more effective under changing shop conditions. Meta-heuristics are able to perform quite well and carry more knowledge of the problem domain, however, at the cost of prohibitive computational effort in real time. The primary purpose of this research lies in an offline extraction of this domain knowledge using decision trees to generate simple if-then rules that subsequently act as dispatching rules for scheduling in an online manner. We used a similarity index to identify parametric and structural similarity in problem instances, in order to implicitly support the learning algorithm for effective rule generation, and a quality index for relative ranking of the dispatching decisions. Maximum lateness is used as the scheduling objective in a job shop scheduling environment.

  5. A Machine Learning Approach to Discover Rules for Expressive Performance Actions in Jazz Guitar Music

    Science.gov (United States)

    Giraldo, Sergio I.; Ramirez, Rafael

    2016-01-01

    Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces performed by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator for generality of the ornamentation rules

  6. A Machine Learning Approach to Discover Rules for Expressive Performance Actions in Jazz Guitar Music.

    Science.gov (United States)

    Giraldo, Sergio I; Ramirez, Rafael

    2016-01-01

    Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces performed by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator for generality of the ornamentation rules.

  7. Transcranial infrared laser stimulation improves rule-based, but not information-integration, category learning in humans.

    Science.gov (United States)

    Blanco, Nathaniel J; Saucedo, Celeste L; Gonzalez-Lima, F

    2017-03-01

    This is the first randomized, controlled study comparing the cognitive effects of transcranial laser stimulation on category learning tasks. Transcranial infrared laser stimulation is a new non-invasive form of brain stimulation that shows promise for wide-ranging experimental and neuropsychological applications. It involves using infrared laser to enhance cerebral oxygenation and energy metabolism through upregulation of the respiratory enzyme cytochrome oxidase, the primary infrared photon acceptor in cells. Previous research found that transcranial infrared laser stimulation aimed at the prefrontal cortex can improve sustained attention, short-term memory, and executive function. In this study, we directly investigated the influence of transcranial infrared laser stimulation on two neurobiologically dissociable systems of category learning: a prefrontal cortex mediated reflective system that learns categories using explicit rules, and a striatally mediated reflexive learning system that forms gradual stimulus-response associations. Participants (n=118) received either active infrared laser to the lateral prefrontal cortex or sham (placebo) stimulation, and then learned one of two category structures-a rule-based structure optimally learned by the reflective system, or an information-integration structure optimally learned by the reflexive system. We found that prefrontal rule-based learning was substantially improved following transcranial infrared laser stimulation as compared to placebo (treatment X block interaction: F(1, 298)=5.117, p=0.024), while information-integration learning did not show significant group differences (treatment X block interaction: F(1, 288)=1.633, p=0.202). These results highlight the exciting potential of transcranial infrared laser stimulation for cognitive enhancement and provide insight into the neurobiological underpinnings of category learning. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. The efficiency of the RULES-4 classification learning algorithm in predicting the density of agents

    Directory of Open Access Journals (Sweden)

    Ziad Salem

    2014-12-01

    Full Text Available Learning is the act of obtaining new or modifying existing knowledge, behaviours, skills or preferences. The ability to learn is found in humans, other organisms and some machines. Learning is always based on some sort of observations or data such as examples, direct experience or instruction. This paper presents a classification algorithm to learn the density of agents in an arena based on the measurements of six proximity sensors of combined actuator-sensor units (CASUs). Rules are presented that were induced by the learning algorithm, which was trained with data sets based on the CASU sensor data streams collected during a number of experiments with “Bristlebots” (agents) in the arena (environment). It was found that a set of rules generated by the learning algorithm is able to predict the number of bristlebots in the arena based on the CASU sensor readings with satisfactory accuracy.

  9. New BFA Method Based on Attractor Neural Network and Likelihood Maximization

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.; Snášel, V.

    2014-01-01

    Roč. 132, 20 May (2014), s. 14-29 ISSN 0925-2312 Grant - others:GA MŠk(CZ) ED1.1.00/02.0070; GA MŠk(CZ) EE.2.3.20.0073 Program:ED Institutional support: RVO:67985807 Keywords : recurrent neural network * associative memory * Hebbian learning rule * neural network application * data mining * statistics * Boolean factor analysis * information gain * dimension reduction * likelihood-maximization * bars problem Subject RIV: IN - Informatics, Computer Science Impact factor: 2.083, year: 2014

  10. Mixing Languages during Learning? Testing the One Subject—One Language Rule

    Science.gov (United States)

    2015-01-01

    In bilingual communities, mixing languages is avoided in formal schooling: even if two languages are used on a daily basis for teaching, only one language is used to teach each given academic subject. This tenet, known as the one subject-one language rule, avoids mixing languages in formal schooling because it may hinder learning. The aim of this study was to test the scientific ground of this assumption by investigating the consequences of acquiring new concepts using a method in which two languages are mixed as compared to a purely monolingual method. Native balanced bilingual speakers of Basque and Spanish—adults (Experiment 1) and children (Experiment 2)—learnt new concepts by associating two different features to novel objects. Half of the participants completed the learning process in a multilingual context (one feature was described in Basque and the other one in Spanish), while the other half completed the learning phase in a purely monolingual context (both features were described in Spanish). Different measures of learning were taken, as well as direct and indirect indicators of concept consolidation. We found no evidence in favor of the non-mixing method when comparing the results of two groups in either experiment, and thus failed to give scientific support for the educational premise of the one subject—one language rule. PMID:26107624

  11. Mixing Languages during Learning? Testing the One Subject-One Language Rule.

    Directory of Open Access Journals (Sweden)

    Eneko Antón

    Full Text Available In bilingual communities, mixing languages is avoided in formal schooling: even if two languages are used on a daily basis for teaching, only one language is used to teach each given academic subject. This tenet, known as the one subject-one language rule, avoids mixing languages in formal schooling because it may hinder learning. The aim of this study was to test the scientific ground of this assumption by investigating the consequences of acquiring new concepts using a method in which two languages are mixed as compared to a purely monolingual method. Native balanced bilingual speakers of Basque and Spanish (adults in Experiment 1 and children in Experiment 2) learnt new concepts by associating two different features to novel objects. Half of the participants completed the learning process in a multilingual context (one feature was described in Basque and the other one in Spanish), while the other half completed the learning phase in a purely monolingual context (both features were described in Spanish). Different measures of learning were taken, as well as direct and indirect indicators of concept consolidation. We found no evidence in favor of the non-mixing method when comparing the results of two groups in either experiment, and thus failed to give scientific support for the educational premise of the one subject-one language rule.

  12. A Machine Learning Approach to Discover Rules for Expressive Performance Actions in Jazz Guitar Music

    Directory of Open Access Journals (Sweden)

    Sergio Ivan Giraldo

    2016-12-01

    Full Text Available Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules’ performer specificity/generality is assessed by applying the induced rules to performances of the same pieces performed by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator for generality of the

  13. The Aptitude-Treatment Interaction Effects on the Learning of Grammar Rules

    Science.gov (United States)

    Hwu, Fenfang; Sun, Shuyan

    2012-01-01

    The present study investigates the interaction between two types of explicit instructional approaches, deduction and explicit-induction, and the level of foreign language aptitude in the learning of grammar rules. Results indicate that on the whole the two equally explicit instructional approaches did not differentially affect learning…

  14. DCS-Neural-Network Program for Aircraft Control and Testing

    Science.gov (United States)

    Jorgensen, Charles C.

    2006-01-01

    A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.

  15. Effects of the Memorization of Rule Statements on Performance, Retention, and Transfer in a Computer-Based Learning Task.

    Science.gov (United States)

    Towle, Nelson J.

    Research sought to determine whether memorization of rule statements before, during or after instruction in rule application skills would facilitate the acquisition and/or retention of rule-governed behavior as compared to no-rule statement memorization. A computer-assisted instructional (CAI) program required high school students to learn to a…

  16. A SEMI-AUTOMATIC RULE SET BUILDING METHOD FOR URBAN LAND COVER CLASSIFICATION BASED ON MACHINE LEARNING AND HUMAN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    H. Y. Gu

    2017-09-01

    Full Text Available A classification rule set, which refers to features and decision rules, is important for land cover classification. The selection of features and decisions is based on an iterative trial-and-error approach that is often utilized in GEOBIA; however, it is time-consuming and has poor versatility. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets effectively, overcoming the iterative trial-and-error approach. Human knowledge is used to address the shortcoming of existing machine learning methods, namely insufficient use of prior knowledge, and to improve the versatility of rule sets. A two-step workflow is introduced: first, an initial rule set is built based on Random Forest and CART decision trees. Second, the initial rule set is analyzed and validated based on human knowledge, where we use a statistical confidence interval to determine its thresholds. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for land cover classification semi-automatically, and that there are static features for different land cover classes.
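
    The machine-learning half of such a workflow can be sketched with a CART tree whose induced thresholds are printed as readable rules for an analyst to review; the feature names and values below are hypothetical stand-ins for features derived from TOP and DSM data.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Sketch of the "machine learning proposes, human knowledge validates" workflow:
# a CART tree is fit on toy object features (hypothetical NDVI / height values),
# and its induced thresholds are printed as readable rules that an analyst could
# then adjust or confirm against prior knowledge.

X = [
    [0.8, 12.0],   # [ndvi, height_m]
    [0.7, 0.4],
    [0.1, 15.0],
    [0.05, 0.2],
    [0.75, 10.0],
    [0.15, 18.0],
]
y = ["tree", "grass", "building", "road", "tree", "building"]

cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(cart, feature_names=["ndvi", "height_m"]))
```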

  17. Optimal monetary policy rules: the problem of stability under heterogeneous learning

    Czech Academy of Sciences Publication Activity Database

    Bogomolova, Anna; Kolyuzhnov, Dmitri

    -, č. 379 (2008), s. 1-34 ISSN 1211-3298 R&D Projects: GA MŠk LC542 Institutional research plan: CEZ:AV0Z70850503 Keywords : monetary policy rules * New Keynesian model * adaptive learning Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp379.pdf

  18. Lessons learned from the Maintenance Rule implementation at Northeast Utilities operating plants

    International Nuclear Information System (INIS)

    Hastings, K.B.; Khalil, Y.F.; Johnson, W.

    1996-01-01

    The Maintenance Rule, as described in 10CFR50.65, requires licensees of all operating nuclear power plants to monitor the performance of structures, systems, and components (SSCs) against licensee-established performance criteria. The industry, with the assistance of the Nuclear Energy Institute (NEI), developed a guideline, which includes all parts of the Maintenance Rule, to establish these performance criteria while incorporating the safety and reliability of the operating plants. The NUMARC 93-01 Guideline introduced the term "Risk Significant" to categorize subsets of the SSCs which would require increased focus, from a Maintenance Rule perspective, in setting their performance criteria. Northeast Utilities Company (NU) operates five nuclear plants: three at Millstone Station in Waterford, Connecticut; the Connecticut Yankee plant in Haddam Neck, Connecticut; and the Seabrook Station in Seabrook, New Hampshire. NU started implementing the Maintenance Rule program at its five operating plants in early 1994, and has identified a population of risk significant SSCs at each plant. Recently, Northeast Utilities' Maintenance Rule Team re-examined the initial risk significant determinations to further refine these populations, and to establish consistencies among its operating units. As a result of the re-examination process, a number of inconsistencies and areas for improvement have been identified. The lessons learned provide valuable insights to consider in the future as one implements more risk-based initiatives such as Graded QA and Risk-Based ISI and IST. This paper discusses the risk significance criteria, how Northeast Utilities utilized the NUMARC 93-01 Guideline to determine the risk significant SSCs for its operating plants, and lessons learned. The results provided here do not include the Seabrook Station.

  19. Large developing receptive fields using a distributed and locally reprogrammable address-event receiver.

    Science.gov (United States)

    Bamford, Simeon A; Murray, Alan F; Willshaw, David J

    2010-02-01

    A distributed and locally reprogrammable address-event receiver has been designed, in which incoming address-events are monitored simultaneously by all synapses, allowing for arbitrarily large axonal fan-out without reducing channel capacity. Synapses can change the address of their presynaptic neuron, allowing the distributed implementation of a biologically realistic learning rule, with both synapse formation and elimination (synaptic rewiring). Probabilistic synapse formation leads to topographic map development, made possible by a cross-chip current-mode calculation of Euclidean distance. As well as synaptic plasticity in rewiring, synapses change weights using a competitive Hebbian learning rule (spike-timing-dependent plasticity). The weight plasticity allows receptive fields to be modified based on spatio-temporal correlations in the inputs, and the rewiring plasticity allows these modifications to become embedded in the network topology.

  20. Effects of neonatal inferior prefrontal and medial temporal lesions on learning the rule for delayed nonmatching-to-sample.

    Science.gov (United States)

    Málková, L; Bachevalier, J; Webster, M; Mishkin, M

    2000-01-01

    The ability of rhesus monkeys to master the rule for delayed nonmatching-to-sample (DNMS) has a protracted ontogenetic development, reaching adult levels of proficiency around 4 to 5 years of age (Bachevalier, 1990). To test the possibility that this slow development could be due, at least in part, to immaturity of the prefrontal component of a temporo-prefrontal circuit important for DNMS rule learning (Kowalska, Bachevalier, & Mishkin, 1991; Weinstein, Saunders, & Mishkin, 1988), monkeys with neonatal lesions of the inferior prefrontal convexity were compared on DNMS with both normal controls and animals given neonatal lesions of the medial temporal lobe. Consistent with our previous results (Bachevalier & Mishkin, 1994; Málková, Mishkin, & Bachevalier, 1995), the neonatal medial temporal lesions led to marked impairment in rule learning (as well as in recognition memory with long delays and list lengths) at both 3 months and 2 years of age. By contrast, the neonatal inferior convexity lesions yielded no impairment in rule-learning at 3 months and only a mild impairment at 2 years, a finding that also contrasts sharply with the marked effects of the same lesion made in adulthood. This pattern of sparing closely resembles the one found earlier after neonatal lesions to the cortical visual area TE (Bachevalier & Mishkin, 1994; Málková et al., 1995). The functional sparing at 3 months probably reflects the fact that the temporo-prefrontal circuit is nonfunctional at this early age, resulting in a total dependency on medial temporal contributions to rule learning. With further development, however, this circuit begins to provide a supplementary route for learning.

  1. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different forms of neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to develop models of the LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that an LSM with STDP+IP performs better than an LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the competition among different neurons being better reflected in the developed SNN model, as well as to relevant dynamic information being encoded and processed more effectively by its learning and self-organizing mechanism. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
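
    A schematic (and much simplified) combination of the two plasticity mechanisms is sketched below: an STDP-like trace rule adapts recurrent weights, while an intrinsic-plasticity rule adjusts per-neuron thresholds toward a target firing rate. The rate-based formulation and all constants are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Schematic interplay of STDP and intrinsic plasticity (IP) in a small recurrent
# network of binary threshold units. Illustrative simplification only.

rng = np.random.default_rng(1)
n, target_rate, eta_ip = 50, 0.1, 0.01
W = rng.uniform(0, 0.1, size=(n, n)); np.fill_diagonal(W, 0.0)
theta = np.full(n, 0.5)            # per-neuron firing thresholds (adapted by IP)
trace = np.zeros(n)                # low-pass filtered spike trains
rate_est = np.zeros(n)

for t in range(1000):
    drive = W @ trace + rng.uniform(0, 0.6, size=n)
    spikes = (drive > theta).astype(float)

    # STDP-like update from traces: pre-trace * post-spike potentiates,
    # post-trace * pre-spike depresses (clipped to keep weights bounded).
    W += 0.001 * (np.outer(spikes, trace) - np.outer(trace, spikes))
    np.clip(W, 0.0, 0.5, out=W); np.fill_diagonal(W, 0.0)

    # Intrinsic plasticity: raise the threshold of neurons firing above the
    # target rate, lower it for neurons firing below it.
    rate_est = 0.99 * rate_est + 0.01 * spikes
    theta += eta_ip * (rate_est - target_rate)

    trace = 0.9 * trace + spikes

print("mean firing-rate estimate:", round(float(rate_est.mean()), 3), "(target:", target_rate, ")")
```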

  2. Change detection and change monitoring of natural and man-made features in multispectral and hyperspectral satellite imagery

    Science.gov (United States)

    Moody, Daniela Irina

    2018-04-17

    An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. A Hebbian learning rule may be used to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of pixel patches over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
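
    In the spirit of the pipeline described above (but not the patented implementation), a minimal sketch might learn a dictionary over image patches, sparse-code the patches, and cluster the codes with k-means; random data stands in for multispectral satellite patches.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.cluster import KMeans

# Schematic CoSA-style pipeline: dictionary learning -> sparse codes -> k-means
# clustering into land cover categories. Random data replaces real imagery.

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 5 * 5 * 4))        # 500 patches, 5x5 pixels, 4 bands

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   transform_algorithm="lasso_lars",
                                   random_state=0)
codes = dico.fit_transform(patches)                # sparse representation of each patch

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(codes)
print("patches per cluster:", np.bincount(labels))
```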

  3. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Alireza Alemi

    2015-08-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the …
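
    The plasticity rule itself is specified precisely enough in the abstract to sketch in a few lines. The fragment below is a minimal illustration, with assumed threshold values, learning step, and afferent-current strength; it is not the authors' simulation code.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 200                          # binary excitatory neurons
        f = 0.2                          # coding level (fraction of active units per pattern)
        W = np.zeros((N, N))             # plastic recurrent weights

        theta_low, theta_mid, theta_high = 2.0, 5.0, 8.0   # the three thresholds
        delta = 0.05                     # weight change per update
        g_ext = 10.0                     # strength of the afferent "teaching" current

        def store(pattern):
            """Present one binary pattern and apply the three-threshold update."""
            h = W @ pattern + g_ext * pattern            # local fields with afferent drive
            for i in range(N):
                if theta_low < h[i] < theta_high:        # plasticity only between the extremes
                    sign = 1.0 if h[i] > theta_mid else -1.0
                    W[i] += sign * delta * pattern       # only synapses from active inputs change
                    W[i, i] = 0.0                        # no self-connection
            np.clip(W, 0.0, 1.0, out=W)

        patterns = (rng.random((20, N)) < f).astype(float)
        for p in patterns:
            store(p)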

  4. Monetary Policy Rules, Learning and Stability: a Survey of the Recent Literature (In French)

    OpenAIRE

    Martin ZUMPE (GREThA UMR CNRS 5113)

    2010-01-01

    This paper reviews the literature on econometric learning and its impact on the performance of monetary policy rules in the framework of the new canonical macroeconomic model. Rational expectations, which are a building block of the original model, can thus be replaced by expectations based on estimation algorithms. The permanent updating of these estimations can be interpreted as a learning process of the model's agents. This learning process induces additional dynamics into the model. The ...

  5. Rule-bases construction through self-learning for a table-based Sugeno-Takagi fuzzy logic control system

    Directory of Open Access Journals (Sweden)

    C. Boldisor

    2009-12-01

    A self-learning-based methodology for building the rule base of a fuzzy logic controller (FLC) is presented and verified, aiming to add intelligent characteristics to fuzzy logic control systems. The methodology is a simplified version of those presented in today's literature: some aspects are intentionally ignored since they rarely appear in control system engineering, and a SISO process is considered here. The fuzzy inference system obtained is of a table-based Sugeno-Takagi type. The system's desired performance is defined by a reference model, and rules are extracted from recorded data after the correct control actions have been learned. The presented algorithm is tested by constructing the rule base of a fuzzy controller for a DC drive application. The system's performance and the method's viability are analyzed.

  6. Bimodal stimulus timing-dependent plasticity in primary auditory cortex is altered after noise exposure with and without tinnitus.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2015-12-01

    Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. Copyright © 2015 the American Physiological Society.

  7. Learning "Rules" of Practice within the Context of the Practicum Triad: A Case Study of Learning to Teach

    Science.gov (United States)

    Chalies, Sebastien; Escalie, Guillaume; Stefano, Bertone; Clarke, Anthony

    2012-01-01

    This case study sought to determine the professional development circumstances in which a preservice teacher learned rules of practice (Wittgenstein, 1996) on practicum while interacting with a cooperating teacher and university supervisor. Borrowing from a theoretical conceptualization of teacher professional development based on the postulates…

  8. Criterial noise effects on rule-based category learning: the impact of delayed feedback.

    Science.gov (United States)

    Ell, Shawn W; Ing, A David; Maddox, W Todd

    2009-08-01

    Variability in the representation of the decision criterion is assumed in many category-learning models, yet few studies have directly examined its impact. On each trial, criterial noise should result in drift in the criterion and will negatively impact categorization accuracy, particularly in rule-based categorization tasks, where learning depends on the maintenance and manipulation of decision criteria. In three experiments, we tested this hypothesis and examined the impact of working memory on slowing the drift rate. In Experiment 1, we examined the effect of drift by inserting a 5-sec delay between the categorization response and the delivery of corrective feedback, and working memory demand was manipulated by varying the number of decision criteria to be learned. Delayed feedback adversely affected performance, but only when working memory demand was high. In Experiment 2, we built on a classic finding in the absolute identification literature and demonstrated that distributing the criteria across multiple dimensions decreases the impact of drift during the delay. In Experiment 3, we confirmed that the effect of drift during the delay is moderated by working memory. These results provide important insights into the interplay between criterial noise and working memory, as well as providing important constraints for models of rule-based category learning.

  9. Visual perceptual learning by operant conditioning training follows rules of contingency

    Science.gov (United States)

    Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo

    2015-01-01

    Visual perceptual learning (VPL) can occur as a result of a repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide Conditioning, such as stimulus-reward contingency (e.g. that stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL only occurred for positive contingencies, but not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by similar rules that guide the process of Conditioning. PMID:26028984

  10. Visual perceptual learning by operant conditioning training follows rules of contingency.

    Science.gov (United States)

    Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo

    2015-01-01

    Visual perceptual learning (VPL) can occur as a result of a repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide Conditioning, such as stimulus-reward contingency (e.g. that stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL only occurred for positive contingencies, but not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by similar rules that guide the process of Conditioning.

  11. A Learning Model for L/M Specificity in Ganglion Cells

    Science.gov (United States)

    Ahumada, Albert J.

    2016-01-01

    An unsupervised learning model for developing L/M-specific wiring at the ganglion cell level would support the research indicating that such specific wiring exists (Reid and Shapley, 2002). Removing the contributions to the surround from cells of the same cone type as the center improves the signal-to-noise ratio of the chromatic signals. The unsupervised learning model used is Hebbian associative learning, which strengthens the surround input connections according to the correlation of the output with the input. Since the surround units of the same cone type as the center are redundant with the center, their weights end up disappearing. This process can be thought of as a general mechanism for eliminating unnecessary cells in the nervous system.

  12. Functional requirements for reward-modulated spike-timing-dependent plasticity.

    Science.gov (United States)

    Frémaux, Nicolas; Sprekeler, Henning; Gerstner, Wulfram

    2010-10-06

    Recent experiments have shown that spike-timing-dependent plasticity is influenced by neuromodulation. We derive theoretical conditions for successful learning of reward-related behavior for a large class of learning rules where Hebbian synaptic plasticity is conditioned on a global modulatory factor signaling reward. We show that all learning rules in this class can be separated into a term that captures the covariance of neuronal firing and reward and a second term that represents the influence of unsupervised learning. The unsupervised term, which is, in general, detrimental for reward-based learning, can be suppressed if the neuromodulatory signal encodes the difference between the reward and the expected reward, but only if the expected reward is calculated for each task and stimulus separately. If several tasks are to be learned simultaneously, the nervous system needs an internal critic that is able to predict the expected reward for arbitrary stimuli. We show that, with a critic, reward-modulated spike-timing-dependent plasticity is capable of learning motor trajectories with a temporal resolution of tens of milliseconds. The relation to temporal difference learning, the relevance of block-based learning paradigms, and the limitations of learning with a critic are discussed.
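
    A minimal sketch of the learning-rule family analyzed above: pre/post coincidences accumulate in a per-synapse eligibility trace, and the weight update is gated by the reward prediction error (reward minus expected reward), with the expected reward tracked separately for each stimulus by a simple critic. The toy task, network size, and time constants are assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        n_in, n_out = 50, 1
        w = rng.random((n_out, n_in)) * 0.1
        critic = {}                              # expected reward per stimulus id

        tau_pre, tau_elig = 20.0, 200.0
        eta, eta_critic = 0.05, 0.1

        def trial(stimulus_id, rate, steps=100, dt=1.0):
            """One trial: build an eligibility trace, then apply the reward-gated update."""
            global w
            elig = np.zeros_like(w)
            pre_trace = np.zeros(n_in)
            post = np.zeros(n_out)
            for _ in range(steps):
                pre = (rng.random(n_in) < rate).astype(float)   # Poisson-like input spikes
                post = (w @ pre + 0.1 * rng.standard_normal(n_out) > 0.5).astype(float)
                pre_trace = pre_trace * (1.0 - dt / tau_pre) + pre
                elig += np.outer(post, pre_trace)               # Hebbian coincidence term
                elig *= 1.0 - dt / tau_elig                     # eligibility decay
            reward = float(post.sum() > 0)                      # toy task: respond to the stimulus
            r_hat = critic.get(stimulus_id, 0.0)
            w += eta * (reward - r_hat) * elig                  # covariance-like, reward-gated update
            critic[stimulus_id] = r_hat + eta_critic * (reward - r_hat)
            return reward

        # Usage: repeated trials on two hypothetical stimuli, each with its own critic entry.
        for _ in range(200):
            trial("A", rate=0.2)
            trial("B", rate=0.05)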

  13. Learning the rules of the rock-paper-scissors game: chimpanzees versus children.

    Science.gov (United States)

    Gao, Jie; Su, Yanjie; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2018-01-01

    The present study aimed to investigate whether chimpanzees (Pan troglodytes) could learn transverse patterning by being trained in the rules of the rock-paper-scissors game in which "paper" beats "rock," "rock" beats "scissors," and "scissors" beats "paper." Additionally, this study compared the learning processes between chimpanzees and children. Seven chimpanzees were tested using a computer-controlled task. They were trained to choose the stronger of two options according to the game rules. The chimpanzees first engaged in the paper-rock sessions until they reached the learning criterion. Subsequently, they engaged in the rock-scissors and scissors-paper sessions, before progressing to sessions with all three pairs mixed. Five of the seven chimpanzees completed training after a mean of 307 sessions, which indicates that they learned the circular pattern. The chimpanzees required more scissors-paper sessions (14.29 ± 6.89), the third learnt pair, than paper-rock (1.71 ± 0.18) and rock-scissors (3.14 ± 0.70) sessions, suggesting they had difficulty finalizing the circularity. The chimpanzees then received generalization tests using new stimuli, which they learned quickly. A similar procedure was performed with children (35-71 months, n = 38), who needed the same number of trials for all three pairs during single-pair sessions. Their accuracy during the mixed-pair sessions improved with age and was better than chance from 50 months of age, which indicates that the ability to solve the transverse patterning problem might develop at around 4 years of age. The present findings show that chimpanzees were able to learn the task but had difficulties with circularity, whereas children learned the task more easily and developed the relevant ability at approximately 4 years of age. Furthermore, the chimpanzees' performance during the mixed-pair sessions was similar to that of 4-year-old children during the corresponding stage of training.

  14. Moral empiricism and the bias for act-based rules.

    Science.gov (United States)

    Ayars, Alisabeth; Nichols, Shaun

    2017-10-01

    Previous studies on rule learning show a bias in favor of act-based rules, which prohibit intentionally producing an outcome but not merely allowing the outcome. Nichols, Kumar, Lopez, Ayars, and Chan (2016) found that exposure to a single sample violation in which an agent intentionally causes the outcome was sufficient for participants to infer that the rule was act-based. One explanation is that people have an innate bias to think rules are act-based. We suggest an alternative empiricist account: since most rules that people learn are act-based, people form an overhypothesis (Goodman, 1955) that rules are typically act-based. We report three studies that indicate that people can use information about violations to form overhypotheses about rules. In study 1, participants learned either three "consequence-based" rules that prohibited allowing an outcome or three "act-based" rules that prohibited producing the outcome; in a subsequent learning task, we found that participants who had learned three consequence-based rules were more likely to think that the new rule prohibited allowing an outcome. In study 2, we presented participants with either 1 consequence-based rule or 3 consequence-based rules, and we found that those exposed to 3 such rules were more likely to think that a new rule was also consequence-based. Thus, in both studies, it seems that learning 3 consequence-based rules generates an overhypothesis to expect new rules to be consequence-based. In a final study, we used a more subtle manipulation. We exposed participants to examples of act-based or accident-based (strict liability) laws and then had them learn a novel rule. We found that participants who were exposed to the accident-based laws were more likely to think a new rule was accident-based. The fact that participants' bias for act-based rules can be shaped by evidence from other rules supports the idea that the bias for act-based rules might be acquired as an overhypothesis from the …

  15. A Theory of How Columns in the Neocortex Enable Learning the Structure of the World

    Directory of Open Access Journals (Sweden)

    Jeff Hawkins

    2017-10-01

    Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections, suggesting interactions between columns. Similar patterns of connectivity exist in all regions, but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that, using Hebbian-like learning rules, small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed.

  16. Associative Learning in Invertebrates

    Science.gov (United States)

    Hawkins, Robert D.; Byrne, John H.

    2015-01-01

    This work reviews research on neural mechanisms of two types of associative learning in the marine mollusk Aplysia, classical conditioning of the gill- and siphon-withdrawal reflex and operant conditioning of feeding behavior. Basic classical conditioning is caused in part by activity-dependent facilitation at sensory neuron–motor neuron (SN–MN) synapses and involves a hybrid combination of activity-dependent presynaptic facilitation and Hebbian potentiation, which are coordinated by trans-synaptic signaling. Classical conditioning also shows several higher-order features, which might be explained by the known circuit connections in Aplysia. Operant conditioning is caused in part by a different type of mechanism, an intrinsic increase in excitability of an identified neuron in the central pattern generator (CPG) for feeding. However, for both classical and operant conditioning, adenylyl cyclase is a molecular site of convergence of the two signals that are associated. Learning in other invertebrate preparations also involves many of the same mechanisms, which may contribute to learning in vertebrates as well. PMID:25877219

  17. Towards a general theory of neural computation based on prediction by single neurons.

    Directory of Open Access Journals (Sweden)

    Christopher D Fiorillo

    Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of …

  18. Using Machine Learning Methods Jointly to Find Better Set of Rules in Data Mining

    Directory of Open Access Journals (Sweden)

    SUG Hyontai

    2017-01-01

    Rough set-based data mining algorithms are among the widely accepted machine learning technologies because of their strong mathematical background and their capability of finding optimal rules based on the given data sets alone, leaving no room for prejudiced views to be imposed on the data. But because the algorithms find rules very precisely, we may be confronted with the overfitting problem. On the other hand, association rule algorithms find rules of association, where the association resides between sets of items in a database. These algorithms find itemsets that occur more often than a given minimum support, so that by supplying the minimum support appropriately they can find the itemsets in reasonable time even for very large databases. In order to overcome the overfitting problem in rough set-based algorithms, we first find large itemsets and then select attributes that cover the large itemsets. By using only the selected attributes, we may find a better set of rules based on rough set theory. Results from experiments support our suggested method.

  19. Logic Learning Machine creates explicit and stable rules stratifying neuroblastoma patients.

    Science.gov (United States)

    Cangelosi, Davide; Blengio, Fabiola; Versteeg, Rogier; Eggert, Angelika; Garaventa, Alberto; Gambini, Claudio; Conte, Massimo; Eva, Alessandra; Muselli, Marco; Varesio, Luigi

    2013-01-01

    Neuroblastoma is the most common pediatric solid tumor. About fifty percent of high-risk patients die despite treatment, making the exploration of new and more effective strategies for improving stratification mandatory. Hypoxia is a condition of low oxygen tension, occurring in poorly vascularized areas of the tumor, that is associated with poor prognosis. We had previously defined a robust gene expression signature measuring the hypoxic component of neuroblastoma tumors (NB-hypo), which is a molecular risk factor. We wanted to develop a prognostic classifier of neuroblastoma patients' outcome blending existing knowledge on clinical and molecular risk factors with the prognostic NB-hypo signature. Furthermore, we were interested in classifiers outputting explicit rules that could be easily translated into the clinical setting. The Shadow Clustering (SC) technique, which leads to final models called Logic Learning Machine (LLM), exhibits good accuracy and promises to fulfill the aims of the work. We utilized this algorithm to classify NB patients on the basis of the following risk factors: age at diagnosis, INSS stage, MYCN amplification, and NB-hypo. The algorithm generated explicit classification rules in good agreement with existing clinical knowledge. Through an iterative procedure we identified and removed from the dataset those examples which caused instability in the rules. This workflow generated a stable classifier that is very accurate in predicting good- and poor-outcome patients. The good performance of the classifier was validated in an independent dataset. NB-hypo was an important component of the rules, with a strength similar to that of tumor staging. The novelty of our work is to identify stability, explicit rules, and the blending of molecular and clinical risk factors as the key features for generating classification rules for NB patients that can be conveyed to the clinic and used to design new therapies. We derived, through LLM, a set of four stable rules identifying a new …

  20. Logic Learning Machine creates explicit and stable rules stratifying neuroblastoma patients

    Science.gov (United States)

    2013-01-01

    Background Neuroblastoma is the most common pediatric solid tumor. About fifty percent of high-risk patients die despite treatment, making the exploration of new and more effective strategies for improving stratification mandatory. Hypoxia is a condition of low oxygen tension, occurring in poorly vascularized areas of the tumor, that is associated with poor prognosis. We had previously defined a robust gene expression signature measuring the hypoxic component of neuroblastoma tumors (NB-hypo), which is a molecular risk factor. We wanted to develop a prognostic classifier of neuroblastoma patients' outcome blending existing knowledge on clinical and molecular risk factors with the prognostic NB-hypo signature. Furthermore, we were interested in classifiers outputting explicit rules that could be easily translated into the clinical setting. Results The Shadow Clustering (SC) technique, which leads to final models called Logic Learning Machine (LLM), exhibits good accuracy and promises to fulfill the aims of the work. We utilized this algorithm to classify NB patients on the basis of the following risk factors: age at diagnosis, INSS stage, MYCN amplification, and NB-hypo. The algorithm generated explicit classification rules in good agreement with existing clinical knowledge. Through an iterative procedure we identified and removed from the dataset those examples which caused instability in the rules. This workflow generated a stable classifier that is very accurate in predicting good- and poor-outcome patients. The good performance of the classifier was validated in an independent dataset. NB-hypo was an important component of the rules, with a strength similar to that of tumor staging. Conclusions The novelty of our work is to identify stability, explicit rules, and the blending of molecular and clinical risk factors as the key features for generating classification rules for NB patients that can be conveyed to the clinic and used to design new therapies. We derived, through LLM, a set of four …

  1. Collaborative Working e-Learning Environments Supported by Rule-Based e-Tutor

    Directory of Open Access Journals (Sweden)

    Salaheddin Odeh

    2007-10-01

    Collaborative working environments for distance education aim at convenience and at adaptation to our technologically advanced societies. To achieve this revolutionary new way of learning, environments must allow the different participants to communicate and coordinate with each other in a productive manner. Productivity and efficiency are obtained through synchronized communication between the different coordinating partners, which means that multiple users can execute an experiment simultaneously. Within this process, coordination can be accomplished by voice communication and chat tools. In recent times, multi-user environments have been successfully applied in many applications such as air traffic control systems, team-oriented military systems, text chat tools, and multi-player games. Thus, understanding the ideas and techniques behind these systems can be of great significance for contributing newer ideas to collaborative working e-learning environments. However, many problems still exist in distance learning and tele-education, such as students not finding proper assistance while performing a remote experiment; the students then become overwhelmed and the experiment fails. In this paper, we discuss a solution that enables students to obtain automated help from either a human tutor or a rule-based e-tutor (an embedded rule-based system) for the purpose of student support in complex remote experimental environments. The technical implementation of the system can be realized using Microsoft .NET, which offers a complete integrated development environment (IDE) with a wide collection of products and technologies. Once the system is developed, groups of students are independently able to coordinate and execute the experiment at any time and from any place, organizing the work among themselves.

  2. Using rule-based machine learning for candidate disease gene prioritization and sample classification of cancer gene expression data.

    Science.gov (United States)

    Glaab, Enrico; Bacardit, Jaume; Garibaldi, Jonathan M; Krasnogor, Natalio

    2012-01-01

    Microarray data analysis has been shown to provide an effective tool for studying cancer and genetic diseases. Although classical machine learning techniques have successfully been applied to find informative genes and to predict class labels for new samples, common restrictions of microarray analysis such as small sample sizes, a large attribute space and high noise levels still limit its scientific and clinical applications. Increasing the interpretability of prediction models while retaining a high accuracy would help to exploit the information content in microarray data more effectively. For this purpose, we evaluate our rule-based evolutionary machine learning systems, BioHEL and GAssist, on three public microarray cancer datasets, obtaining simple rule-based models for sample classification. A comparison with other benchmark microarray sample classifiers based on three diverse feature selection algorithms suggests that these evolutionary learning techniques can compete with state-of-the-art methods like support vector machines. The obtained models reach accuracies above 90% in two-level external cross-validation, with the added value of facilitating interpretation by using only combinations of simple if-then-else rules. As a further benefit, a literature mining analysis reveals that prioritizations of informative genes extracted from BioHEL's classification rule sets can outperform gene rankings obtained from a conventional ensemble feature selection in terms of the pointwise mutual information between relevant disease terms and the standardized names of top-ranked genes.

  3. An adaptable Boolean net trainable to control a computing robot

    International Nuclear Information System (INIS)

    Lauria, F. E.; Prevete, R.; Milo, M.; Visco, S.

    1999-01-01

    We discuss a method to implement a Hebbian rule in a Boolean neural network so as to obtain an adaptable universal control system. We start by presenting both the Boolean neural net and the Hebbian rule we have considered. Then we discuss, first, the problems arising when the latter is naively implemented in a Boolean neural net and, second, the method enabling us to overcome them and the resulting adaptable Boolean neural net paradigm. Next, we present the adaptable Boolean neural net as an intelligent control system, actually controlling a writing robot, and discuss how to train it in the execution of the elementary arithmetic operations on operands represented by numerals with an arbitrary number of digits

  4. Applying cognitive developmental psychology to middle school physics learning: The rule assessment method

    Science.gov (United States)

    Hallinen, Nicole R.; Chi, Min; Chin, Doris B.; Prempeh, Joe; Blair, Kristen P.; Schwartz, Daniel L.

    2013-01-01

    Cognitive developmental psychology often describes children's growing qualitative understanding of the physical world. Physics educators may be able to use the relevant methods to advantage for characterizing changes in students' qualitative reasoning. Siegler developed the "rule assessment" method for characterizing levels of qualitative understanding for two factor situations (e.g., volume and mass for density). The method assigns children to rule levels that correspond to the degree they notice and coordinate the two factors. Here, we provide a brief tutorial plus a demonstration of how we have used this method to evaluate instructional outcomes with middle-school students who learned about torque, projectile motion, and collisions using different instructional methods with simulations.

  5. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Directory of Open Access Journals (Sweden)

    Ying-Lun Chen

    2015-08-01

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.
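
    The feature-extraction step above relies on the generalized Hebbian algorithm (Sanger's rule), whose software form is well known. The sketch below shows that rule converging to the leading principal components of aligned spike waveforms; waveform length, number of components, and learning rate are assumptions, and none of the hardware details (shared core, per-channel buffers, clock gating) are modeled.

        import numpy as np

        rng = np.random.default_rng(5)

        def gha(spikes, n_components=3, eta=0.001, n_epochs=20):
            """Sanger's rule: weight rows converge to the leading principal components.

            spikes: (n_spikes, n_samples) array of aligned spike waveforms.
            """
            W = rng.standard_normal((n_components, spikes.shape[1])) * 0.01
            for _ in range(n_epochs):
                for x in spikes:
                    y = W @ x
                    # Hebbian term minus back-projections of earlier components
                    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
            return W

        # Usage: project detected spikes onto the learned components before clustering.
        spikes = rng.standard_normal((500, 32))          # stand-in for detected waveforms
        components = gha(spikes)
        features = spikes @ components.T                 # per-spike feature vectors for sorting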

  6. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm.

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-08-13

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.

  7. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-01-01

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction. PMID:26287193

  8. AHaH Computing–From Metastable Switches to Attractors to Machine Learning

    Science.gov (United States)

    Nugent, Michael Alexander; Molter, Timothy Wesley

    2014-01-01

    Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures–all key capabilities of biological nervous systems and modern machine learning algorithms with real world application. PMID:24520315

  9. AHaH computing-from metastable switches to attractors to machine learning.

    Directory of Open Access Journals (Sweden)

    Michael Alexander Nugent

    Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures: all key capabilities of biological nervous systems and modern machine learning algorithms with real world application.

  10. Using rule-based machine learning for candidate disease gene prioritization and sample classification of cancer gene expression data.

    Directory of Open Access Journals (Sweden)

    Enrico Glaab

    Microarray data analysis has been shown to provide an effective tool for studying cancer and genetic diseases. Although classical machine learning techniques have successfully been applied to find informative genes and to predict class labels for new samples, common restrictions of microarray analysis such as small sample sizes, a large attribute space and high noise levels still limit its scientific and clinical applications. Increasing the interpretability of prediction models while retaining a high accuracy would help to exploit the information content in microarray data more effectively. For this purpose, we evaluate our rule-based evolutionary machine learning systems, BioHEL and GAssist, on three public microarray cancer datasets, obtaining simple rule-based models for sample classification. A comparison with other benchmark microarray sample classifiers based on three diverse feature selection algorithms suggests that these evolutionary learning techniques can compete with state-of-the-art methods like support vector machines. The obtained models reach accuracies above 90% in two-level external cross-validation, with the added value of facilitating interpretation by using only combinations of simple if-then-else rules. As a further benefit, a literature mining analysis reveals that prioritizations of informative genes extracted from BioHEL's classification rule sets can outperform gene rankings obtained from a conventional ensemble feature selection in terms of the pointwise mutual information between relevant disease terms and the standardized names of top-ranked genes.

  11. Cerebellar supervised learning revisited: biophysical modeling and degrees-of-freedom control.

    Science.gov (United States)

    Kawato, Mitsuo; Kuroda, Shinya; Schweighofer, Nicolas

    2011-10-01

    Biophysical models of spike-timing-dependent plasticity have explored dynamics with a molecular basis for such computational concepts as coincidence detection, synaptic eligibility traces, and Hebbian learning. Overall, they support different learning algorithms in different brain areas, especially supervised learning in the cerebellum. Because a single spine is physically very small, chemical reactions at it are essentially stochastic, and thus a sensitivity-longevity dilemma exists in synaptic memory. Here, a cascade of excitable and bistable dynamics is proposed to overcome this difficulty. Learning algorithms in different brain regions all confront difficult generalization problems. To resolve this issue, the degrees of freedom can be controlled by changing the synchronicity of neural firing. In particular, for cerebellar supervised learning, the triangular closed-loop circuit consisting of Purkinje cells, the inferior olive nucleus, and the cerebellar nucleus is proposed as a circuit to optimally control synchronous firing and the degrees of freedom in learning. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Beyond Motivation - History as a method for the learning of meta-discursive rules in mathematics

    DEFF Research Database (Denmark)

    Kjeldsen, Tinne Hoff; Blomhøj, Morten

    2012-01-01

    In this paper, we argue that history might have a profound role to play in the learning of mathematics by providing a self-evident (if not indispensable) strategy for revealing meta-discursive rules in mathematics and turning them into explicit objects of reflection for students. Our argument is based...

  13. Critical dynamics in associative memory networks

    Directory of Open Access Journals (Sweden)

    Maximilian eUhlig

    2013-07-01

    Critical behavior in neural networks is characterized by scale-free avalanche size distributions and can be explained by self-regulatory mechanisms. Theoretical and experimental evidence indicates that information storage capacity reaches its maximum in the critical regime. We study the effect of structural connectivity formed by Hebbian learning on the criticality of network dynamics. A network endowed with Hebbian learning alone does not allow for simultaneous information storage and criticality. However, the critical regime can be stabilized by short-term synaptic dynamics in the form of synaptic depression and facilitation or, alternatively, by homeostatic adaptation of the synaptic weights. We show that a heterogeneous distribution of maximal synaptic strengths does not preclude criticality if the Hebbian learning is alternated with periods of critical dynamics recovery. We discuss the relevance of these findings for the flexibility of memory in aging and with respect to the recent theory of synaptic plasticity.
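
    A toy sketch, under assumptions, of the interplay the record describes: Hebbian strengthening of recurrent weights combined with short-term synaptic depression that transiently weakens heavily used synapses, pulling the effective branching ratio back toward one. This is not the authors' model; the probabilistic activation scheme and all constants are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)
        N = 200
        W = rng.random((N, N)) * (2.0 / N)   # mean branching ratio starts near one
        res = np.ones((N, N))                # synaptic resources (1 = fully recovered)
        u, tau_rec, eta = 0.2, 30.0, 1e-4    # depression usage, recovery time, Hebbian rate

        state = (rng.random(N) < 0.05).astype(float)
        sizes = []                           # avalanche-like activity sizes
        for t in range(5000):
            # probabilistic activation through depressed synapses
            p_silent = np.prod(1.0 - (W * res) * state[np.newaxis, :], axis=1)
            new_state = (rng.random(N) < 1.0 - p_silent).astype(float)

            W += eta * np.outer(new_state, state)      # Hebbian strengthening
            res -= u * res * state[np.newaxis, :]      # depression of used synapses
            res += (1.0 - res) / tau_rec               # slow recovery

            if new_state.sum() == 0:                   # re-seed when activity dies out
                new_state[rng.integers(N)] = 1.0
            sizes.append(new_state.sum())
            state = new_state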

  14. CACNA1C gene regulates behavioral strategies in operant rule learning.

    Science.gov (United States)

    Koppe, Georgia; Mallien, Anne Stephanie; Berger, Stefan; Bartsch, Dusan; Gass, Peter; Vollmayr, Barbara; Durstewitz, Daniel

    2017-06-01

    Behavioral experiments are usually designed to tap into a specific cognitive function, but animals may solve a given task through a variety of different and individual behavioral strategies, some of them not foreseen by the experimenter. Animal learning may therefore be seen more as the process of selecting among, and adapting, potential behavioral policies, rather than mere strengthening of associative links. Calcium influx through high-voltage-gated Ca2+ channels is central to synaptic plasticity, and altered expression of Cav1.2 channels and the CACNA1C gene have been associated with severe learning deficits and psychiatric disorders. Given this, we were interested in how specifically a selective functional ablation of the Cacna1c gene would modulate the learning process. Using a detailed, individual-level analysis of learning on an operant cue discrimination task in terms of behavioral strategies, combined with Bayesian selection among computational models estimated from the empirical data, we show that a Cacna1c knockout does not impair learning in general but has a much more specific effect: the majority of Cacna1c knockout mice still managed to increase reward feedback across trials but did so by adapting an outcome-based strategy, while the majority of matched controls adopted the experimentally intended cue-association rule. Our results thus point to a quite specific role of a single gene in learning and highlight that much more mechanistic insight could be gained by examining response patterns in terms of a larger repertoire of potential behavioral strategies. The results may also have clinical implications for treating psychiatric disorders.

  15. Rule-Governed and Contingency-Shaped Behavior of Learning-Disabled, Hyperactive, and Nonselected Elementary School Children.

    Science.gov (United States)

    Metzger, Mary Ann; Freund, Lisa

    The major purpose of this study was to describe the rule-governed and contingency-shaped behavior of learning-disabled, hyperactive, and nonselected elementary school children working on a computer-managed task. Hypotheses tested were (1) that the children would differ in the degree to which either instructions or external contingencies controlled…

  16. Analysis of Rules for Islamic Inheritance Law in Indonesia Using Hybrid Rule Based Learning

    Science.gov (United States)

    Khosyi'ah, S.; Irfan, M.; Maylawati, D. S.; Mukhlas, O. S.

    2018-01-01

    Along with the development of human civilization in Indonesia, changes and reforms of Islamic inheritance law, so that it conforms to local conditions and culture, cannot be denied. The distribution of inheritance in Indonesia can be done automatically by storing the rules of Islamic inheritance law in an expert system. In this study, we analyze the knowledge of experts in Islamic inheritance in Indonesia and represent it in the form of rules using rule-based Forward Chaining (FC) and Davis-Putnam-Logemann-Loveland (DPLL) algorithms. By hybridizing the FC and DPLL algorithms, the rules of Islamic inheritance law in Indonesia are clearly defined and measured. The rules were conceptually validated by experts in Islamic law and informatics. The results revealed that, in general, all rules were ready for use in an expert system.
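
    A minimal sketch of the forward-chaining half of the hybrid approach described above. The rule base shown is a simplified, hypothetical illustration (it does not reproduce the rules of the study), and the DPLL component is omitted.

        def forward_chain(facts, rules):
            """Repeatedly fire rules whose conditions hold until no new facts appear."""
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for conditions, conclusion in rules:
                    if set(conditions) <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        # Hypothetical, simplified inheritance rules: (conditions, conclusion).
        rules = [
            (("deceased_is_male", "has_children"), "widow_share_one_eighth"),
            (("deceased_is_male", "no_children"), "widow_share_one_quarter"),
            (("has_children",), "children_share_residue"),
        ]

        print(forward_chain({"deceased_is_male", "has_children"}, rules))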

  17. MOLECULAR MECHANISMS OF FEAR LEARNING AND MEMORY

    Science.gov (United States)

    Johansen, Joshua P.; Cain, Christopher K.; Ostroff, Linnaea E.; LeDoux, Joseph E.

    2011-01-01

    Pavlovian fear conditioning is a useful behavioral paradigm for exploring the molecular mechanisms of learning and memory because a well-defined response to a specific environmental stimulus is produced through associative learning processes. Synaptic plasticity in the lateral nucleus of the amygdala (LA) underlies this form of associative learning. Here we summarize the molecular mechanisms that contribute to this synaptic plasticity in the context of auditory fear conditioning, the form of fear conditioning best understood at the molecular level. We discuss the neurotransmitter systems and signaling cascades that contribute to three phases of auditory fear conditioning: acquisition, consolidation, and reconsolidation. These studies suggest that multiple intracellular signaling pathways, including those triggered by activation of Hebbian processes and neuromodulatory receptors, interact to produce neural plasticity in the LA and behavioral fear conditioning. Together, this research illustrates the power of fear conditioning as a model system for characterizing the mechanisms of learning and memory in mammals, and potentially for understanding fear related disorders, such as PTSD and phobias. PMID:22036561

  18. Molecular mechanisms of fear learning and memory.

    Science.gov (United States)

    Johansen, Joshua P; Cain, Christopher K; Ostroff, Linnaea E; LeDoux, Joseph E

    2011-10-28

    Pavlovian fear conditioning is a particularly useful behavioral paradigm for exploring the molecular mechanisms of learning and memory because a well-defined response to a specific environmental stimulus is produced through associative learning processes. Synaptic plasticity in the lateral nucleus of the amygdala (LA) underlies this form of associative learning. Here, we summarize the molecular mechanisms that contribute to this synaptic plasticity in the context of auditory fear conditioning, the form of fear conditioning best understood at the molecular level. We discuss the neurotransmitter systems and signaling cascades that contribute to three phases of auditory fear conditioning: acquisition, consolidation, and reconsolidation. These studies suggest that multiple intracellular signaling pathways, including those triggered by activation of Hebbian processes and neuromodulatory receptors, interact to produce neural plasticity in the LA and behavioral fear conditioning. Collectively, this body of research illustrates the power of fear conditioning as a model system for characterizing the mechanisms of learning and memory in mammals and potentially for understanding fear-related disorders, such as PTSD and phobias. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Emergence of Functional Specificity in Balanced Networks with Synaptic Plasticity.

    Directory of Open Access Journals (Sweden)

    Sadra Sadeh

    2015-06-01

    In rodent visual cortex, synaptic connections between orientation-selective neurons are unspecific at the time of eye opening, and become to some degree functionally specific only later during development. An explanation for this two-stage process was proposed in terms of Hebbian plasticity based on visual experience that would eventually enhance connections between neurons with similar response features. For this to work, however, two conditions must be satisfied: First, orientation-selective neuronal responses must exist before specific recurrent synaptic connections can be established. Second, Hebbian learning must be compatible with the recurrent network dynamics contributing to orientation selectivity, and the resulting specific connectivity must remain stable under unspecific background activity. Previous studies have mainly focused on very simple models, where the receptive fields of neurons were essentially determined by feedforward mechanisms, and where the recurrent network was small, lacking the complex recurrent dynamics of large-scale networks of excitatory and inhibitory neurons. Here we studied the emergence of functionally specific connectivity in large-scale recurrent networks with synaptic plasticity. Our results show that balanced random networks, which already exhibit highly selective responses at eye opening, can develop feature-specific connectivity if appropriate rules of synaptic plasticity are invoked within and between excitatory and inhibitory populations. If these conditions are met, the initial orientation selectivity guides the process of Hebbian learning and, as a result, functionally specific connectivity and a surplus of bidirectional connections emerge. Our results thus demonstrate the cooperation of synaptic plasticity and recurrent dynamics in large-scale functional networks with realistic receptive fields, highlight the role of inhibition as a critical element in this process, and pave the way for further computational …

  20. Contrast normalization contributes to a biologically-plausible model of receptive-field development in primary visual cortex (V1)

    Science.gov (United States)

    Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.

    2012-01-01

    Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
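
    One plausible reading of the modified rule described above, as a minimal sketch: each unit's rectified response is divisively normalized by the pooled population activity before a standard BCM update with a sliding modification threshold. Population size, the random stand-in for natural-image patches, and all constants are assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        n_units, patch_dim = 16, 8 * 8
        W = rng.standard_normal((n_units, patch_dim)) * 0.01
        theta = np.ones(n_units)                  # per-unit sliding modification thresholds
        eta, tau_theta, sigma = 1e-4, 1000.0, 0.1

        for _ in range(50000):
            x = rng.standard_normal(patch_dim)    # stand-in for a natural-image patch
            y = np.maximum(W @ x, 0.0)            # rectified linear responses
            y_norm = y / (sigma + y.sum())        # divisive contrast normalization
            W += eta * np.outer(y_norm * (y_norm - theta), x)     # BCM update per unit
            theta += (y_norm ** 2 - theta) / tau_theta            # thresholds track recent activity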

  1. Habituation: a non-associative learning rule design for spiking neurons and an autonomous mobile robots implementation

    International Nuclear Information System (INIS)

    Cyr, André; Boukadoum, Mounir

    2013-01-01

    This paper presents a novel bio-inspired habituation function for robots under control by an artificial spiking neural network. This non-associative learning rule is modelled at the synaptic level and validated through robotic behaviours in reaction to different stimuli patterns in a dynamical virtual 3D world. Habituation is minimally represented to show an attenuated response after exposure to and perception of persistent external stimuli. Based on current neurosciences research, the originality of this rule includes modulated response to variable frequencies of the captured stimuli. Filtering out repetitive data from the natural habituation mechanism has been demonstrated to be a key factor in the attention phenomenon, and inserting such a rule operating at multiple temporal dimensions of stimuli increases a robot's adaptive behaviours by ignoring broader contextual irrelevant information. (paper)

  2. Habituation: a non-associative learning rule design for spiking neurons and an autonomous mobile robots implementation.

    Science.gov (United States)

    Cyr, André; Boukadoum, Mounir

    2013-03-01

    This paper presents a novel bio-inspired habituation function for robots under control by an artificial spiking neural network. This non-associative learning rule is modelled at the synaptic level and validated through robotic behaviours in reaction to different stimuli patterns in a dynamical virtual 3D world. Habituation is minimally represented to show an attenuated response after exposure to and perception of persistent external stimuli. Based on current neurosciences research, the originality of this rule includes modulated response to variable frequencies of the captured stimuli. Filtering out repetitive data from the natural habituation mechanism has been demonstrated to be a key factor in the attention phenomenon, and inserting such a rule operating at multiple temporal dimensions of stimuli increases a robot's adaptive behaviours by ignoring broader contextual irrelevant information.
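
    The record above describes a synapse-level habituation rule with frequency-dependent attenuation. As a purely illustrative sketch (the depression and recovery constants are assumptions, not the authors' model), a habituating synapse can be caricatured as an efficacy that is depressed by every repeated stimulus and relaxes back to rest between stimuli, so that faster stimulus trains habituate more deeply:

        import math

        # Minimal habituation-style synapse: attenuation on each repetition,
        # exponential recovery toward the resting efficacy between stimuli.
        def habituating_synapse(stim_times, w0=1.0, depress=0.15, tau_rec=2.0):
            w, last_t, trace = w0, None, []
            for t in stim_times:
                if last_t is not None:
                    w = w0 - (w0 - w) * math.exp(-(t - last_t) / tau_rec)  # recovery
                w *= (1.0 - depress)                # attenuation on each repetition
                trace.append((t, w))
                last_t = t
            return trace

        slow = habituating_synapse([i * 1.0 for i in range(10)])   # 1 Hz train
        fast = habituating_synapse([i * 0.1 for i in range(10)])   # 10 Hz train
        # fast[-1][1] < slow[-1][1]: the higher-frequency train habituates more.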

  3. Neuromodulated Synaptic Plasticity on the SpiNNaker Neuromorphic System

    Directory of Open Access Journals (Sweden)

    Mantas Mikaitis

    2018-02-01

    SpiNNaker is a digital neuromorphic architecture, designed specifically for the low power simulation of large-scale spiking neural networks at speeds close to biological real-time. Unlike other neuromorphic systems, SpiNNaker allows users to develop their own neuron and synapse models as well as specify arbitrary connectivity. As a result, SpiNNaker has proved to be a powerful tool for studying different neuron models as well as synaptic plasticity—believed to be one of the main mechanisms behind learning and memory in the brain. A number of Spike-Timing-Dependent Plasticity (STDP) rules have already been implemented on SpiNNaker and have been shown to be capable of solving various learning tasks in real-time. However, while STDP is an important biological theory of learning, it is a form of Hebbian or unsupervised learning and therefore does not explain behaviors that depend on feedback from the environment. Instead, learning rules based on neuromodulated STDP (three-factor learning rules) have been shown to be capable of solving reinforcement learning tasks in a biologically plausible manner. In this paper we demonstrate for the first time how a model of three-factor STDP, with the third factor representing spikes from dopaminergic neurons, can be implemented on the SpiNNaker neuromorphic system. Using this learning rule we first show how reward and punishment signals can be delivered to a single synapse before going on to demonstrate it in a larger network which solves the credit assignment problem in a Pavlovian conditioning experiment. Because of its extra complexity, we find that our three-factor learning rule requires approximately 2× as much processing time as the existing SpiNNaker STDP learning rules. However, we show that it is still possible to run our Pavlovian conditioning model with up to 1 × 10^4 neurons in real-time, opening up new research opportunities for modeling behavioral learning on SpiNNaker.
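
    A minimal sketch of the three-factor idea described above: pair-based STDP updates accumulate in an eligibility trace that is only converted into a weight change when a dopamine (reward) signal arrives. Spike times, reward times and all constants below are invented for illustration and are not SpiNNaker defaults:

        import numpy as np

        # Three-factor STDP: STDP updates fill an eligibility trace; a dopamine
        # pulse gates the trace into the actual weight.
        A_plus, A_minus, tau = 0.01, 0.012, 20.0       # ms
        tau_elig, dt = 1000.0, 1.0                     # ms

        def stdp_delta(t_pre, t_post):
            d = t_post - t_pre
            return A_plus * np.exp(-d / tau) if d >= 0 else -A_minus * np.exp(d / tau)

        w, elig = 0.5, 0.0
        for step in range(5000):
            t = step * dt
            elig *= np.exp(-dt / tau_elig)             # eligibility trace decays
            if step % 50 == 0:                         # hypothetical pairing: pre leads post by 5 ms
                elig += stdp_delta(t, t + 5.0)
            if step % 1000 == 999:                     # hypothetical dopamine pulse (reward)
                w = float(np.clip(w + elig, 0.0, 1.0))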

  4. The Rule-Assessment Approach and Education.

    Science.gov (United States)

    Siegler, Robert S.

    1982-01-01

    This paper describes the rule-assessment approach to cognitive development. The basic question that motivated the rule-assessment approach is how people's existing knowledge influences their ability to learn. Research using the rule-assessment approach is summarized in terms of eight conclusions, each illustrated with empirical examples.…

  5. Binary translation using peephole translation rules

    Science.gov (United States)

    Bansal, Sorav; Aiken, Alex

    2010-05-04

    An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.

  6. Learning the Rules of the Game

    Science.gov (United States)

    Smith, Donald A.

    2018-03-01

    Games have often been used in the classroom to teach physics ideas and concepts, but there has been less published on games that can be used to teach scientific thinking. D. Maloney and M. Masters describe an activity in which students attempt to infer rules to a game from a history of moves, but the students don't actually play the game. Giving the list of moves allows the instructor to emphasize the important fact that nature usually gives us incomplete data sets, but it does make the activity less immersive. E. Kimmel suggested letting students attempt to figure out the rules to Reversi by playing it, but this game only has two players, which makes it difficult to apply in a classroom setting. Kimmel himself admits the choice of Reversi is somewhat arbitrary. There are games, however, that are designed to make the process of figuring out the rules an integral aspect of play. These games involve more people and require only a deck or two of cards. I present here an activity constructed around the card game Mao, which can be used to help students recognize aspects of scientific thinking. The game is particularly good at illustrating the importance of falsification tests (questions designed to elicit a negative answer) over verification tests (examples that confirm what is already suspected) for illuminating the underlying rules.

  7. Bridging Weighted Rules and Graph Random Walks for Statistical Relational Models

    Directory of Open Access Journals (Sweden)

    Seyed Mehran Kazemi

    2018-02-01

    The aim of statistical relational learning is to learn statistical models from relational or graph-structured data. Three main statistical relational learning paradigms include weighted rule learning, random walks on graphs, and tensor factorization. These paradigms have been mostly developed and studied in isolation for many years, with few works attempting to understand the relationship among them or to combine them. In this article, we study the relationship between the path ranking algorithm (PRA), one of the most well-known relational learning methods in the graph random walk paradigm, and relational logistic regression (RLR), one of the recent developments in weighted rule learning. We provide a simple way to normalize relations and prove that relational logistic regression using normalized relations generalizes the path ranking algorithm. This result provides a better understanding of relational learning, especially for the weighted rule learning and graph random walk paradigms. It opens up the possibility of using the more flexible RLR rules within PRA models and even generalizing both by including normalized and unnormalized relations in the same model.

  8. Single neurons in prefrontal cortex encode abstract rules.

    Science.gov (United States)

    Wallis, J D; Anderson, K C; Miller, E K

    2001-06-21

    The ability to abstract principles or rules from direct experience allows behaviour to extend beyond specific circumstances to general situations. For example, we learn the 'rules' for restaurant dining from specific experiences and can then apply them in new restaurants. The use of such rules is thought to depend on the prefrontal cortex (PFC) because its damage often results in difficulty in following rules. Here we explore its neural basis by recording from single neurons in the PFC of monkeys trained to use two abstract rules. They were required to indicate whether two successively presented pictures were the same or different depending on which rule was currently in effect. The monkeys performed this task with new pictures, thus showing that they had learned two general principles that could be applied to stimuli that they had not yet experienced. The most prevalent neuronal activity observed in the PFC reflected the coding of these abstract rules.

  9. Lessons learned from early implementation of the maintenance rule at nine nuclear power plants

    International Nuclear Information System (INIS)

    Petrone, C.D.; Correia, R.P.; Black, S.C.

    1995-06-01

    This report summarizes the lessons learned from the nine pilot site visits that were performed to review early implementation of the maintenance rule using the draft NRC Maintenance Inspection Procedure. Licensees followed NUMARC 93-01, "Industry Guideline for Monitoring the Effectiveness of Maintenance at Nuclear Power Plants." In general, the licensees were thorough in determining which structures, systems, and components (SSCs) were within the scope of the maintenance rule at each site. The use of an expert panel was an appropriate and practical method of determining which SSCs are risk significant. When setting goals, all licensees considered safety but many licensees did not consider operating experience throughout the industry. Although required to do so, licensees were not monitoring at the system or train level the performance or condition for some systems used in standby service but not significant to risk. Most licensees had not established adequate monitoring of structures under the rule. Licensees established reasonable plans for doing periodic evaluations, balancing unavailability and reliability, and assessing the effect of taking equipment out of service for maintenance. However, these plans were not evaluated because they had not been fully implemented at the time of the site visits

  10. Facilitated stimulus-response associative learning and long-term memory in mice lacking the NTAN1 amidase of the N-end rule pathway.

    Science.gov (United States)

    Balogh, S A; McDowell, C S; Tae Kwon, Y; Denenberg, V H

    2001-02-23

    The N-end rule relates the in vivo half-life of a protein to the identity of its N-terminal residue. Inactivation of the NTAN1 gene encoding the asparagine-specific N-terminal amidase in mice results in impaired spatial memory [26]. The studies described here were designed to further characterize the effects upon learning and memory of inactivating the NTAN1 gene. NTAN1-deficient mice were found to be better than wild-type mice on black-white and horizontal-vertical discrimination learning. They were also better at 8-week Morris maze retention testing when a reversal trial was not included in the testing procedures. In all three tasks NTAN1-deficient mice appeared to use a strong win-stay strategy. It is concluded that inactivating the asparagine-specific branch of the N-end rule pathway in mice results in impaired spatial learning with concomitant compensatory restructuring of the nervous system in favor of non-spatial (stimulus-response) learning.

  11. Evaluation of Machine Learning and Rules-Based Approaches for Predicting Antimicrobial Resistance Profiles in Gram-negative Bacilli from Whole Genome Sequence Data.

    Science.gov (United States)

    Pesesky, Mitchell W; Hussain, Tahir; Wallace, Meghan; Patel, Sanket; Andleeb, Saadia; Burnham, Carey-Ann D; Dantas, Gautam

    2016-01-01

    The time-to-result for culture-based microorganism recovery and phenotypic antimicrobial susceptibility testing necessitates initial use of empiric (frequently broad-spectrum) antimicrobial therapy. If the empiric therapy is not optimal, this can lead to adverse patient outcomes and contribute to increasing antibiotic resistance in pathogens. New, more rapid technologies are emerging to meet this need. Many of these are based on identifying resistance genes, rather than directly assaying resistance phenotypes, and thus require interpretation to translate the genotype into treatment recommendations. These interpretations, like other parts of clinical diagnostic workflows, are likely to be increasingly automated in the future. We set out to evaluate the two major approaches that could be amenable to automation pipelines: rules-based methods and machine learning methods. The rules-based algorithm makes predictions based upon current, curated knowledge of Enterobacteriaceae resistance genes. The machine-learning algorithm predicts resistance and susceptibility based on a model built from a training set of variably resistant isolates. As our test set, we used whole genome sequence data from 78 clinical Enterobacteriaceae isolates, previously identified to represent a variety of phenotypes, from fully-susceptible to pan-resistant strains for the antibiotics tested. We tested three antibiotic resistance determinant databases for their utility in identifying the complete resistome for each isolate. The predictions of the rules-based and machine learning algorithms for these isolates were compared to results of phenotype-based diagnostics. The rules based and machine-learning predictions achieved agreement with standard-of-care phenotypic diagnostics of 89.0 and 90.3%, respectively, across twelve antibiotic agents from six major antibiotic classes. Several sources of disagreement between the algorithms were identified. Novel variants of known resistance factors and

  12. Evaluation of Machine Learning and Rules-Based Approaches for Predicting Antimicrobial Resistance Profiles in Gram-negative Bacilli from Whole Genome Sequence Data

    Directory of Open Access Journals (Sweden)

    Mitchell Pesesky

    2016-11-01

    The time-to-result for culture-based microorganism recovery and phenotypic antimicrobial susceptibility testing necessitates initial use of empiric (frequently broad-spectrum) antimicrobial therapy. If the empiric therapy is not optimal, this can lead to adverse patient outcomes and contribute to increasing antibiotic resistance in pathogens. New, more rapid technologies are emerging to meet this need. Many of these are based on identifying resistance genes, rather than directly assaying resistance phenotypes, and thus require interpretation to translate the genotype into treatment recommendations. These interpretations, like other parts of clinical diagnostic workflows, are likely to be increasingly automated in the future. We set out to evaluate the two major approaches that could be amenable to automation pipelines: rules-based methods and machine learning methods. The rules-based algorithm makes predictions based upon current, curated knowledge of Enterobacteriaceae resistance genes. The machine-learning algorithm predicts resistance and susceptibility based on a model built from a training set of variably resistant isolates. As our test set, we used whole genome sequence data from 78 clinical Enterobacteriaceae isolates, previously identified to represent a variety of phenotypes, from fully-susceptible to pan-resistant strains for the antibiotics tested. We tested three antibiotic resistance determinant databases for their utility in identifying the complete resistome for each isolate. The predictions of the rules-based and machine learning algorithms for these isolates were compared to results of phenotype-based diagnostics. The rules-based and machine-learning predictions achieved agreement with standard-of-care phenotypic diagnostics of 89.0% and 90.3%, respectively, across twelve antibiotic agents from six major antibiotic classes. Several sources of disagreement between the algorithms were identified. Novel variants of known resistance
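
    As a toy illustration of the rules-based approach described above (the gene names and drug-gene mappings below are invented placeholders, not the curated knowledge base used in the study), a genotype-to-phenotype rule can be as simple as predicting resistance whenever any known resistance determinant for a drug is detected, and agreement with phenotypic testing is then a straightforward proportion:

        # Toy rules-based resistome interpretation and categorical agreement.
        RULES = {
            "ampicillin":    {"blaTEM-1", "blaSHV-1"},      # hypothetical determinants
            "ciprofloxacin": {"qnrS1", "gyrA_S83L"},
        }

        def predict(detected_genes, drug):
            return "R" if RULES[drug] & detected_genes else "S"

        def categorical_agreement(predictions, phenotypes):
            hits = sum(p == q for p, q in zip(predictions, phenotypes))
            return hits / len(phenotypes)

        genes = {"blaTEM-1", "aac(6')-Ib"}
        print(predict(genes, "ampicillin"))                           # "R"
        print(categorical_agreement(["R", "S", "R"], ["R", "S", "S"]))  # 0.666...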

  13. Trading Rules on Stock Markets Using Genetic Network Programming with Reinforcement Learning and Importance Index

    Science.gov (United States)

    Mabu, Shingo; Hirasawa, Kotaro; Furuzuki, Takayuki

    Genetic Network Programming (GNP) is an evolutionary computation technique that represents its solutions using graph structures. Since GNP can create quite compact programs and has an implicit memory function, it has been shown to work well especially in dynamic environments. In addition, a study on creating trading rules on stock markets using GNP with an Importance Index (GNP-IMX) has been carried out. IMX is a new element which serves as a criterion for decision making. In this paper, we combine GNP-IMX with Actor-Critic (GNP-IMX&AC) to create trading rules on stock markets. Evolution-based methods can only update their programs after a sufficient period of time, because fitness values must first be calculated; reinforcement learning, by contrast, can change programs within that period, so trading rules can be created more efficiently. In the simulation, the proposed method is trained using the stock prices of 10 brands in 2002 and 2003. Then the generalization ability is tested using the stock prices in 2004. The simulation results show that the proposed method can obtain larger profits than GNP-IMX without AC and Buy&Hold.

  14. Improving drivers' knowledge of road rules using digital games.

    Science.gov (United States)

    Li, Qing; Tay, Richard

    2014-04-01

    Although a proficient knowledge of the road rules is important to safe driving, many drivers do not retain the knowledge acquired after they have obtained their licenses. Hence, more innovative and appealing methods are needed to improve drivers' knowledge of the road rules. This study examines the effect of game based learning on drivers' knowledge acquisition and retention. We find that playing an entertaining game that is designed to impart knowledge of the road rules not only improves players' knowledge but also helps them retain such knowledge. Hence, learning by gaming appears to be a promising learning approach for driver education. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. QCD Sum Rules, a Modern Perspective

    CERN Document Server

    Colangelo, Pietro; Colangelo, Pietro; Khodjamirian, Alexander

    2001-01-01

    An introduction to the method of QCD sum rules is given for those who want to learn how to use this method. Furthermore, we discuss various applications of sum rules, from the determination of quark masses to the calculation of hadronic form factors and structure functions. Finally, we explain the idea of the light-cone sum rules and outline the recent development of this approach.

  16. Decomposition of Rotor Hopfield Neural Networks Using Complex Numbers.

    Science.gov (United States)

    Kobayashi, Masaki

    2018-04-01

    A complex-valued Hopfield neural network (CHNN) is a multistate model of a Hopfield neural network. It has the disadvantage of low noise tolerance. Meanwhile, a symmetric CHNN (SCHNN) is a modification of a CHNN that improves noise tolerance. Furthermore, a rotor Hopfield neural network (RHNN) is an extension of a CHNN. It has twice the storage capacity of CHNNs and SCHNNs, and much better noise tolerance than CHNNs, although it requires twice as many connection parameters. In this brief, we investigate the relations between CHNNs, SCHNNs, and RHNNs; an RHNN is uniquely decomposed into a CHNN and an SCHNN. In addition, the Hebbian learning rule for RHNNs is decomposed into those for CHNNs and SCHNNs.
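
    For orientation, the Hebbian storage rule for a multistate (complex-valued) Hopfield network is the conjugate outer product of the stored phase patterns. The sketch below is a generic CHNN-style example with invented sizes, not the decomposition construction of the brief:

        import numpy as np

        # Hebbian (conjugate outer-product) storage and one recall step in a
        # complex-valued Hopfield network with K phase states.
        K, n, P = 8, 50, 3
        rng = np.random.default_rng(2)
        patterns = np.exp(2j * np.pi * rng.integers(0, K, (P, n)) / K)

        W = np.zeros((n, n), dtype=complex)
        for z in patterns:
            W += np.outer(z, z.conj()) / n             # Hebbian correlation storage
        np.fill_diagonal(W, 0.0)

        def quantize(u):
            # project each unit onto the nearest of the K allowed phase states
            k = np.round(np.angle(u) * K / (2 * np.pi)) % K
            return np.exp(2j * np.pi * k / K)

        recalled = quantize(W @ patterns[0])           # one synchronous update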

  17. Learning to Learn.

    Science.gov (United States)

    Weiss, Helen; Weiss, Martin

    1988-01-01

    The article reviews theories of learning (e.g., stimulus-response, trial and error, operant conditioning, cognitive), considers the role of motivation, and summarizes nine research-supported rules of effective learning. Suggestions are applied to teaching learning strategies to learning-disabled students. (DB)

  18. Global adaptation in networks of selfish components: emergent associative memory at the system scale.

    Science.gov (United States)

    Watson, Richard A; Mills, Rob; Buckley, C L

    2011-01-01

    In some circumstances complex adaptive systems composed of numerous self-interested agents can self-organize into structures that enhance global adaptation, efficiency, or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology, and technological infrastructure design. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalization, and optimization are well understood. Such global functions within a single agent or organism are not wholly surprising, since the mechanisms (e.g., Hebbian learning) that create these neural organizations may be selected for this purpose; but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol or produce such global behaviors when acting from individual self-interest. However, Hebbian learning is actually a very simple and fully distributed habituation or positive feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g., when they can influence which other agents they interact with), then, in adapting these inter-agent relationships to maximize their own utility, they will necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviors as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalize by idealizing stored patterns and/or creating new combinations of subpatterns. Thus distributed multi-agent systems can spontaneously exhibit adaptive global behaviors in the same sense, and by the same mechanism, as with the organizational
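
    A toy rendering of the paper's core observation (highly simplified, with invented parameters): agents that repeatedly settle into states consistent with their current influences and then adjust their inter-agent weights in their own favor end up performing an update of Hebbian form, so the weight matrix comes to store visited configurations like an associative memory.

        import numpy as np

        # Self-interested relationship adaptation that is homologous with
        # Hebbian learning at the system level.
        rng = np.random.default_rng(3)
        n, eta = 40, 0.01
        s = rng.choice([-1, 1], n).astype(float)       # agent states
        W = np.zeros((n, n))

        for _ in range(5000):
            i = rng.integers(n)
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0      # agent follows current influences
            W += eta * np.outer(s, s) / n              # adapts its relationships in its favor
            np.fill_diagonal(W, 0.0)                   # == a Hebbian update on W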

  19. Predictive Acoustic Tracking with an Adaptive Neural Mechanism

    DEFF Research Database (Denmark)

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    model of the lizard peripheral auditory system to extract information regarding sound direction. This information is utilised by a neural machinery to learn the acoustic signal’s velocity through fast and unsupervised correlation-based learning adapted from differential Hebbian learning. This approach...

  20. Comparison of Natural Language Processing Rules-based and Machine-learning Systems to Identify Lumbar Spine Imaging Findings Related to Low Back Pain.

    Science.gov (United States)

    Tan, W Katherine; Hassanpour, Saeed; Heagerty, Patrick J; Rundell, Sean D; Suri, Pradeep; Huhdanpaa, Hannu T; James, Kathryn; Carrell, David S; Langlotz, Curtis P; Organ, Nancy L; Meier, Eric N; Sherman, Karen J; Kallmes, David F; Luetmer, Patrick H; Griffith, Brent; Nerenz, David R; Jarvik, Jeffrey G

    2018-03-28

    To evaluate a natural language processing (NLP) system built with open-source tools for identification of lumbar spine imaging findings related to low back pain on magnetic resonance and x-ray radiology reports from four health systems. We used a limited data set (de-identified except for dates) sampled from lumbar spine imaging reports of a prospectively assembled cohort of adults. From N = 178,333 reports, we randomly selected N = 871 to form a reference-standard dataset, consisting of N = 413 x-ray reports and N = 458 MR reports. Using standardized criteria, four spine experts annotated the presence of 26 findings, where 71 reports were annotated by all four experts and 800 were each annotated by two experts. We calculated inter-rater agreement and finding prevalence from annotated data. We randomly split the annotated data into development (80%) and testing (20%) sets. We developed an NLP system from both rule-based and machine-learned models. We validated the system using accuracy metrics such as sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The multirater annotated dataset achieved inter-rater agreement of Cohen's kappa > 0.60 (substantial agreement) for 25 of 26 findings, with finding prevalence ranging from 3% to 89%. In the testing sample, rule-based and machine-learned predictions both had comparable average specificity (0.97 and 0.95, respectively). The machine-learned approach had a higher average sensitivity (0.94, compared to 0.83 for rules-based), and a higher overall AUC (0.98, compared to 0.90 for rules-based). Our NLP system performed well in identifying the 26 lumbar spine findings, as benchmarked by reference-standard annotation by medical experts. Machine-learned models provided substantial gains in model sensitivity with slight loss of specificity, and overall higher AUC. Copyright © 2018 The Association of University Radiologists. All rights reserved.

  1. Value learning through reinforcement : The basics of dopamine and reinforcement learning

    NARCIS (Netherlands)

    Daw, N.D.; Tobler, P.N.; Glimcher, P.W.; Fehr, E.

    2013-01-01

    This chapter provides an overview of reinforcement learning and temporal difference learning and relates these topics to the firing properties of midbrain dopamine neurons. First, we review the Rescorla-Wagner learning rule and basic learning phenomena, such as blocking, which the rule explains. Then
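
    A minimal sketch of the Rescorla-Wagner rule and the blocking effect it accounts for (learning rate, asymptote and trial counts are arbitrary illustrative values):

        # Rescorla-Wagner: associative strengths are updated by a shared
        # prediction error; pretraining cue A "blocks" later learning about B.
        def train(phases, alpha=0.1):
            V = {"A": 0.0, "B": 0.0}
            for cues, lam, n_trials in phases:
                for _ in range(n_trials):
                    delta = lam - sum(V[c] for c in cues)   # prediction error
                    for c in cues:
                        V[c] += alpha * delta               # error-driven update
            return V

        # Phase 1: A alone predicts reward; Phase 2: the compound AB predicts reward.
        V = train([(("A",), 1.0, 200), (("A", "B"), 1.0, 200)])
        # V["A"] ends near 1.0 while V["B"] stays near 0: learning about B is
        # blocked because A already predicts the outcome, so the error is small.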

  2. Autonomous dynamics in neural networks: the dHAN concept and associative thought processes

    Science.gov (United States)

    Gros, Claudius

    2007-02-01

    The neural activity of the human brain is dominated by self-sustained activities. External sensory stimuli influence this autonomous activity but they do not drive the brain directly. Most standard artificial neural network models are however input driven and do not show spontaneous activities. It constitutes a challenge to develop organizational principles for controlled, self-sustained activity in artificial neural networks. Here we propose and examine the dHAN concept for autonomous associative thought processes in dense and homogeneous associative networks. An associative thought-process is characterized, within this approach, by a time-series of transient attractors. Each transient state corresponds to a stored piece of information, a memory. The subsequent transient states are characterized by large associative overlaps, which are identical to acquired patterns. Memory states, the acquired patterns, have such a dual functionality. In this approach the self-sustained neural activity has a central functional role. The network acquires a discrimination capability, as external stimuli need to compete with the autonomous activity. Noise in the input is readily filtered out. Hebbian learning of external patterns occurs coinstantaneously with the ongoing associative thought process. The autonomous dynamics needs a long-term working-point optimization which acquires within the dHAN concept a dual functionality: it stabilizes the time development of the associative thought process and limits runaway synaptic growth, which generically occurs otherwise in neural networks with self-induced activities and Hebbian-type learning rules.

  3. Different neurophysiological mechanisms underlying word and rule extraction from speech.

    Directory of Open Access Journals (Sweden)

    Ruth De Diego Balaguer

    The initial process of identifying words from spoken language and the detection of more subtle regularities underlying their structure are mandatory processes for language acquisition. Little is known about the cognitive mechanisms that allow us to extract these two types of information and their specific time-course of acquisition following initial contact with a new language. We report time-related electrophysiological changes that occurred while participants learned an artificial language. These changes strongly correlated with the discovery of the structural rules embedded in the words. These changes were clearly different from those related to word learning and occurred during the first minutes of exposure. There is a functional distinction in the nature of the electrophysiological signals during acquisition: an increase in negativity (N400) in the central electrodes is related to word learning and the development of a frontal positivity (P2) is related to rule learning. In addition, the results of an online implicit and a post-learning test indicate that, once the rules of the language have been acquired, new words following the rule are processed as words of the language. By contrast, new words violating the rule induce syntax-related electrophysiological responses when inserted online in the stream (an early frontal negativity followed by a late posterior positivity) and clear lexical effects when presented in isolation (N400 modulation). The present study provides direct evidence suggesting that the mechanisms to extract words and structural dependencies from continuous speech are functionally segregated. When these mechanisms are engaged, the electrophysiological marker associated with rule learning appears very quickly, during the earliest phases of exposure to a new language.

  4. Timing is not everything: neuromodulation opens the STDP gate

    Directory of Open Access Journals (Sweden)

    Verena Pawlak

    2010-10-01

    Spike timing dependent plasticity (STDP) is a temporally specific extension of Hebbian associative plasticity that has tied together the timing of presynaptic inputs relative to the postsynaptic single spike. However, it is difficult to translate this mechanism to in vivo conditions where there is an abundance of presynaptic activity constantly impinging upon the dendritic tree as well as ongoing postsynaptic spiking activity that backpropagates along the dendrite. Theoretical studies have proposed that, in addition to this pre- and postsynaptic activity, a 'third factor' would enable the association of specific inputs to specific outputs. Experimentally, the picture that is beginning to emerge is that in addition to the precise timing of pre- and postsynaptic spikes, this third factor involves neuromodulators that have a distinctive influence on STDP rules. Specifically, neuromodulatory systems can influence STDP rules by acting via dopaminergic, noradrenergic, muscarinic and nicotinic receptors. Neuromodulator actions can enable STDP induction or - by increasing or decreasing the threshold - can change the conditions for plasticity induction. Because some of the neuromodulators are also involved in reward, a link between STDP and reward-mediated learning is emerging. However, many outstanding questions concerning the relationship between neuromodulatory systems and STDP rules remain, that once solved, will help make the crucial link from timing-based synaptic plasticity rules to behaviorally-based learning.

  5. Rule Extraction Based on Extreme Learning Machine and an Improved Ant-Miner Algorithm for Transient Stability Assessment.

    Directory of Open Access Journals (Sweden)

    Yang Li

    In order to overcome the problems of poor understandability of the pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of ELM and Ant-miner algorithm are respectively introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. And finally, a set of classification rules are obtained by IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted from an example sample set generated by the trained ELM-based transient stability assessment model by using IAM algorithm. The effectiveness of the proposed method is shown by the application results on the New England 39-bus power system and a practical power system--the southern power system of Hebei province.

  6. Rule Extraction Based on Extreme Learning Machine and an Improved Ant-Miner Algorithm for Transient Stability Assessment.

    Science.gov (United States)

    Li, Yang; Li, Guoqing; Wang, Zhenhao

    2015-01-01

    In order to overcome the problems of poor understandability of the pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of ELM and Ant-miner algorithm are respectively introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. And finally, a set of classification rules are obtained by IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted from an example sample set generated by the trained ELM-based transient stability assessment model by using IAM algorithm. The effectiveness of the proposed method is shown by the application results on the New England 39-bus power system and a practical power system--the southern power system of Hebei province.

  7. Improving KPCA Online Extraction by Orthonormalization in the Feature Space.

    Science.gov (United States)

    Souza Filho, Joao B O; Diniz, Paulo S R

    2018-04-01

    Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both the cases, the orthogonalization of kernel components is achieved by the inclusion of some low complexity additional steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.
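
    For context, the generalized Hebbian algorithm (Sanger's rule) extracts leading principal components online; the cited work applies the same idea in a kernel feature space and adds an explicit re-orthonormalization step. The sketch below is the plain (non-kernel) GHA on toy Gaussian data, with invented dimensions and learning rate:

        import numpy as np

        # Online PCA with the generalized Hebbian algorithm (GHA).
        rng = np.random.default_rng(4)
        d, k, eta = 10, 3, 1e-3
        W = rng.normal(0, 0.1, (k, d))                 # rows approximate principal directions
        C = np.diag(np.arange(1, d + 1, dtype=float))  # toy covariance

        for _ in range(20000):
            x = rng.multivariate_normal(np.zeros(d), C)
            y = W @ x
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)   # GHA update
        # Rows of W converge (approximately) to the top-k eigenvectors of C.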

  8. Functional networks inference from rule-based machine learning models.

    Science.gov (United States)

    Lazzarini, Nicola; Widera, Paweł; Williamson, Stuart; Heer, Rakesh; Krasnogor, Natalio; Bacardit, Jaume

    2016-01-01

    Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables, that often are different/complementary to the similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes that genes used together within a rule-based machine learning model to classify the samples might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in a context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process. The

  9. Biclustering Learning of Trading Rules.

    Science.gov (United States)

    Huang, Qinghua; Wang, Ting; Tao, Dacheng; Li, Xuelong

    2015-10-01

    Technical analysis with numerous indicators and patterns has been regarded as important evidence for making trading decisions in financial markets. However, it is extremely difficult for investors to find useful trading rules based on numerous technical indicators. This paper innovatively proposes the use of biclustering mining to discover effective technical trading patterns that contain a combination of indicators from historical financial data series. This is the first attempt to use biclustering algorithm on trading data. The mined patterns are regarded as trading rules and can be classified as three trading actions (i.e., the buy, the sell, and no-action signals) with respect to the maximum support. A modified K nearest neighborhood (K-NN) method is applied to classification of trading days in the testing period. The proposed method [called biclustering algorithm and the K nearest neighbor (BIC-K-NN)] was implemented on four historical datasets and the average performance was compared with the conventional buy-and-hold strategy and three previously reported intelligent trading systems. Experimental results demonstrate that the proposed trading system outperforms its counterparts and will be useful for investment in various financial markets.

  10. Evolving fuzzy rules for relaxed-criteria negotiation.

    Science.gov (United States)

    Sim, Kwang Mong

    2008-12-01

    In the literature on automated negotiation, very few negotiation agents are designed with the flexibility to slightly relax their negotiation criteria to reach a consensus more rapidly and with more certainty. Furthermore, these relaxed-criteria negotiation agents were not equipped with the ability to enhance their performance by learning and evolving their relaxed-criteria negotiation rules. The impetus of this work is designing market-driven negotiation agents (MDAs) that not only have the flexibility of relaxing bargaining criteria using fuzzy rules, but can also evolve their structures by learning new relaxed-criteria fuzzy rules to improve their negotiation outcomes as they participate in negotiations in more e-markets. To this end, an evolutionary algorithm for adapting and evolving relaxed-criteria fuzzy rules was developed. Implementing the idea in a testbed, two kinds of experiments for evaluating and comparing EvEMDAs (MDAs with relaxed-criteria rules that are evolved using the evolutionary algorithm) and EMDAs (MDAs with relaxed-criteria rules that are manually constructed) were carried out through stochastic simulations. Empirical results show that: 1) EvEMDAs generally outperformed EMDAs in different types of e-markets and 2) the negotiation outcomes of EvEMDAs generally improved as they negotiated in more e-markets.

  11. Path integration of head direction: updating a packet of neural activity at the correct speed using neuronal time constants.

    Science.gov (United States)

    Walters, D M; Stringer, S M

    2010-07-01

    A key question in understanding the neural basis of path integration is how individual, spatially responsive, neurons may self-organize into networks that can, through learning, integrate velocity signals to update a continuous representation of location within an environment. It is of vital importance that this internal representation of position is updated at the correct speed, and in real time, to accurately reflect the motion of the animal. In this article, we present a biologically plausible model of velocity path integration of head direction that can solve this problem using neuronal time constants to effect natural time delays, over which associations can be learned through associative Hebbian learning rules. The model comprises a linked continuous attractor network and competitive network. In simulation, we show that the same model is able to learn two different speeds of rotation when implemented with two different values for the time constant, and without the need to alter any other model parameters. The proposed model could be extended to path integration of place in the environment, and path integration of spatial view.
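
    One way to picture the role of neuronal time constants in this model (a deliberately simplified sketch with invented parameters, not the paper's full architecture): while the head rotates, a Hebbian rule that pairs the current head-direction activity with a low-pass-filtered, and hence delayed, copy of that activity learns an asymmetric weight profile of the kind that can later shift the activity packet at the trained speed.

        import numpy as np

        # Learning asymmetric "rotation" weights from a delayed activity trace.
        n = 100
        angles = np.linspace(0, 2 * np.pi, n, endpoint=False)

        def bump(center):
            return np.exp(5.0 * (np.cos(angles - center) - 1.0))

        eta, tau, dt = 0.01, 0.05, 0.01                # illustrative values (s)
        W_rot = np.zeros((n, n))
        trace = np.zeros(n)
        heading = 0.0
        for _ in range(2000):
            r = bump(heading)
            trace += dt / tau * (r - trace)            # low-pass trace = delayed activity
            W_rot += eta * np.outer(r, trace)          # Hebbian pairing across the delay
            heading += 1.0 * dt                        # constant rotation during training
        # W_rot is asymmetric: each cell is driven most strongly by cells whose
        # preferred direction lags behind it, which pushes the packet forward.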

  12. Hebbian plasticity realigns grid cell activity with external sensory cues in continuous attractor models

    Directory of Open Access Journals (Sweden)

    Marcello eMulas

    2016-02-01

    After the discovery of grid cells, which are an essential component to understand how the mammalian brain encodes spatial information, three main classes of computational models were proposed in order to explain their working principles. Amongst them, the one based on continuous attractor networks (CAN) is promising in terms of biological plausibility and suitable for robotic applications. However, in its current formulation, it is unable to reproduce important electrophysiological findings and cannot be used to perform path integration for long periods of time. In fact, in the absence of an appropriate resetting mechanism, the accumulation of errors over time due to the noise intrinsic in velocity estimation and neural computation prevents CAN models from reproducing stable spatial grid patterns. In this paper, we propose an extension of the CAN model using Hebbian plasticity to anchor grid cell activity to environmental landmarks. To validate our approach we used as input to the neural simulations both artificial data and real data recorded from a robotic setup. The additional neural mechanism can not only anchor grid patterns to external sensory cues but also recall grid patterns generated in previously explored environments. These results might be instrumental for next generation bio-inspired robotic navigation algorithms that take advantage of neural computation in order to cope with complex and dynamic environments.
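
    A toy sketch of the anchoring idea (illustrative only; the paper embeds this in a full continuous attractor grid-cell model): a Hebbian association stores the attractor activity present when a landmark is first seen, and feeding that stored pattern back in on a later sighting pulls drifted activity back toward it.

        import numpy as np

        # Hebbian anchoring of attractor activity to a landmark cue.
        rng = np.random.default_rng(5)
        n = 200
        state_at_landmark = np.maximum(0.0, rng.normal(0, 1, n))   # attractor state
        w_landmark = np.zeros(n)

        w_landmark += 1.0 * state_at_landmark          # Hebbian association at first sighting

        drift = rng.normal(0, 0.3, n)                  # accumulated path-integration error
        drifted = state_at_landmark + drift
        alpha = 0.7                                    # strength of the landmark correction
        corrected = (1 - alpha) * drifted + alpha * w_landmark

        err_before = np.linalg.norm(drifted - state_at_landmark)
        err_after = np.linalg.norm(corrected - state_at_landmark)   # smaller: re-anchored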

  13. Alteration of a motor learning rule under mirror-reversal transformation does not depend on the amplitude of visual error.

    Science.gov (United States)

    Kasuga, Shoko; Kurata, Makiko; Liu, Meigen; Ushiba, Junichi

    2015-05-01

    Humans' sophisticated motor learning system paradoxically interferes with motor performance when visual information is mirror-reversed (MR), because normal movement error correction further aggravates the error. This error-increasing mechanism makes performing even a simple reaching task difficult, but is overcome by alterations in the error correction rule during the trials. To isolate factors that trigger learners to change the error correction rule, we manipulated the gain of visual angular errors when participants made arm-reaching movements with mirror-reversed visual feedback, and compared the rule alteration timing between groups with normal or reduced gain. Trial-by-trial changes in the visual angular error were tracked to explain the timing of the change in the error correction rule. Under both gain conditions, visual angular errors increased under the MR transformation, and suddenly decreased after 3-5 trials with increase. The increase became degressive at different amplitudes between the two groups, nearly proportional to the visual gain. The findings suggest that the alteration of the error-correction rule is not dependent on the amplitude of visual angular errors, and is possibly determined by the number of trials over which the errors increased or by statistical properties of the environment. The current results encourage future intensive studies focusing on the exact rule-change mechanism. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  14. Unsupervised clustering with spiking neurons by sparse temporal coding and multi-layer RBF networks

    NARCIS (Netherlands)

    S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)

    2000-01-01

    We demonstrate that spiking neural networks encoding information in spike times are capable of computing and learning clusters from realistic data. We show how a spiking neural network based on spike-time coding and Hebbian learning can successfully perform unsupervised clustering on

  15. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models

    OpenAIRE

    Hanuschkin, A.; Ganguli, S.; Hahnloser, R. H. R.

    2013-01-01

    Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop

  16. How to Learn English Grammar?

    Institute of Scientific and Technical Information of China (English)

    肖琳燃

    2017-01-01

    Grammar is an aspect of language about which learners have different opinions. Some learners are very interested in finding out or learning grammar rules and doing lots of grammar exercises. Others hate grammar and think it is the most boring part of learning a new language. Whatever opinion you have, however, you cannot escape from grammar; it is in every sentence you read or write, speak or hear. Grammar is simply the word for the rules that people follow when they use a language. We need those rules in the same way as we need the rules in a game. If there are no rules, or if everybody follows their own rules, the game would soon break down. It's the same with language; without rules we would not be able to communicate with other people. So you cannot escape from grammar, but the key question here is: what is the best way to learn grammar? You can learn the rules of a game by simply playing the game. You will certainly make mistakes; you may even get hurt. Eventually, however, you will know how to play. Of course, the rules of a language are very much more complicated than the rules of any game, but in fact this is exactly how you learned your own language. Nobody taught you the rules of your mother tongue as you were growing up but now you never make a grammar mistake.

  17. A forecast-based STDP rule suitable for neuromorphic implementation.

    Science.gov (United States)

    Davies, S; Galluppi, F; Rast, A D; Furber, S B

    2012-08-01

    Artificial neural networks increasingly involve spiking dynamics to permit greater computational efficiency. This becomes especially attractive for on-chip implementation using dedicated neuromorphic hardware. However, both spiking neural networks and neuromorphic hardware have historically found difficulties in implementing efficient, effective learning rules. The best-known spiking neural network learning paradigm is Spike Timing Dependent Plasticity (STDP) which adjusts the strength of a connection in response to the time difference between the pre- and post-synaptic spikes. Approaches that relate learning features to the membrane potential of the post-synaptic neuron have emerged as possible alternatives to the more common STDP rule, with various implementations and approximations. Here we use a new type of neuromorphic hardware, SpiNNaker, which represents the flexible "neuromimetic" architecture, to demonstrate a new approach to this problem. Based on the standard STDP algorithm with modifications and approximations, a new rule, called STDP TTS (Time-To-Spike) relates the membrane potential with the Long Term Potentiation (LTP) part of the basic STDP rule. Meanwhile, we use the standard STDP rule for the Long Term Depression (LTD) part of the algorithm. We show that on the basis of the membrane potential it is possible to make a statistical prediction of the time needed by the neuron to reach the threshold, and therefore the LTP part of the STDP algorithm can be triggered when the neuron receives a spike. In our system these approximations allow efficient memory access, reducing the overall computational time and the memory bandwidth required. The improvements here presented are significant for real-time applications such as the ones for which the SpiNNaker system has been designed. We present simulation results that show the efficacy of this algorithm using one or more input patterns repeated over the whole time of the simulation. On-chip results show that
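
    A heavily simplified sketch of the forecasting idea in STDP TTS (the constants, the membrane values and the forecast function below are invented for illustration; the actual rule uses a statistical prediction tuned to the SpiNNaker implementation): when a presynaptic spike arrives, the postsynaptic membrane potential is used to estimate how long the neuron still needs to reach threshold, and the LTP part of STDP is applied immediately using that estimate.

        import math

        # Forecast-based LTP: triggered at the presynaptic spike, using a
        # predicted time-to-spike instead of the actual postsynaptic spike time.
        A_plus, tau_plus = 0.01, 20.0          # ms
        V_rest, V_thresh = -65.0, -50.0        # mV

        def forecast_time_to_spike(v, tau_ramp=30.0):
            # crude forecast: the closer v is to threshold, the sooner a spike is expected
            frac = max(1e-3, (V_thresh - v) / (V_thresh - V_rest))
            return tau_ramp * frac

        def ltp_on_pre_spike(w, v_post, w_max=1.0):
            dt = forecast_time_to_spike(v_post)        # predicted pre-to-post interval
            return min(w_max, w + A_plus * math.exp(-dt / tau_plus))

        w = ltp_on_pre_spike(0.5, v_post=-52.0)        # near threshold -> strong LTP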

  18. Learning Cultures

    DEFF Research Database (Denmark)

    Rasmussen, Lauge Baungaard

    1998-01-01

    The article presents different concepts and models of learning. It discusses some structural tendencies of developing environmental management systems and points out alternatives to increasing formalization of rules.

  19. Effects of Memorization of Rule Statements on Acquisition and Retention of Rule-Governed Behavior in a Computer-Based Learning Task.

    Science.gov (United States)

    Towle, Nelson J.

    One hundred and twenty-four high school students were randomly assigned to four groups: 33 subjects memorized the rule statement before, 29 subjects memorized the rule statement during, and 30 subjects memorized the rule statement after instruction in rule application skills. Thirty-two subjects were not required to memorize rule statements.…

  20. Predictions of the spontaneous symmetry-breaking theory for visual code completeness and spatial scaling in single-cell learning rules.

    Science.gov (United States)

    Webber, C J

    2001-05-01

    This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.

  1. Drivers of Changes in Product Development Rules

    DEFF Research Database (Denmark)

    Christiansen, John K.; Varnes, Claus J.

    2015-01-01

    Purpose: - The purpose of this research is to investigate the drivers that induce companies to change their rules for managing product development. Most companies use a form of rule-based management approach, but surprisingly little is known about what makes companies change these rules... 10 years based on three rounds of interviews with 40 managers. Findings: - Previous research has assumed that the dynamics of product development rules are based on internal learning processes, and that increasingly competent management will stimulate the implementation of newer and more complex rule regimes. However, the analysis here indicates that there are different drivers, both internal and external, that cause companies to adopt new rules or modify their existing ones, such as changes in organizational structures, organizational conflicts, and changes in ownership or strategy. In addition...

  2. Phonological Concept Learning.

    Science.gov (United States)

    Moreton, Elliott; Pater, Joe; Pertsova, Katya

    2017-01-01

    Linguistic and non-linguistic pattern learning have been studied separately, but we argue for a comparative approach. Analogous inductive problems arise in phonological and visual pattern learning. Evidence from three experiments shows that human learners can solve them in analogous ways, and that human performance in both cases can be captured by the same models. We test GMECCS (Gradual Maximum Entropy with a Conjunctive Constraint Schema), an implementation of the Configural Cue Model (Gluck & Bower, ) in a Maximum Entropy phonotactic-learning framework (Goldwater & Johnson, ; Hayes & Wilson, ) with a single free parameter, against the alternative hypothesis that learners seek featurally simple algebraic rules ("rule-seeking"). We study the full typology of patterns introduced by Shepard, Hovland, and Jenkins () ("SHJ"), instantiated as both phonotactic patterns and visual analogs, using unsupervised training. Unlike SHJ, Experiments 1 and 2 found that both phonotactic and visual patterns that depended on fewer features could be more difficult than those that depended on more features, as predicted by GMECCS but not by rule-seeking. GMECCS also correctly predicted performance differences between stimulus subclasses within each pattern. A third experiment tried supervised training (which can facilitate rule-seeking in visual learning) to elicit simple rule-seeking phonotactic learning, but cue-based behavior persisted. We conclude that similar cue-based cognitive processes are available for phonological and visual concept learning, and hence that studying either kind of learning can lead to significant insights about the other. Copyright © 2015 Cognitive Science Society, Inc.

  3. Rapid motor learning in the translational vestibulo-ocular reflex

    Science.gov (United States)

    Zhou, Wu; Weldon, Patrick; Tang, Bingfeng; King, W. M.; Shelhamer, M. J. (Principal Investigator)

    2003-01-01

    Motor learning was induced in the translational vestibulo-ocular reflex (TVOR) when monkeys were repeatedly subjected to a brief (0.5 sec) head translation while they tried to maintain binocular fixation on a visual target for juice rewards. If the target was world-fixed, the initial eye speed of the TVOR gradually increased; if the target was head-fixed, the initial eye speed of the TVOR gradually decreased. The rate of learning acquisition was very rapid, with a time constant of approximately 100 trials; the learning was retained for at least 1 d without any reinforcement, indicating induction of long-term synaptic plasticity. Although the learning generalized to targets with different viewing distances and to head translations with different accelerations, it was highly specific for the particular combination of head motion and evoked eye movement associated with the training. For example, it was specific to the modality of the stimulus (translation vs rotation) and the direction of the evoked eye movement in the training. Furthermore, when one eye was aligned with the heading direction so that it remained motionless during training, learning was not expressed in this eye, but only in the other nonaligned eye. These specificities show that the learning sites are neither in the sensory nor the motor limb of the reflex but in the sensory-motor transformation stage of the reflex. The dependence of the learning on both head motion and evoked eye movement suggests that Hebbian learning may be one of the underlying cellular mechanisms.

  4. Critical neural networks with short- and long-term plasticity

    Science.gov (United States)

    Michiels van Kessenich, L.; Luković, M.; de Arcangelis, L.; Herrmann, H. J.

    2018-03-01

    In recent years, self-organized critical neuronal models have provided insights regarding the origin of the experimentally observed avalanching behavior of neuronal systems. It has been shown that dynamical synapses, as a form of short-term plasticity, can cause critical neuronal dynamics. Long-term plasticity, such as Hebbian or activity-dependent plasticity, in contrast, has a crucial role in shaping the network structure and endowing neural systems with learning abilities. In this work we provide a model which combines both plasticity mechanisms, acting on two different time scales. The measured avalanche statistics are compatible with experimental results for both the avalanche size and duration distributions, with biologically observed percentages of inhibitory neurons. The time series of neuronal activity exhibits temporal bursts leading to 1/f decay in the power spectrum. The presence of long-term plasticity gives the system the ability to learn binary rules such as XOR, providing the foundation for future research on more complicated tasks such as pattern recognition.

  5. Generating Concise Rules for Human Motion Retrieval

    Science.gov (United States)

    Mukai, Tomohiko; Wakisaka, Ken-Ichi; Kuriyama, Shigeru

    This paper proposes a method for retrieving human motion data with concise retrieval rules based on the spatio-temporal features of motion appearance. Our method first converts each motion clip into a clausal language that represents geometrical relations between body parts and their temporal relationships. A retrieval rule is then learned from the set of manually classified examples using inductive logic programming (ILP). ILP automatically discovers the essential rule in the same clausal form with a user-defined hypothesis-testing procedure. All motions are indexed using this clausal language, and the desired clips are retrieved by subsequence matching using the rule. Such rule-based retrieval offers reasonable performance, and the rule can be intuitively edited in the same language form. Consequently, our method enables efficient and flexible search of a large dataset with a simple query language.

  6. Class association rules mining from students’ test data (Abstract)

    NARCIS (Netherlands)

    Romero, C.; Ventura, S.; Vasilyeva, E.; Pechenizkiy, M.; Baker, de R.S.J.; Merceron, A.; Pavlik Jr., P.I.

    2010-01-01

    In this paper we propose the use of a special type of association rules mining for discovering interesting relationships from the students’ test data collected in our case with Moodle learning management system (LMS). Particularly, we apply Class Association Rule (CAR) mining to different data

  7. Learning the Rules of the Game

    Science.gov (United States)

    Smith, Donald A.

    2018-01-01

    Games have often been used in the classroom to teach physics ideas and concepts, but there has been less published on games that can be used to teach scientific thinking. D. Maloney and M. Masters describe an activity in which students attempt to infer rules to a game from a history of moves, but the students do not actually play the game. Giving…

  8. Sensorimotor learning biases choice behavior: a learning neural field model for decision making.

    Directory of Open Access Journals (Sweden)

    Christian Klaes

    Full Text Available According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action

  9. Rules of (Student) Engagement

    Science.gov (United States)

    Buskist, William; Busler, Jessica N.; Kirby, Lauren A. J.

    2018-01-01

    Teachers often think of student engagement in terms of hands-on activities that get students involved in their courses. They seldom consider the larger aspects of the teaching--learning environment that often influence the extent to which students are willing to become engaged in their coursework. In this chapter, we describe five "rules of…

  10. A fuzzy controller with a robust learning function

    International Nuclear Information System (INIS)

    Tanji, Jun-ichi; Kinoshita, Mitsuo

    1987-01-01

    A self-organizing fuzzy controller is able to use linguistic decision rules of control strategy and has a strong adaptive property by virtue of its rule learning function. While a simple linguistic description of the learning algorithm first introduced by Procyk et al. has much flexibility for applications to a wide range of different processes, its detailed formulation, in particular with respect to control stability and learning process convergence, is not clear. In this paper, we describe the formulation of an analytical basis for a self-organizing fuzzy controller by using a method of model reference adaptive control systems (MRACS) for which stability in the adaptive loop is theoretically proven. A detailed formulation is described regarding performance evaluation and rule modification in the rule learning process of the controller. Furthermore, an improved learning algorithm using an adaptive rule is proposed. An adaptive rule gives a modification coefficient for a rule change by estimating the effect of disturbance occurrences on performance evaluation. The effect of introducing an adaptive rule to improve the learning convergence is described by using a simple iterative formulation. Simulation tests are presented for an application of the proposed self-organizing fuzzy controller to the pressure control system in a Boiling Water Reactor (BWR) plant. The test results confirm that the improved learning algorithm has strong convergence properties, even in a very disturbed environment. (author)

  11. Abstract rule learning in 11- and 14-month-old infants.

    Science.gov (United States)

    Koulaguina, Elena; Shi, Rushen

    2013-02-01

    This study tests the hypothesis that distributional information can guide infants in the generalization of word order movement rules at the initial stage of language acquisition. Participants were 11- and 14-month-old infants. Stimuli were sentences in Russian, a language that was unknown to our infants. During training the word order of each sentence was transformed following a consistent pattern (e.g., ABC-BAC). During the test phase infants heard novel sentences that respected the trained rule and ones that violated the trained rule (i.e., a different transformation such as ABC-ACB). Stimuli words had highly variable phonological and morphological shapes. The cue available was the positional information of words and their non-adjacent relations across sentences. We found that 14-month-olds, but not 11-month-olds, showed evidence of abstract rule generalization to novel instances. The implications of this finding to early syntactic acquisition are discussed.

  12. Learning tinnitus

    Science.gov (United States)

    van Hemmen, J. Leo

    Tinnitus, implying the perception of sound without the presence of any acoustical stimulus, is a chronic and serious problem for about 2% of the human population. In many cases, tinnitus is a pitch-like sensation associated with a hearing loss that confines the tinnitus frequency to an interval of the tonotopic axis. Even in patients with a normal audiogram the presence of tinnitus may be associated with damage to hair-cell function in this interval. It has been suggested that homeostatic regulation and, hence, increase of activity leads to the emergence of tinnitus. For patients with hearing loss, we present spike-timing-dependent Hebbian plasticity (STDP) in conjunction with homeostasis as a mechanism for "learning" tinnitus in a realistic neuronal network with tonotopically arranged synaptic excitation and inhibition. In so doing, we use both dynamical scaling of the synaptic strengths and alteration of the resting potential of the cells. The corresponding simulations are robust to parameter changes. Understanding the mechanisms of tinnitus induction, such as the one presented here, may help improve therapy. Work done in collaboration with Julie Goulet and Michael Schneider. JLvH has been supported partially by BCCN - Munich.

  13. The HTM Spatial Pooler—A Neocortical Algorithm for Online Sparse Distributed Coding

    Directory of Open Access Journals (Sweden)

    Yuwei Cui

    2017-11-01

    Full Text Available Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
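    The combination of competitive Hebbian learning and homeostatic excitability control that the spatial pooler relies on can be sketched roughly as follows. The column counts, permanence threshold, learning rate, and boost formula here are illustrative placeholders, not HTM's actual parameters.

        import numpy as np

        # Rough spatial-pooler-style step: k-winners-take-all competition, Hebbian
        # permanence updates for the winning columns, and a homeostatic boost that
        # raises the excitability of rarely active columns.
        rng = np.random.default_rng(0)
        n_inputs, n_cols, k_active = 100, 50, 5
        perm = rng.uniform(0.0, 1.0, size=(n_cols, n_inputs))   # synaptic permanences
        boost = np.ones(n_cols)                                  # homeostatic boost factors
        duty = np.zeros(n_cols)                                  # running activity duty cycles

        def sp_step(x, lr=0.05, target_duty=k_active / n_cols):
            global duty, boost
            connected = (perm > 0.5).astype(float)        # permanence threshold -> connected synapses
            overlap = boost * (connected @ x)             # boosted feedforward overlap
            winners = np.argsort(overlap)[-k_active:]     # sparse output: k winning columns
            perm[winners] += lr * (2 * x - 1)             # Hebbian: strengthen active, weaken silent inputs
            np.clip(perm, 0.0, 1.0, out=perm)
            active = np.zeros(n_cols)
            active[winners] = 1.0
            duty = 0.99 * duty + 0.01 * active            # update duty cycles
            boost = np.exp(target_duty - duty)            # boost columns that fire too rarely
            return winners

        x = (rng.random(n_inputs) < 0.2).astype(float)    # one random binary input pattern
        print(sorted(sp_step(x)))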

  14. The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2017-01-01

    Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.

  15. Incremental learning of perceptual and conceptual representations and the puzzle of neural repetition suppression.

    Science.gov (United States)

    Gotts, Stephen J

    2016-08-01

    Incremental learning models of long-term perceptual and conceptual knowledge hold that neural representations are gradually acquired over many individual experiences via Hebbian-like activity-dependent synaptic plasticity across cortical connections of the brain. In such models, variation in task relevance of information, anatomic constraints, and the statistics of sensory inputs and motor outputs lead to qualitative alterations in the nature of representations that are acquired. Here, the proposal that behavioral repetition priming and neural repetition suppression effects are empirical markers of incremental learning in the cortex is discussed, and research results that both support and challenge this position are reviewed. Discussion is focused on a recent fMRI-adaptation study from our laboratory that shows decoupling of experience-dependent changes in neural tuning, priming, and repetition suppression, with representational changes that appear to work counter to the explicit task demands. Finally, critical experiments that may help to clarify and resolve current challenges are outlined.

  16. Sleep promotes the extraction of grammatical rules.

    Directory of Open Access Journals (Sweden)

    Ingrid L C Nieuwenhuis

    Full Text Available Grammar acquisition is a high-level cognitive function that requires the extraction of complex rules. While it has been proposed that offline time might benefit this type of rule extraction, this remains to be tested. Here, we addressed this question using an artificial grammar learning paradigm. During a short-term memory cover task, eighty-one human participants were exposed to letter sequences generated according to an unknown artificial grammar. Following a time delay of 15 min, 12 h (wake or sleep), or 24 h, participants classified novel test sequences as Grammatical or Non-Grammatical. Previous behavioral and functional neuroimaging work has shown that classification can be guided by two distinct underlying processes: (1) the holistic abstraction of the underlying grammar rules and (2) the detection of sequence chunks that appear at varying frequencies during exposure. Here, we show that classification performance improved after sleep. Moreover, this improvement was due to an enhancement of rule abstraction, while the effect of chunk frequency was unaltered by sleep. These findings suggest that sleep plays a critical role in extracting complex structure from separate but related items during integrative memory processing. Our findings stress the importance of alternating periods of learning with sleep in settings in which complex information must be acquired.

  17. Rule of Thumb and Dynamic Programming

    NARCIS (Netherlands)

    Lettau, M.; Uhlig, H.F.H.V.S.

    1995-01-01

    This paper studies the relationships between learning about rules of thumb (represented by classifier systems) and dynamic programming. Building on a result about Markovian stochastic approximation algorithms, we characterize all decision functions that can be asymptotically obtained through

  18. Learning unlearnable problems with perceptrons

    Science.gov (United States)

    Watkin, Timothy L. H.; Rau, Albrecht

    1992-03-01

    We study how well perceptrons learn to solve problems for which there is no perfect answer (the usual case), taking as examples a rule with a threshold, a rule in which the answer is not a monotonic function of the overlap between question and teacher, and a rule with many teachers (a "hard" unlearnable problem). In general there is a tendency for first-order transitions, even using spherical perceptrons, as networks compromise between conflicting requirements. Some existing learning schemes fail completely, occasionally even finding the worst possible solution; others are more successful. High-temperature learning seems more satisfactory than zero-temperature algorithms and avoids "overlearning" and "overfitting," but care must be taken to avoid "trapping" in spurious free-energy minima. For some rules examples alone are not enough to learn from, and some prior information is required.

  19. Word learning emerges from the interaction of online referent selection and slow associative learning

    Science.gov (United States)

    McMurray, Bob; Horst, Jessica S.; Samuelson, Larissa K.

    2013-01-01

    Classic approaches to word learning emphasize the problem of referential ambiguity: in any naming situation the referent of a novel word must be selected from many possible objects, properties, actions, etc. To solve this problem, researchers have posited numerous constraints, and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative model in which referent selection is an online process that is independent of long-term learning. This two timescale approach creates significant power in the developing system. We illustrate this with a dynamic associative model in which referent selection is simulated as dynamic competition between competing referents, and learning is simulated using associative (Hebbian) learning. This model can account for a range of findings including the delay in expressive vocabulary relative to receptive vocabulary, learning under high degrees of referential ambiguity using cross-situational statistics, accelerating (vocabulary explosion) and decelerating (power-law) learning rates, fast-mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between individual differences in speed of processing and learning. Five theoretical points are illustrated. 1) Word learning does not require specialized processes – general association learning buttressed by dynamic competition can account for much of the literature. 2) The processes of recognizing familiar words are not different than those that support novel words (e.g., fast-mapping). 3) Online competition may allow the network (or child) to leverage information available in the task to augment performance or behavior despite what might be relatively slow learning or poor representations. 4) Even associative learning is more complex than previously thought – a major contributor to performance is the pruning of incorrect associations
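    The two-timescale idea described above can be sketched in a few lines: on every naming event, a fast competition selects one referent among the visible objects, and a slow Hebbian update then strengthens the word-referent association while mildly pruning competitors. The vocabulary size, noise level, and learning rate below are illustrative placeholders, not the model's fitted parameters.

        import numpy as np

        # Fast online referent selection + slow associative (Hebbian) learning.
        rng = np.random.default_rng(1)
        n_words, n_objects = 6, 6
        W = np.full((n_words, n_objects), 0.1)            # word-object association strengths

        def naming_event(word, visible_objects, lr=0.05, temperature=0.2):
            support = W[word, visible_objects] + temperature * rng.random(len(visible_objects))
            chosen = visible_objects[np.argmax(support)]  # online referent selection (fast competition)
            W[word, chosen] += lr * (1.0 - W[word, chosen])          # slow associative strengthening
            others = visible_objects[visible_objects != chosen]
            W[word, others] *= 0.99                                  # prune competing associations
            return chosen

        # Cross-situational exposure: the true referent is always present among distractors.
        for _ in range(2000):
            w = rng.integers(n_words)
            visible = np.unique(np.append(rng.choice(n_objects, size=2, replace=False), w))
            naming_event(w, visible)
        print(np.argmax(W, axis=1))                       # typically recovers the mapping [0 1 2 3 4 5]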

  20. Contributions of Lateral and Orbital Frontal Regions to Abstract Rule Acquisition and Reversal in Monkeys

    Science.gov (United States)

    La Camera, Giancarlo; Bouret, Sebastien; Richmond, Barry J.

    2018-01-01

    The ability to learn and follow abstract rules relies on intact prefrontal regions including the lateral prefrontal cortex (LPFC) and the orbitofrontal cortex (OFC). Here, we investigate the specific roles of these brain regions in learning rules that depend critically on the formation of abstract concepts as opposed to simpler input-output associations. To this aim, we tested monkeys with bilateral removals of either LPFC or OFC on a rapidly learned task requiring the formation of the abstract concept of same vs. different. While monkeys with OFC removals were significantly slower than controls at both acquiring and reversing the concept-based rule, monkeys with LPFC removals were not impaired in acquiring the task, but were significantly slower at rule reversal. Neither group was impaired in the acquisition or reversal of a delayed visual cue-outcome association task without a concept-based rule. These results suggest that OFC is essential for the implementation of a concept-based rule, whereas LPFC seems essential for its modification once established. PMID:29615854

  1. Spike-timing dependent plasticity and the cognitive map.

    Science.gov (United States)

    Bush, Daniel; Philippides, Andrew; Husbands, Phil; O'Shea, Michael

    2010-01-01

    Since the discovery of place cells - single pyramidal neurons that encode spatial location - it has been hypothesized that the hippocampus may act as a cognitive map of known environments. This putative function has been extensively modeled using auto-associative networks, which utilize rate-coded synaptic plasticity rules in order to generate strong bi-directional connections between concurrently active place cells that encode for neighboring place fields. However, empirical studies using hippocampal cultures have demonstrated that the magnitude and direction of changes in synaptic strength can also be dictated by the relative timing of pre- and post-synaptic firing according to a spike-timing dependent plasticity (STDP) rule. Furthermore, electrophysiology studies have identified persistent "theta-coded" temporal correlations in place cell activity in vivo, characterized by phase precession of firing as the corresponding place field is traversed. It is not yet clear if STDP and theta-coded neural dynamics are compatible with cognitive map theory and previous rate-coded models of spatial learning in the hippocampus. Here, we demonstrate that an STDP rule based on empirical data obtained from the hippocampus can mediate rate-coded Hebbian learning when pre- and post-synaptic activity is stochastic and has no persistent sequence bias. We subsequently demonstrate that a spiking recurrent neural network that utilizes this STDP rule, alongside theta-coded neural activity, allows the rapid development of a cognitive map during directed or random exploration of an environment of overlapping place fields. Hence, we establish that STDP and phase precession are compatible with rate-coded models of cognitive map development.
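    The pairwise STDP window at the heart of such models can be written down compactly. The amplitudes and time constants below are common textbook values, not the parameters fitted to hippocampal data in this study.

        import numpy as np

        # Additive STDP with exponential windows (illustrative constants).
        A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
        TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants in ms

        def stdp_dw(t_pre, t_post):
            """Weight change for one pre/post spike pairing (spike times in ms)."""
            dt = t_post - t_pre
            if dt > 0:                     # pre fires before post: potentiation
                return A_PLUS * np.exp(-dt / TAU_PLUS)
            if dt < 0:                     # post fires before pre: depression
                return -A_MINUS * np.exp(dt / TAU_MINUS)
            return 0.0

        for dt in (-40, -10, 10, 40):      # post-minus-pre lag in ms
            print(dt, round(stdp_dw(0.0, dt), 5))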

  2. Spike-timing dependent plasticity and the cognitive map

    Directory of Open Access Journals (Sweden)

    Daniel eBush

    2010-10-01

    Full Text Available Since the discovery of place cells – single pyramidal neurons that encode spatial location – it has been hypothesised that the hippocampus may act as a cognitive map of known environments. This putative function has been extensively modelled using auto-associative networks, which utilise rate-coded synaptic plasticity rules in order to generate strong bi-directional connections between concurrently active place cells that encode for neighbouring place fields. However, empirical studies using hippocampal cultures have demonstrated that the magnitude and direction of changes in synaptic strength can also be dictated by the relative timing of pre- and post-synaptic firing according to a spike-timing dependent plasticity (STDP) rule. Furthermore, electrophysiology studies have identified persistent ‘theta-coded’ temporal correlations in place cell activity in vivo, characterised by phase precession of firing as the corresponding place field is traversed. It is not yet clear if STDP and theta-coded neural dynamics are compatible with cognitive map theory and previous rate-coded models of spatial learning in the hippocampus. Here, we demonstrate that an STDP rule based on empirical data obtained from the hippocampus can mediate rate-coded Hebbian learning when pre- and post-synaptic activity is stochastic and has no persistent sequence bias. We subsequently demonstrate that a spiking recurrent neural network that utilises this STDP rule, alongside theta-coded neural activity, allows the rapid development of a cognitive map during directed or random exploration of an environment of overlapping place fields. Hence, we establish that STDP and phase precession are compatible with rate-coded models of cognitive map development.

  3. Topological self-organization and prediction learning support both action and lexical chains in the brain.

    Science.gov (United States)

    Chersi, Fabian; Ferro, Marcello; Pezzulo, Giovanni; Pirrelli, Vito

    2014-07-01

    A growing body of evidence in cognitive psychology and neuroscience suggests a deep interconnection between sensory-motor and language systems in the brain. Based on recent neurophysiological findings on the anatomo-functional organization of the fronto-parietal network, we present a computational model showing that language processing may have reused or co-developed organizing principles, functionality, and learning mechanisms typical of premotor circuits. The proposed model combines principles of Hebbian topological self-organization and prediction learning. Trained on sequences of either motor or linguistic units, the network develops independent neuronal chains, formed by dedicated nodes encoding only context-specific stimuli. Moreover, neurons responding to the same stimulus or class of stimuli tend to cluster together to form topologically connected areas similar to those observed in the brain cortex. Simulations support a unitary explanatory framework reconciling neurophysiological motor data with established behavioral evidence on lexical acquisition, access, and recall. Copyright © 2014 Cognitive Science Society, Inc.

  4. Comparison of Heuristics for Inhibitory Rule Optimization

    KAUST Repository

    Alsolami, Fawaz; Chikalov, Igor; Moshkov, Mikhail

    2014-01-01

    The Friedman test with the Nemenyi post-hoc test is used to compare the greedy algorithms statistically against each other for length and coverage. The experiments are carried out on real datasets from the UCI Machine Learning Repository. For the leading heuristics, the constructed rules are compared with optimal ones obtained with a dynamic programming approach. The results seem to be promising for the best heuristics: the average relative difference between the length (coverage) of constructed and optimal rules is at most 2.27% (7%, respectively). Furthermore, the quality of classifiers based on sets of inhibitory rules constructed by the considered heuristics is compared, and the results show that the three best heuristics from the point of view of classification accuracy coincide with the three best-performing heuristics from the point of view of rule length minimization.

  5. Ellipsoidal fuzzy learning for smart car platoons

    Science.gov (United States)

    Dickerson, Julie A.; Kosko, Bart

    1993-12-01

    A neural-fuzzy system combined supervised and unsupervised learning to find and tune the fuzzy rules. An additive fuzzy system approximates a function by covering its graph with fuzzy rules. A fuzzy rule patch can take the form of an ellipsoid in the input-output space. Unsupervised competitive learning found the statistics of data clusters. The covariance matrix of each synaptic quantization vector defined an ellipsoid centered at the centroid of the data cluster. Tightly clustered data gave smaller ellipsoids or more certain rules. Sparse data gave larger ellipsoids or less certain rules. Supervised learning tuned the ellipsoids to improve the approximation. The supervised neural system used gradient descent to find the ellipsoidal fuzzy patches. It locally minimized the mean-squared error of the fuzzy approximation. Hybrid ellipsoidal learning estimated the control surface for a smart car controller.
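    The unsupervised half of this scheme can be sketched as follows: competitive learning moves quantization vectors toward data clusters, and each cluster's covariance then defines an ellipsoidal rule patch. The synthetic data, number of rules, and annealed learning rate are illustrative placeholders, not the smart-car data or parameters used in the paper.

        import numpy as np

        # Competitive learning of centroids, then covariance ellipsoids per cluster.
        rng = np.random.default_rng(3)
        data = np.vstack([rng.normal(c, 0.3, size=(200, 2)) for c in ((0, 0), (2, 2), (4, 0))])
        centroids = data[rng.choice(len(data), size=3, replace=False)].copy()

        for lr in np.linspace(0.1, 0.01, 20):          # annealed competitive learning
            for x in rng.permutation(data):
                win = np.argmin(np.linalg.norm(centroids - x, axis=1))
                centroids[win] += lr * (x - centroids[win])

        # Each ellipsoidal rule patch: a centroid plus the covariance of the points it wins.
        labels = np.argmin(np.linalg.norm(data[:, None, :] - centroids[None], axis=2), axis=1)
        for k in range(3):
            cov = np.cov(data[labels == k].T)
            print(centroids[k].round(2), np.linalg.eigvalsh(cov).round(3))   # ellipsoid axes ~ eigenvalues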

  6. Explanation-based learning in infancy.

    Science.gov (United States)

    Baillargeon, Renée; DeJong, Gerald F

    2017-10-01

    In explanation-based learning (EBL), domain knowledge is leveraged in order to learn general rules from few examples. An explanation is constructed for initial exemplars and is then generalized into a candidate rule that uses only the relevant features specified in the explanation; if the rule proves accurate for a few additional exemplars, it is adopted. EBL is thus highly efficient because it combines both analytic and empirical evidence. EBL has been proposed as one of the mechanisms that help infants acquire and revise their physical rules. To evaluate this proposal, 11- and 12-month-olds (n = 260) were taught to replace their current support rule (that an object is stable when half or more of its bottom surface is supported) with a more sophisticated rule (that an object is stable when half or more of the entire object is supported). Infants saw teaching events in which asymmetrical objects were placed on a base, followed by static test displays involving a novel asymmetrical object and a novel base. When the teaching events were designed to facilitate EBL, infants learned the new rule with as few as two (12-month-olds) or three (11-month-olds) exemplars. When the teaching events were designed to impede EBL, however, infants failed to learn the rule. Together, these results demonstrate that even infants, with their limited knowledge about the world, benefit from the knowledge-based approach of EBL.

  7. Increasing the thermal stability of cellulase C using rules learned from thermophilic proteins: a pilot study.

    Science.gov (United States)

    Németh, Attila; Kamondi, Szilárd; Szilágyi, András; Magyar, Csaba; Kovári, Zoltán; Závodszky, Péter

    2002-05-02

    Some structural features underlying the increased thermostability of enzymes from thermophilic organisms relative to their homologues from mesophiles are known from earlier studies. We used cellulase C from Clostridium thermocellum to test whether thermostability can be increased by mutations designed using rules learned from thermophilic proteins. Cellulase C has a TIM barrel fold with an additional helical subdomain. We designed and produced a number of mutants with the aim to increase its thermostability. Five mutants were designed to create new electrostatic interactions. They all retained catalytic activity but exhibited decreased thermostability relative to the wild-type enzyme. Here, the stabilizing contributions are obviously smaller than the destabilization caused by the introduction of the new side chains. In another mutant, the small helical subdomain was deleted. This mutant lost activity but its melting point was only 3 °C lower than that of the wild-type enzyme, which suggests that the subdomain is an independent folding unit and is important for catalytic function. A double mutant was designed to introduce a new disulfide bridge into the enzyme. This mutant is active and has an increased stability (ΔTm = 3 °C, Δ(ΔGu) = 1.73 kcal/mol) relative to the wild-type enzyme. Reduction of the disulfide bridge results in destabilization and an altered thermal denaturation behavior. We conclude that rules learned from thermophilic proteins cannot be used in a straightforward way to increase the thermostability of a protein. Creating a crosslink such as a disulfide bond is a relatively sure-fire method but the stabilization may be smaller than calculated due to coupled destabilizing effects.

  8. Comparison of Heuristics for Inhibitory Rule Optimization

    KAUST Repository

    Alsolami, Fawaz

    2014-09-13

    Knowledge representation and extraction are very important tasks in data mining. In this work, we propose a variety of rule-based greedy algorithms that are able to obtain the knowledge contained in a given dataset as a series of inhibitory rules containing an expression “attribute ≠ value” on the right-hand side. The main goal of this paper is to determine, based on rule characteristics (rule length and coverage), whether the proposed rule heuristics are statistically significantly different or not; if so, we aim to identify the best-performing rule heuristics for minimization of rule length and maximization of rule coverage. The Friedman test with the Nemenyi post-hoc test is used to compare the greedy algorithms statistically against each other for length and coverage. The experiments are carried out on real datasets from the UCI Machine Learning Repository. For the leading heuristics, the constructed rules are compared with optimal ones obtained with a dynamic programming approach. The results seem to be promising for the best heuristics: the average relative difference between the length (coverage) of constructed and optimal rules is at most 2.27% (7%, respectively). Furthermore, the quality of classifiers based on sets of inhibitory rules constructed by the considered heuristics is compared, and the results show that the three best heuristics from the point of view of classification accuracy coincide with the three best-performing heuristics from the point of view of rule length minimization.
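    As a rough illustration of what a greedy heuristic for one inhibitory rule can look like, the sketch below builds a rule of the form "if conditions then decision ≠ value" for a toy decision table. The table, target row, and greedy criterion (exclude as many offending rows as possible per added condition) are illustrative placeholders, not the specific heuristics compared in the paper.

        # Greedy construction of a single inhibitory rule on a toy decision table.
        rows = [
            {"a": 1, "b": 0, "d": "x"},
            {"a": 1, "b": 1, "d": "y"},
            {"a": 0, "b": 1, "d": "y"},
            {"a": 0, "b": 0, "d": "z"},
        ]

        def greedy_inhibitory_rule(table, row, forbidden):
            """Add conditions from `row` until no covered row has decision == forbidden."""
            conds = {}
            offending = [r for r in table if r["d"] == forbidden]   # rows the rule must exclude
            while offending:
                # Pick the attribute of `row` that excludes the most remaining offending rows.
                best = max((k for k in row if k != "d" and k not in conds),
                           key=lambda k: sum(r[k] != row[k] for r in offending))
                conds[best] = row[best]
                offending = [r for r in offending if r[best] == row[best]]
            return conds, forbidden

        conds, forbidden = greedy_inhibitory_rule(rows, rows[0], "y")
        print(f"if {conds} then d != {forbidden!r}")   # e.g. if {'b': 0} then d != 'y'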

  9. Comparison of rule induction, decision trees and formal concept analysis approaches for classification

    Science.gov (United States)

    Kotelnikov, E. V.; Milov, V. R.

    2018-05-01

    Rule-based learning algorithms have higher transparency and easiness to interpret in comparison with neural networks and deep learning algorithms. These properties make it possible to effectively use such algorithms to solve descriptive tasks of data mining. The choice of an algorithm depends also on its ability to solve predictive tasks. The article compares the quality of the solution of the problems with binary and multiclass classification based on the experiments with six datasets from the UCI Machine Learning Repository. The authors investigate three algorithms: Ripper (rule induction), C4.5 (decision trees), In-Close (formal concept analysis). The results of the experiments show that In-Close demonstrates the best quality of classification in comparison with Ripper and C4.5, however the latter two generate more compact rule sets.

  10. Who Knows? Metacognitive Social Learning Strategies.

    Science.gov (United States)

    Heyes, Cecilia

    2016-03-01

    To make good use of learning from others (social learning), we need to learn from the right others; from agents who know better than we do. Research on social learning strategies (SLSs) has identified rules that focus social learning on the right agents, and has shown that the behaviour of many animals conforms to these rules. However, it has not asked what the rules are made of, that is, about the cognitive processes implementing SLSs. Here, I suggest that most SLSs depend on domain-general, sensorimotor processes. However, some SLSs have the characteristics tacitly ascribed to all of them. These metacognitive SLSs represent 'who knows' in a conscious, reportable way, and have the power to promote cultural evolution. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Toward A Dual-Learning Systems Model of Speech Category Learning

    Directory of Open Access Journals (Sweden)

    Bharath eChandrasekaran

    2014-07-01

    Full Text Available More than two decades of work in vision posits the existence of dual-learning systems of category learning. The reflective system uses working memory to develop and test rules for classifying in an explicit fashion, while the reflexive system operates by implicitly associating perception with actions that lead to reinforcement. Dual-learning systems models hypothesize that in learning natural categories, learners initially use the reflective system and, with practice, transfer control to the reflexive system. The role of reflective and reflexive systems in auditory category learning and more specifically in speech category learning has not been systematically examined. In this article we describe a neurobiologically-constrained dual-learning systems theoretical framework that is currently being developed in speech category learning and review recent applications of this framework. Using behavioral and computational modeling approaches, we provide evidence that speech category learning is predominantly mediated by the reflexive learning system. In one application, we explore the effects of normal aging on non-speech and speech category learning. We find an age related deficit in reflective-optimal but not reflexive-optimal auditory category learning. Prominently, we find a large age-related deficit in speech learning. The computational modeling suggests that older adults are less likely to transition from simple, reflective, uni-dimensional rules to more complex, reflexive, multi-dimensional rules. In a second application we summarize a recent study examining auditory category learning in individuals with elevated depressive symptoms. We find a deficit in reflective-optimal and an enhancement in reflexive-optimal auditory category learning. Interestingly, individuals with elevated depressive symptoms also show an advantage in learning speech categories. We end with a brief summary and description of a number of future directions.

  12. Targeted training of the decision rule benefits rule-guided behavior in Parkinson's disease.

    Science.gov (United States)

    Ell, Shawn W

    2013-12-01

    The impact of Parkinson's disease (PD) on rule-guided behavior has received considerable attention in cognitive neuroscience. The majority of research has used PD as a model of dysfunction in frontostriatal networks, but very few attempts have been made to investigate the possibility of adapting common experimental techniques in an effort to identify the conditions that are most likely to facilitate successful performance. The present study investigated a targeted training paradigm designed to facilitate rule learning and application using rule-based categorization as a model task. Participants received targeted training in which there was no selective-attention demand (i.e., stimuli varied along a single, relevant dimension) or nontargeted training in which there was selective-attention demand (i.e., stimuli varied along a relevant dimension as well as an irrelevant dimension). Following training, all participants were tested on a rule-based task with selective-attention demand. During the test phase, PD patients who received targeted training performed similarly to control participants and outperformed patients who did not receive targeted training. As a preliminary test of the generalizability of the benefit of targeted training, a subset of the PD patients were tested on the Wisconsin card sorting task (WCST). PD patients who received targeted training outperformed PD patients who did not receive targeted training on several WCST performance measures. These data further characterize the contribution of frontostriatal circuitry to rule-guided behavior. Importantly, these data also suggest that PD patient impairment, on selective-attention-demanding tasks of rule-guided behavior, is not inevitable and highlight the potential benefit of targeted training.

  13. Methods for reducing interference in the Complementary Learning Systems model: oscillating inhibition and autonomous memory rehearsal.

    Science.gov (United States)

    Norman, Kenneth A; Newman, Ehren L; Perotte, Adler J

    2005-11-01

    The stability-plasticity problem (i.e. how the brain incorporates new information into its model of the world, while at the same time preserving existing knowledge) has been at the forefront of computational memory research for several decades. In this paper, we critically evaluate how well the Complementary Learning Systems theory of hippocampo-cortical interactions addresses the stability-plasticity problem. We identify two major challenges for the model: Finding a learning algorithm for cortex and hippocampus that enacts selective strengthening of weak memories, and selective punishment of competing memories; and preventing catastrophic forgetting in the case of non-stationary environments (i.e. when items are temporarily removed from the training set). We then discuss potential solutions to these problems: First, we describe a recently developed learning algorithm that leverages neural oscillations to find weak parts of memories (so they can be strengthened) and strong competitors (so they can be punished), and we show how this algorithm outperforms other learning algorithms (CPCA Hebbian learning and Leabra) at memorizing overlapping patterns. Second, we describe how autonomous re-activation of memories (separately in cortex and hippocampus) during REM sleep, coupled with the oscillating learning algorithm, can reduce the rate of forgetting of input patterns that are no longer present in the environment. We then present a simple demonstration of how this process can prevent catastrophic interference in an AB-AC learning paradigm.

  14. Storage capacity of the Tilinglike Learning Algorithm

    International Nuclear Information System (INIS)

    Buhot, Arnaud; Gordon, Mirta B.

    2001-01-01

    The storage capacity of an incremental learning algorithm for the parity machine, the Tilinglike Learning Algorithm, is analytically determined in the limit of a large number of hidden perceptrons. Different learning rules for the simple perceptron are investigated. The usual Gardner-Derrida rule leads to a storage capacity close to the upper bound, which is independent of the learning algorithm considered

  15. A neuromorphic architecture for object recognition and motion anticipation using burst-STDP.

    Directory of Open Access Journals (Sweden)

    Andrew Nere

    Full Text Available In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, then strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky-integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable fully neuromorphic systems to be implemented in hardware chips.
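    The two ingredients named above, a leaky integrate-and-fire neuron and burst-gated plasticity on binary synapses, can be sketched roughly as follows. The threshold, time constant, and burst criterion are illustrative placeholders, not the values used in the paper.

        import numpy as np

        # LIF neuron with binary synapses and a burst-gated plasticity step.
        rng = np.random.default_rng(4)
        n_in = 50
        w = rng.integers(0, 2, size=n_in).astype(float)   # binary synapses (0 or 1)

        def run_lif(spike_train, v_th=8.0, tau=20.0, dt=1.0):
            """Integrate binary-weighted input spikes; return output spike times."""
            v, out = 0.0, []
            for t, x in enumerate(spike_train):            # x: binary input vector per time step
                v += dt * (-v / tau) + w @ x
                if v >= v_th:
                    out.append(t)
                    v = 0.0                                 # reset after a spike
            return out

        def burst_stdp(pre_counts, post_spikes, burst_len=5):
            """Flip binary synapses only when the postsynaptic neuron bursts."""
            if len(post_spikes) >= burst_len:
                w[pre_counts > 0] = 1.0                     # potentiate recently active inputs
                w[pre_counts == 0] = 0.0                    # depress silent inputs

        spikes = (rng.random((100, n_in)) < 0.05).astype(float)  # 100 steps of random input
        post = run_lif(spikes)
        burst_stdp(spikes.sum(axis=0), post)
        print(len(post), int(w.sum()))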

  16. Decision rule classifiers for multi-label decision tables

    KAUST Repository

    Alsolami, Fawaz

    2014-01-01

    Recently, multi-label classification problem has received significant attention in the research community. This paper is devoted to study the effect of the considered rule heuristic parameters on the generalization error. The results of experiments for decision tables from UCI Machine Learning Repository and KEEL Repository show that rule heuristics taking into account both coverage and uncertainty perform better than the strategies taking into account a single criterion. © 2014 Springer International Publishing.

  17. Rule-governed behavior and behavioral anthropology.

    Science.gov (United States)

    Malott, R W

    1988-01-01

    According to cultural materialism, cultural practices result from the materialistic outcomes of those practices, not from sociobiological, mentalistic, or mystical predispositions (e.g., Hindus worship cows because, in the long run, that worship results in more food, not less food). However, according to behavior analysis, such materialistic outcomes do not reinforce or punish the cultural practices, because such outcomes are too delayed, too improbable, or individually too small to directly reinforce or punish the cultural practices (e.g., the food increase is too delayed to reinforce the cow worship). Therefore, the molar, materialistic contingencies need the support of molecular, behavioral contingencies. And according to the present theory of rule-governed behavior, the statement of rules describing those molar, materialistic contingencies can establish the needed molecular contingencies. Given the proper behavioral history, such rule statements combine with noncompliance to produce a learned aversive condition (often labeled fear, anxiety, or guilt). The termination of this aversive condition reinforces compliance, just as its presentation punishes noncompliance (e.g., the termination of guilt reinforces the tending to a sick cow). In addition, supernatural rules often supplement these materialistic rules. Furthermore, the production of both materialistic and supernatural rules needs cultural designers who understand the molar, materialistic contingencies.

  18. Machine learning-, rule- and pharmacophore-based classification on the inhibition of P-glycoprotein and NorA.

    Science.gov (United States)

    Ngo, T-D; Tran, T-D; Le, M-T; Thai, K-M

    2016-09-01

    The efflux pumps P-glycoprotein (P-gp) in humans and NorA in Staphylococcus aureus are of great interest for medicinal chemists because of their important roles in multidrug resistance (MDR). The high polyspecificity as well as the unavailability of high-resolution X-ray crystal structures of these transmembrane proteins led us to combine ligand-based approaches, which in the case of this study were machine learning, perceptual mapping and pharmacophore modelling. For P-gp inhibitory activity, individual models were developed using different machine learning algorithms and subsequently combined into an ensemble model which showed a good discrimination between inhibitors and noninhibitors (acc_train-diverse = 84%; acc_internal-test = 92%; acc_external-test = 100%). For ligand promiscuity between P-gp and NorA, perceptual maps and pharmacophore models were generated for the detection of rules and features. Based on these in silico tools, hit compounds for reversing MDR were discovered from the in-house and DrugBank databases through virtual screening in an attempt to restore drug sensitivity in cancer cells and bacteria.

  19. Learning and coding in biological neural networks

    Science.gov (United States)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebra finch song. Simulation and
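    The principle of correlating noisy local activity with a global reward so that the average update ascends the reward gradient can be sketched in a node-perturbation style. The task (matching a fixed target linear mapping), network size, and learning constants below are illustrative placeholders, not the birdsong model or its parameters.

        import numpy as np

        # Reward-modulated Hebbian update: (reward - baseline) x noise x presynaptic activity.
        rng = np.random.default_rng(7)
        n_in, n_out = 8, 3
        W = rng.normal(0.0, 0.1, size=(n_out, n_in))
        W_target = rng.normal(size=(n_out, n_in))         # defines the "desired" behavior
        mse = lambda A: float(np.mean((A - W_target) ** 2))
        print("before:", round(mse(W), 4))

        baseline = 0.0
        for _ in range(30000):
            x = rng.normal(size=n_in)
            noise = 0.2 * rng.normal(size=n_out)          # exploratory noise in the output activity
            y = W @ x + noise
            r = -np.sum((y - W_target @ x) ** 2)          # global scalar reward signal
            W += 0.002 * (r - baseline) * np.outer(noise, x)
            baseline = 0.99 * baseline + 0.01 * r         # running-average baseline removes bias
        print("after:", round(mse(W), 4))                 # substantially smaller than before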

  20. When more is less: Feedback effects in perceptual category learning

    Science.gov (United States)

    Maddox, W. Todd; Love, Bradley C.; Glass, Brian D.; Filoteo, J. Vincent

    2008-01-01

    Rule-based and information-integration category learning were compared under minimal and full feedback conditions. Rule-based category structures are those for which the optimal rule is verbalizable. Information-integration category structures are those for which the optimal rule is not verbalizable. With minimal feedback subjects are told whether their response was correct or incorrect, but are not informed of the correct category assignment. With full feedback subjects are informed of the correctness of their response and are also informed of the correct category assignment. An examination of the distinct neural circuits that subserve rule-based and information-integration category learning leads to the counterintuitive prediction that full feedback should facilitate rule-based learning but should also hinder information-integration learning. This prediction was supported in the experiment reported below. The implications of these results for theories of learning are discussed. PMID:18455155

  1. Learning in Artificial Neural Systems

    Science.gov (United States)

    Matheus, Christopher J.; Hohensee, William E.

    1987-01-01

    This paper presents an overview and analysis of learning in Artificial Neural Systems (ANS's). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANS's is then described, and compared with classical Machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified, and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.

  2. Boltzmann learning of parameters in cellular neural networks

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    1992-01-01

    The use of Bayesian methods to design cellular neural networks for signal processing tasks and the Boltzmann machine learning rule for parameter estimation is discussed. The learning rule can be used for models with hidden units, or for completely unsupervised learning. The latter is exemplified...

  3. Fuzzy-logic based learning style prediction in e-learning using web ...

    Indian Academy of Sciences (India)

    tion, especially in web environments and proposes to use Fuzzy rules to handle the uncertainty in .... learning in safe and supportive environment ... working of the proposed Fuzzy-logic based learning style prediction in e-learning. Section 4.

  4. Learning Expressive Linkage Rules for Entity Matching using Genetic Programming

    OpenAIRE

    Isele, Robert

    2013-01-01

    A central problem in data integration and data cleansing is to identify pairs of entities in data sets that describe the same real-world object. Many existing methods for matching entities rely on explicit linkage rules, which specify how two entities are compared for equivalence. Unfortunately, writing accurate linkage rules by hand is a non-trivial problem that requires detailed knowledge of the involved data sets. Another important issue is the efficient execution of link...

  5. Feedforward inhibition and synaptic scaling--two sides of the same coin?

    Science.gov (United States)

    Keck, Christian; Savin, Cristina; Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.
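    A rough sketch of the interaction described above is given below: Hebbian updates under soft lateral competition, with synaptic scaling keeping each neuron's total afferent weight normalized, learn the components of a toy Poisson-noise mixture. The input statistics, network sizes, and learning rate are illustrative placeholders, not the model details from the paper.

        import numpy as np

        # Hebbian learning + lateral competition + synaptic scaling on toy Poisson inputs.
        rng = np.random.default_rng(5)
        n_hidden, n_inputs = 3, 12
        W = rng.uniform(0.5, 1.5, size=(n_hidden, n_inputs))
        W /= W.sum(axis=1, keepdims=True)                  # synaptic scaling at initialization

        def learn_step(x, lr=0.02):
            global W
            log_act = x @ np.log(W).T                      # feedforward drive per hidden unit
            act = np.exp(log_act - log_act.max())
            act /= act.sum()                               # lateral inhibition as soft competition
            W += lr * act[:, None] * x[None, :]            # Hebbian (post x pre) update
            W /= W.sum(axis=1, keepdims=True)              # synaptic scaling after each update

        # Toy inputs: three prototype intensity patterns corrupted by Poisson noise.
        prototypes = np.eye(3).repeat(4, axis=1) * 5.0 + 0.1
        for _ in range(3000):
            learn_step(rng.poisson(prototypes[rng.integers(3)]).astype(float))
        print(np.argmax(W, axis=1) // 4)                   # typically one hidden unit per prototype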

  6. Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?

    Science.gov (United States)

    Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610

  7. Feedforward inhibition and synaptic scaling--two sides of the same coin?

    Directory of Open Access Journals (Sweden)

    Christian Keck

    Full Text Available Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.

  8. Stochastic Variational Learning in Recurrent Spiking Networks

    Directory of Open Access Journals (Sweden)

    Danilo eJimenez Rezende

    2014-04-01

    Full Text Available The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step towards understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.

  9. Stochastic variational learning in recurrent spiking networks.

    Science.gov (United States)

    Jimenez Rezende, Danilo; Gerstner, Wulfram

    2014-01-01

    The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step toward understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.

  10. Perceptron learning rule derived from spike-frequency adaptation and spike-time-dependent plasticity.

    Science.gov (United States)

    D'Souza, Prashanth; Liu, Shih-Chii; Hahnloser, Richard H R

    2010-03-09

    It is widely believed that sensory and motor processing in the brain is based on simple computational primitives rooted in cellular and synaptic physiology. However, many gaps remain in our understanding of the connections between neural computations and biophysical properties of neurons. Here, we show that synaptic spike-time-dependent plasticity (STDP) combined with spike-frequency adaptation (SFA) in a single neuron together approximate the well-known perceptron learning rule. Our calculations and integrate-and-fire simulations reveal that delayed inputs to a neuron endowed with STDP and SFA precisely instruct neural responses to earlier arriving inputs. We demonstrate this mechanism on a developmental example of auditory map formation guided by visual inputs, as observed in the external nucleus of the inferior colliculus (ICX) of barn owls. The interplay of SFA and STDP in model ICX neurons precisely transfers the tuning curve from the visual modality onto the auditory modality, demonstrating a useful computation for multimodal and sensory-guided processing.
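
    The classical perceptron learning rule that the abstract says is approximated by STDP plus spike-frequency adaptation can be stated in a few lines. The sketch below is a plain reference implementation on toy data (the data, threshold, and learning rate are illustrative assumptions, not the paper's spiking model).

      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 2))
      y = (X[:, 0] + X[:, 1] > 0).astype(int)   # linearly separable toy labels

      w, b, eta = np.zeros(2), 0.0, 0.1
      for _ in range(20):                       # epochs
          for xi, yi in zip(X, y):
              pred = int(w @ xi + b > 0)
              err = yi - pred                   # nonzero only on mistakes
              w += eta * err * xi               # perceptron rule: move weights toward misclassified input
              b += eta * err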

  11. A framework for plasticity implementation on the SpiNNaker neural architecture.

    Science.gov (United States)

    Galluppi, Francesco; Lagorce, Xavier; Stromatias, Evangelos; Pfeiffer, Michael; Plana, Luis A; Furber, Steve B; Benosman, Ryad B

    2014-01-01

    Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large-scale simulations of plastic neural networks on special-purpose hardware platforms, because synaptic transmissions and updates are badly matched to the computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard spike-timing-dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
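
    Of the rules listed above, standard pair-based STDP is the simplest to write down: each pre/post spike pair contributes an exponentially decaying weight change whose sign depends on the order of the spikes. The sketch below uses generic textbook parameters and an all-pairs sum; it is a hedged illustration of the rule itself, not SpiNNaker's implementation.

      import numpy as np

      def stdp_weight_change(pre_spikes, post_spikes,
                             a_plus=0.01, a_minus=0.012, tau=20.0):
          """Pair-based STDP: potentiate when pre precedes post, depress otherwise."""
          dw = 0.0
          for t_pre in pre_spikes:
              for t_post in post_spikes:
                  dt = t_post - t_pre
                  if dt > 0:
                      dw += a_plus * np.exp(-dt / tau)    # causal pairing -> LTP
                  elif dt < 0:
                      dw -= a_minus * np.exp(dt / tau)    # anti-causal pairing -> LTD
          return dw

      print(stdp_weight_change([10.0, 50.0], [15.0, 45.0]))   # net change for two pre/post spike trains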

  12. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

    A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model 'decision element' is constructed. The strategy planning of the decision element is based on e.g. value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
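
    The Bayesian updating of component availability mentioned above is a single application of Bayes' theorem. A one-step numerical illustration follows; the prior, likelihoods, and observed test result are invented numbers, not values from the model.

      # P(available | test passed) = P(passed | available) * P(available) / P(passed)
      p_avail = 0.9                      # prior availability of the component (assumed)
      p_pass_given_avail = 0.95          # probability a working component passes the test (assumed)
      p_pass_given_unavail = 0.20        # probability a failed component still passes (assumed)

      p_pass = p_pass_given_avail * p_avail + p_pass_given_unavail * (1 - p_avail)
      posterior = p_pass_given_avail * p_avail / p_pass
      print(round(posterior, 3))         # updated availability figure after a passed test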

  13. Optimization of β-decision rules relative to number of misclassifications

    KAUST Repository

    Zielosko, Beata

    2012-01-01

    In the paper, we present an algorithm for optimization of approximate decision rules relative to the number of misclassifications. The considered algorithm is based on extensions of dynamic programming and constructs a directed acyclic graph Δβ(T). Based on this graph we can describe the whole set of so-called irredundant β-decision rules. We can optimize rules from this set according to the number of misclassifications. Results of experiments with decision tables from the UCI Machine Learning Repository are presented. © 2012 Springer-Verlag.
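
    In this setting the number of misclassifications of a rule is simply the number of rows of the decision table that satisfy the rule's conditions but carry a different decision. The sketch below computes that quantity for a toy table and rule; it illustrates the optimization criterion only, not the dynamic-programming algorithm of the paper.

      def misclassifications(table, conditions, decision):
          """Rows matching every (attribute, value) condition but labeled with a different decision."""
          covered = [row for row in table
                     if all(row[a] == v for a, v in conditions.items())]
          return sum(1 for row in covered if row["decision"] != decision)

      table = [
          {"a1": 0, "a2": 1, "decision": "yes"},
          {"a1": 0, "a2": 0, "decision": "yes"},
          {"a1": 0, "a2": 1, "decision": "no"},
          {"a1": 1, "a2": 1, "decision": "no"},
      ]
      print(misclassifications(table, {"a1": 0}, "yes"))   # 1: one covered row disagrees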

  14. The effect of negative performance stereotypes on learning.

    Science.gov (United States)

    Rydell, Robert J; Rydell, Michael T; Boucher, Kathryn L

    2010-12-01

    Stereotype threat (ST) research has focused exclusively on how negative group stereotypes reduce performance. The present work examines if pejorative stereotypes about women in math inhibit their ability to learn the mathematical rules and operations necessary to solve math problems. In Experiment 1, women experiencing ST had difficulty encoding math-related information into memory and, therefore, learned fewer mathematical rules and showed poorer math performance than did controls. In Experiment 2, women experiencing ST while learning modular arithmetic (MA) performed more poorly than did controls on easy MA problems; this effect was due to reduced learning of the mathematical operations underlying MA. In Experiment 3, ST reduced women's, but not men's, ability to learn abstract mathematical rules and to transfer these rules to a second, isomorphic task. This work provides the first evidence that negative stereotypes about women in math reduce their level of mathematical learning and demonstrates that reduced learning due to stereotype threat can lead to poorer performance in negatively stereotyped domains. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  15. Bidirectional Hebbian Plasticity Induced by Low-Frequency Stimulation in Basal Dendrites of Rat Barrel Cortex Layer 5 Pyramidal Neurons.

    Science.gov (United States)

    Díez-García, Andrea; Barros-Zulaica, Natali; Núñez, Ángel; Buño, Washington; Fernández de Sevilla, David

    2017-01-01

    According to Hebb's original hypothesis (Hebb, 1949), synapses are reinforced when presynaptic activity triggers postsynaptic firing, resulting in long-term potentiation (LTP) of synaptic efficacy. Long-term depression (LTD) is a use-dependent decrease in synaptic strength that is thought to be due to synaptic input causing a weak postsynaptic effect. Although the mechanisms that mediate long-term synaptic plasticity have been investigated for at least three decades, not all questions have yet been answered. Therefore, we aimed at determining the mechanisms that generate LTP or LTD with the simplest possible protocol. Low-frequency stimulation of basal dendrite inputs in Layer 5 pyramidal neurons of the rat barrel cortex induces LTP. This stimulation triggered an EPSP, an action potential (AP) burst, and a Ca2+ spike. The same stimulation induced LTD following manipulations that reduced the Ca2+ spike and Ca2+ signal or the AP burst. Low-frequency whisker deflections induced similar bidirectional plasticity of action potential evoked responses in anesthetized rats. These results suggest that both in vitro and in vivo similar mechanisms regulate the balance between LTP and LTD. This simple induction form of bidirectional Hebbian plasticity could be present under natural conditions to regulate the detection, flow, and storage of sensorimotor information.

  16. Bidirectional Hebbian Plasticity Induced by Low-Frequency Stimulation in Basal Dendrites of Rat Barrel Cortex Layer 5 Pyramidal Neurons

    Science.gov (United States)

    Díez-García, Andrea; Barros-Zulaica, Natali; Núñez, Ángel; Buño, Washington; Fernández de Sevilla, David

    2017-01-01

    According to Hebb's original hypothesis (Hebb, 1949), synapses are reinforced when presynaptic activity triggers postsynaptic firing, resulting in long-term potentiation (LTP) of synaptic efficacy. Long-term depression (LTD) is a use-dependent decrease in synaptic strength that is thought to be due to synaptic input causing a weak postsynaptic effect. Although the mechanisms that mediate long-term synaptic plasticity have been investigated for at least three decades, not all questions have yet been answered. Therefore, we aimed at determining the mechanisms that generate LTP or LTD with the simplest possible protocol. Low-frequency stimulation of basal dendrite inputs in Layer 5 pyramidal neurons of the rat barrel cortex induces LTP. This stimulation triggered an EPSP, an action potential (AP) burst, and a Ca2+ spike. The same stimulation induced LTD following manipulations that reduced the Ca2+ spike and Ca2+ signal or the AP burst. Low-frequency whisker deflections induced similar bidirectional plasticity of action potential evoked responses in anesthetized rats. These results suggest that both in vitro and in vivo similar mechanisms regulate the balance between LTP and LTD. This simple induction form of bidirectional Hebbian plasticity could be present under natural conditions to regulate the detection, flow, and storage of sensorimotor information. PMID:28203145

  17. Exploring the Virtuality Continuum for Complex Rule-Set Education in the Context of Soccer Rule Comprehension

    Directory of Open Access Journals (Sweden)

    Andrés N. Vargas González

    2017-11-01

    Full Text Available We present an exploratory study to assess the benefits of using Augmented Reality (AR) in training sports rule comprehension. Soccer is the chosen context for this study due to the wide range of complexity in the rules and regulations. Observers must understand and holistically evaluate the proximity of players in the game to the ball and other visual objects, such as the goal, penalty area, and other players. Grounded in previous literature investigating the effects of Virtual Reality (VR) scenarios on transfer of training (ToT), we explore how three different interfaces influence user perception using both qualitative and quantitative measures. To better understand how effective augmented reality technology is when combined with learning systems, we compare results on the effects of learning outcomes in three interface conditions: AR, VR and a traditional Desktop interface. We also compare these interfaces as measured by user experience, engagement, and immersion. Results show that there was no significant difference between VR and AR; however, participants in both conditions outperformed the Desktop group, which needed a higher number of adaptations to acquire the same knowledge.

  18. Group learning versus local learning: Which is prefer for public cooperation?

    Science.gov (United States)

    Yang, Shi-Han; Song, Qi-Qing

    2018-01-01

    We study the evolution of cooperation in public goods games on various graphs, focusing on the effects brought about by different kinds of strategy donors. This highlights a basic feature of a public goods game, for which there exists a remarkable difference between the interactive players and the players who are imitated. A player can learn from all the groups of which it is a member or from the typically local nearest neighbors, and the results show that the group learning rules have better performance in promoting cooperation on many networks than the local learning rules. Degree heterogeneity of the network may be an effective mechanism for harvesting cooperation in many cases; however, we find that heterogeneity does not necessarily imply a high frequency of cooperators in a population under group learning rules. It has been shown that cooperators hardly evolve whenever interaction and replacement do not coincide in evolutionary pairwise dilemmas on graphs, while for public goods games we find that breaking this symmetry is conducive to the survival of cooperators.

  19. Brain-like associative learning using a nanoscale non-volatile phase change synaptic device array

    Directory of Open Access Journals (Sweden)

    Sukru Burc Eryilmaz

    2014-07-01

    Full Text Available Recent advances in neuroscience together with nanoscale electronic device technology have resulted in great interest in realizing brain-like computing hardware using emerging nanoscale memory devices as synaptic elements. Although there has been experimental work that demonstrated the operation of nanoscale synaptic elements at the single-device level, network-level studies have been limited to simulations. In this work, we demonstrate, using experiments, array-level associative learning using phase change synaptic devices connected in a grid-like configuration similar to the organization of the biological brain. Implementing Hebbian learning with phase change memory cells, the synaptic grid was able to store presented patterns and recall missing patterns in an associative brain-like fashion. We found that the system is robust to device variations, and large variations in cell resistance states can be accommodated by increasing the number of training epochs. We illustrated the tradeoff between variation tolerance of the network and the overall energy consumption, and found that energy consumption is decreased significantly for lower variation tolerance.
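
    The storage-and-recall behaviour described above follows the standard Hebbian outer-product scheme for associative memories. The sketch below is a conventional Hopfield-style software model with ±1 patterns; it shows only the learning and recall steps and omits all device-level details such as resistance variation.

      import numpy as np

      patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                           [1,  1, 1,  1, -1, -1, -1, -1]])

      # Hebbian storage: each synapse accumulates the correlation of the two neurons it connects.
      W = sum(np.outer(p, p) for p in patterns).astype(float)
      np.fill_diagonal(W, 0)

      cue = patterns[0].copy()
      cue[:2] = -cue[:2]                  # corrupt part of the stored pattern
      for _ in range(5):                  # recurrent recall dynamics
          cue = np.sign(W @ cue)
      print(cue)                          # recovers patterns[0]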

  20. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  1. Decision rule classifiers for multi-label decision tables

    KAUST Repository

    Alsolami, Fawaz; Azad, Mohammad; Chikalov, Igor; Moshkov, Mikhail

    2014-01-01

    for decision tables from UCI Machine Learning Repository and KEEL Repository show that rule heuristics taking into account both coverage and uncertainty perform better than the strategies taking into account a single criterion. © 2014 Springer International

  2. Diversity of Rule-based Approaches: Classic Systems and Recent Applications

    Directory of Open Access Journals (Sweden)

    Grzegorz J. Nalepa

    2016-11-01

    Full Text Available Rules are a common symbolic model of knowledge. Rule-based systems share roots in cognitive science and artificial intelligence. In the former, they are mostly used in cognitive architectures; in the latter, they are developed in several domains including knowledge engineering and machine learning. This paper aims to give an overview of these issues with the focus on the current research perspective of artificial intelligence. Moreover, in this setting we discuss our results in the design of rule-based systems and their applications in context-aware and business intelligence systems.

  3. A Hybrid Genetic Programming Algorithm for Automated Design of Dispatching Rules.

    Science.gov (United States)

    Nguyen, Su; Mei, Yi; Xue, Bing; Zhang, Mengjie

    2018-06-04

    Designing effective dispatching rules for production systems is a difficult and time-consuming task if it is done manually. In the last decade, the growth of computing power, advanced machine learning, and optimisation techniques has made the automated design of dispatching rules possible, and automatically discovered rules are competitive with or outperform existing rules developed by researchers. Genetic programming is one of the most popular approaches to discovering dispatching rules in the literature, especially for complex production systems. However, the large heuristic search space may restrict genetic programming from finding near optimal dispatching rules. This paper develops a new hybrid genetic programming algorithm for dynamic job shop scheduling based on a new representation, a new local search heuristic, and efficient fitness evaluators. Experiments show that the new method is effective regarding the quality of evolved rules. Moreover, evolved rules are also significantly smaller and contain more relevant attributes.

  4. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  5. Burst-induced anti-Hebbian depression acts through short-term synaptic dynamics to cancel redundant sensory signals.

    Science.gov (United States)

    Harvey-Girard, Erik; Lewis, John; Maler, Leonard

    2010-04-28

    Weakly electric fish can enhance the detection and localization of important signals such as those of prey in part by cancellation of redundant spatially diffuse electric signals due to, e.g., their tail bending. The cancellation mechanism is based on descending input, conveyed by parallel fibers emanating from cerebellar granule cells, that produces a negative image of the global low-frequency signals in pyramidal cells within the first-order electrosensory region, the electrosensory lateral line lobe (ELL). Here we demonstrate that the parallel fiber synaptic input to ELL pyramidal cell undergoes long-term depression (LTD) whenever both parallel fiber afferents and their target cells are stimulated to produce paired burst discharges. Paired large bursts (4-4) induce robust LTD over pre-post delays of up to +/-50 ms, whereas smaller bursts (2-2) induce weaker LTD. Single spikes (either presynaptic or postsynaptic) paired with bursts did not induce LTD. Tetanic presynaptic stimulation was also ineffective in inducing LTD. Thus, we have demonstrated a form of anti-Hebbian LTD that depends on the temporal correlation of burst discharge. We then demonstrated that the burst-induced LTD is postsynaptic and requires the NR2B subunit of the NMDA receptor, elevation of postsynaptic Ca(2+), and activation of CaMKIIbeta. A model incorporating local inhibitory circuitry and previously identified short-term presynaptic potentiation of the parallel fiber synapses further suggests that the combination of burst-induced LTD, presynaptic potentiation, and local inhibition may be sufficient to explain the generation of the negative image and cancellation of redundant sensory input by ELL pyramidal cells.
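
    A common functional reading of this result is that anti-Hebbian depression at parallel-fiber synapses builds a negative image of the predictable, self-generated input, which then cancels it at the pyramidal cell. The rate-based caricature below uses invented sinusoidal "tail position" signals and sparse "prey" events; the real mechanism is burst- and spike-timing-dependent, so this is only a sketch of the cancellation idea.

      import numpy as np

      rng = np.random.default_rng(2)
      T, eta = 2000, 0.01
      phase = np.linspace(0, 40 * np.pi, T)
      context = np.stack([np.sin(phase), np.cos(phase)])   # parallel-fiber signals (e.g. tail position)
      redundant = 1.5 * np.sin(phase)                      # predictable self-generated input
      prey = (rng.random(T) < 0.01) * 2.0                  # sparse informative events

      w = np.zeros(2)
      for t in range(T):
          sensory = redundant[t] + prey[t]
          output = sensory + w @ context[:, t]             # pyramidal-cell response
          w -= eta * output * context[:, t]                # anti-Hebbian: correlated activity is depressed

      # After learning, w @ context approximates -redundant, so the output mostly reflects the prey events.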

  6. Retrieving Knowledge in Social Situations: A Test of the Implicit Rules Model.

    Science.gov (United States)

    Meyer, Janet R.

    1996-01-01

    Supports the Implicit Rules Model, which suggests that individuals acquire implicit rules that connect request situation schemas to behaviors. Shows how individuals, in two experiments, learned, based on feedback, which behaviors were "correct" for multiple instances, and then, on their own, chose the correct behavior for new instances.…

  7. Looking for exceptions on knowledge rules induced from HIV cleavage data set

    Directory of Open Access Journals (Sweden)

    Ronaldo Cristiano Prati

    2004-01-01

    Full Text Available The aim of data mining is to find useful knowledge in databases. In order to extract such knowledge, several methods can be used, among them machine learning (ML) algorithms. In this work we focus on ML algorithms that express the extracted knowledge in a symbolic form, such as rules. This representation may allow us to "explain" the data. Rule learning algorithms are mainly designed to induce classification rules that can predict new cases with high accuracy. However, these sorts of rules generally express common sense knowledge, resulting in many interesting and useful rules not being discovered. Furthermore, the domain-independent biases, especially those related to the language used to express the induced knowledge, could induce rules that are difficult to understand. Exceptions might be used in order to overcome these drawbacks. Exceptions are defined as rules that contradict common beliefs. This kind of rule can play an important role in the process of understanding the underlying data as well as in making critical decisions. By contradicting the user's common beliefs, exceptions are bound to be interesting. This work proposes a method to find exceptions. In order to illustrate the potential of our approach, we apply the method to a real-world data set to discover rules and exceptions in the HIV virus protein cleavage process. A good understanding of the process that generates this data plays an important role in the research on cleavage inhibitors. We believe that the proposed approach may help the domain expert to further understand this process.

  8. Grounded understanding of abstract concepts: The case of STEM learning.

    Science.gov (United States)

    Hayes, Justin C; Kraemer, David J M

    2017-01-01

    Characterizing the neural implementation of abstract conceptual representations has long been a contentious topic in cognitive science. At the heart of the debate is whether the "sensorimotor" machinery of the brain plays a central role in representing concepts, or whether the involvement of these perceptual and motor regions is merely peripheral or epiphenomenal. The domain of science, technology, engineering, and mathematics (STEM) learning provides an important proving ground for sensorimotor (or grounded) theories of cognition, as concepts in science and engineering courses are often taught through laboratory-based and other hands-on methodologies. In this review of the literature, we examine evidence suggesting that sensorimotor processes strengthen learning associated with the abstract concepts central to STEM pedagogy. After considering how contemporary theories have defined abstraction in the context of semantic knowledge, we propose our own explanation for how body-centered information, as computed in sensorimotor brain regions and visuomotor association cortex, can form a useful foundation upon which to build an understanding of abstract scientific concepts, such as mechanical force. Drawing from theories in cognitive neuroscience, we then explore models elucidating the neural mechanisms involved in grounding intangible concepts, including Hebbian learning, predictive coding, and neuronal recycling. Empirical data on STEM learning through hands-on instruction are considered in light of these neural models. We conclude the review by proposing three distinct ways in which the field of cognitive neuroscience can contribute to STEM learning by bolstering our understanding of how the brain instantiates abstract concepts in an embodied fashion.

  9. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.

    Science.gov (United States)

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one relying on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other relying on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.

  10. Can power spectrum observations rule out slow-roll inflation?

    OpenAIRE

    Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony

    2017-01-01

    The spectral index of scalar perturbations is an important observable that allows us to learn about inflationary physics. In particular, a detection of a significant deviation from a constant spectral index could enable us to rule out the simplest class of inflation models. We investigate whether future observations could rule out canonical single-field slow-roll inflation given the parameters allowed by current observational constraints. We find that future measurements of a constant running...

  11. Identifying Human Phenotype Terms by Combining Machine Learning and Validation Rules

    Directory of Open Access Journals (Sweden)

    Manuel Lobo

    2017-01-01

    Full Text Available Named-Entity Recognition is commonly used to identify biological entities such as proteins, genes, and chemical compounds found in scientific articles. The Human Phenotype Ontology (HPO) is an ontology that provides a standardized vocabulary for phenotypic abnormalities found in human diseases. This article presents the Identifying Human Phenotypes (IHP) system, tuned to recognize HPO entities in unstructured text. IHP uses Stanford CoreNLP for text processing and applies Conditional Random Fields trained with a rich feature set, which includes linguistic, orthographic, morphologic, lexical, and context features created for the machine learning-based classifier. However, the main novelty of IHP is its validation step based on a set of carefully crafted manual rules, such as the negative connotation analysis, that combined with a dictionary can filter incorrectly identified entities, find missed entities, and combine adjacent entities. The performance of IHP was evaluated using the recently published HPO Gold Standardized Corpora (GSC), where the system Bio-LarK CR obtained the best F-measure of 0.56. IHP achieved an F-measure of 0.65 on the GSC. Due to inconsistencies found in the GSC, an extended version of the GSC was created, adding 881 entities and modifying 4 entities. IHP achieved an F-measure of 0.863 on the new GSC.

  12. A Numerical Comparison of Rule Ensemble Methods and Support Vector Machines

    Energy Technology Data Exchange (ETDEWEB)

    Meza, Juan C.; Woods, Mark

    2009-12-18

    Machine or statistical learning is a growing field that encompasses many scientific problems including estimating parameters from data, identifying risk factors in health studies, image recognition, and finding clusters within datasets, to name just a few examples. Statistical learning can be described as 'learning from data', with the goal of making a prediction of some outcome of interest. This prediction is usually made on the basis of a computer model that is built using data where the outcomes and a set of features have been previously matched. The computer model is called a learner, hence the name machine learning. In this paper, we present two such algorithms, a support vector machine method and a rule ensemble method. We compared their predictive power on three type Ia supernova data sets provided by the Nearby Supernova Factory and found that while both methods give accuracies of approximately 95%, the rule ensemble method gives much lower false negative rates.

  13. A BCM theory of meta-plasticity for online self-reorganizing fuzzy-associative learning.

    Science.gov (United States)

    Tan, Javan; Quek, Chai

    2010-06-01

    Self-organizing neurofuzzy approaches have matured in their online learning of fuzzy-associative structures under time-invariant conditions. To maximize their operative value for online reasoning, these self-sustaining mechanisms must also be able to reorganize fuzzy-associative knowledge in real-time dynamic environments. Hence, it is critical to recognize that they would require self-reorganizational skills to rebuild fluid associative structures when their existing organizations fail to respond well to changing circumstances. In this light, while Hebbian theory (Hebb, 1949) is the basic computational framework for associative learning, it is less attractive for time-variant online learning because it suffers from stability limitations that impede unlearning. Instead, this paper adopts the Bienenstock-Cooper-Munro (BCM) theory of neurological learning via meta-plasticity principles (Bienenstock et al., 1982) that provides for both online associative and dissociative learning. For almost three decades, BCM theory has been shown to effectively brace physiological evidence of synaptic potentiation (association) and depression (dissociation) into a sound mathematical framework for computational learning. This paper proposes an interpretation of the BCM theory of meta-plasticity for an online self-reorganizing fuzzy-associative learning system to realize online-reasoning capabilities. Experimental findings are twofold: 1) the analysis using the S&P-500 stock index illustrated that the self-reorganizing approach could follow the trajectory shifts in the time-variant S&P-500 index for about 60 years, and 2) the benchmark profiles showed that the fuzzy-associative approach yielded comparable results with other fuzzy-precision models with similar online objectives.
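
    The BCM rule adopted above can be written compactly: the Hebbian change is potentiating when the postsynaptic rate exceeds a sliding modification threshold and depressing below it, with the threshold tracking recent activity. The rate-based sketch below uses generic parameters and random inputs; it illustrates the rule itself, not the paper's fuzzy-associative system.

      import numpy as np

      rng = np.random.default_rng(3)
      D, steps = 5, 5000
      eta, tau_theta = 0.005, 0.02

      w = rng.random(D) * 0.5
      theta = 0.1                                   # sliding modification threshold
      for _ in range(steps):
          x = rng.random(D)                         # presynaptic rates
          y = max(w @ x, 0.0)                       # postsynaptic rate
          w += eta * x * y * (y - theta)            # BCM: potentiation above theta, depression below
          w = np.clip(w, 0.0, None)
          theta += tau_theta * (y ** 2 - theta)     # threshold tracks the running average of y^2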

  14. Relationships between length and coverage of decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2014-01-01

    The paper describes a new tool for studying relationships between the length and coverage of exact decision rules. This tool is based on a dynamic programming approach. We also present results of experiments with decision tables from the UCI Machine Learning Repository.

  15. Relationships between length and coverage of decision rules

    KAUST Repository

    Amin, Talha

    2014-02-14

    The paper describes a new tool for studying relationships between the length and coverage of exact decision rules. This tool is based on a dynamic programming approach. We also present results of experiments with decision tables from the UCI Machine Learning Repository.

  16. Automatic Learning of Fine Operating Rules for Online Power System Security Control.

    Science.gov (United States)

    Sun, Hongbin; Zhao, Feng; Wang, Hao; Wang, Kang; Jiang, Weiyong; Guo, Qinglai; Zhang, Boming; Wehenkel, Louis

    2016-08-01

    Fine operating rules for security control and an automatic system for their online discovery were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and then a continuation power flow-based security analysis is used to compute the initial transfer capability of critical flowgates. Next, the system applies Monte Carlo simulations to expected short-term operating condition changes, followed by feature selection and a linear least squares fitting of the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules provide accuracy and good interpretability and are suitable for real-time power system security control. The use of high-performance computing systems enables these fine operating rules to be refreshed online every 15 min.
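
    The final fitting step described above is an ordinary linear least-squares regression of a flowgate's transfer capability on selected operating features. A minimal sketch on synthetic data follows; the feature meanings and coefficients are assumptions made for illustration, not values from the paper's system.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 200
      features = rng.normal(size=(n, 3))                # e.g. selected loads and generation outputs
      capability = features @ np.array([1.5, -0.7, 0.3]) + 10.0 + rng.normal(scale=0.1, size=n)

      A = np.hstack([features, np.ones((n, 1))])        # add an intercept column
      coef, *_ = np.linalg.lstsq(A, capability, rcond=None)
      print(np.round(coef, 2))                          # recovers roughly [1.5, -0.7, 0.3, 10.0]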

  17. Integrating the behavioral and neural dynamics of response selection in a dual-task paradigm: a dynamic neural field model of Dux et al. (2009).

    Science.gov (United States)

    Buss, Aaron T; Wifall, Tim; Hazeltine, Eliot; Spencer, John P

    2014-02-01

    People are typically slower when executing two tasks than when only performing a single task. These dual-task costs are initially robust but are reduced with practice. Dux et al. (2009) explored the neural basis of dual-task costs and learning using fMRI. Inferior frontal junction (IFJ) showed a larger hemodynamic response on dual-task trials compared with single-task trials early in learning. As dual-task costs were eliminated, dual-task hemodynamics in IFJ reduced to single-task levels. Dux and colleagues concluded that the reduction of dual-task costs is accomplished through increased efficiency of information processing in IFJ. We present a dynamic field theory of response selection that addresses two questions regarding these results. First, what mechanism leads to the reduction of dual-task costs and associated changes in hemodynamics? We show that a simple Hebbian learning mechanism is able to capture the quantitative details of learning at both the behavioral and neural levels. Second, is efficiency isolated to cognitive control areas such as IFJ, or is it also evident in sensory motor areas? To investigate this, we restrict Hebbian learning to different parts of the neural model. None of the restricted learning models showed the same reductions in dual-task costs as the unrestricted learning model, suggesting that efficiency is distributed across cognitive control and sensory motor processing systems.

  18. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha

    2012-10-04

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to the length or coverage. We introduce an uncertainty measure J(T) which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  19. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to the length or coverage. We introduce an uncertainty measure J(T) which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.
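
    The uncertainty measure J(T) used in the two records above is just the number of rows of the subtable minus the number of rows carrying its most common decision; partitioning stops once J(T) ≤ γ. A small sketch with an assumed toy decision table:

      from collections import Counter

      def J(table):
          """Number of rows of T minus the number of rows with the most common decision for T."""
          decisions = Counter(row["decision"] for row in table)
          return len(table) - max(decisions.values())

      table = [
          {"color": "red",   "size": 1, "decision": "A"},
          {"color": "red",   "size": 2, "decision": "A"},
          {"color": "blue",  "size": 1, "decision": "B"},
          {"color": "green", "size": 2, "decision": "A"},
      ]
      gamma = 1
      print(J(table), J(table) <= gamma)   # 1 True: partitioning of this subtable may stop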

  20. Complexity, Training Paradigm Design, and the Contribution of Memory Subsystems to Grammar Learning

    Science.gov (United States)

    Ettlinger, Marc; Wong, Patrick C. M.

    2016-01-01

    Although there is variability in nonnative grammar learning outcomes, the contributions of training paradigm design and memory subsystems are not well understood. To examine this, we presented learners with an artificial grammar that formed words via simple and complex morphophonological rules. Across three experiments, we manipulated training paradigm design and measured subjects' declarative, procedural, and working memory subsystems. Experiment 1 demonstrated that passive, exposure-based training boosted learning of both simple and complex grammatical rules, relative to no training. Additionally, procedural memory correlated with simple rule learning, whereas declarative memory correlated with complex rule learning. Experiment 2 showed that presenting corrective feedback during the test phase did not improve learning. Experiment 3 revealed that structuring the order of training so that subjects are first exposed to the simple rule and then the complex improved learning. The cumulative findings shed light on the contributions of grammatical complexity, training paradigm design, and domain-general memory subsystems in determining grammar learning success. PMID:27391085

  1. Complexity, Training Paradigm Design, and the Contribution of Memory Subsystems to Grammar Learning.

    Science.gov (United States)

    Antoniou, Mark; Ettlinger, Marc; Wong, Patrick C M

    2016-01-01

    Although there is variability in nonnative grammar learning outcomes, the contributions of training paradigm design and memory subsystems are not well understood. To examine this, we presented learners with an artificial grammar that formed words via simple and complex morphophonological rules. Across three experiments, we manipulated training paradigm design and measured subjects' declarative, procedural, and working memory subsystems. Experiment 1 demonstrated that passive, exposure-based training boosted learning of both simple and complex grammatical rules, relative to no training. Additionally, procedural memory correlated with simple rule learning, whereas declarative memory correlated with complex rule learning. Experiment 2 showed that presenting corrective feedback during the test phase did not improve learning. Experiment 3 revealed that structuring the order of training so that subjects are first exposed to the simple rule and then the complex improved learning. The cumulative findings shed light on the contributions of grammatical complexity, training paradigm design, and domain-general memory subsystems in determining grammar learning success.

  2. Complexity, Training Paradigm Design, and the Contribution of Memory Subsystems to Grammar Learning.

    Directory of Open Access Journals (Sweden)

    Mark Antoniou

    Full Text Available Although there is variability in nonnative grammar learning outcomes, the contributions of training paradigm design and memory subsystems are not well understood. To examine this, we presented learners with an artificial grammar that formed words via simple and complex morphophonological rules. Across three experiments, we manipulated training paradigm design and measured subjects' declarative, procedural, and working memory subsystems. Experiment 1 demonstrated that passive, exposure-based training boosted learning of both simple and complex grammatical rules, relative to no training. Additionally, procedural memory correlated with simple rule learning, whereas declarative memory correlated with complex rule learning. Experiment 2 showed that presenting corrective feedback during the test phase did not improve learning. Experiment 3 revealed that structuring the order of training so that subjects are first exposed to the simple rule and then the complex improved learning. The cumulative findings shed light on the contributions of grammatical complexity, training paradigm design, and domain-general memory subsystems in determining grammar learning success.

  3. The Role of Age and Executive Function in Auditory Category Learning

    Science.gov (United States)

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987

  4. Evolution of cooperation driven by incremental learning

    Science.gov (United States)

    Li, Pei; Duan, Haibin

    2015-02-01

    It has been shown that the details of microscopic rules in structured populations can have a crucial impact on the ultimate outcome in evolutionary games. Alternative formulations of strategies and of their revision processes, exploring how strategies are actually adopted and spread within the interaction network, therefore need to be studied. In the present work, we formulate the strategy update rule as an incremental learning process, wherein knowledge is refreshed according to one's own experience learned from the past (self-learning) and that gained from social interaction (social-learning). More precisely, we propose a continuous version of strategy update rules, by introducing the willingness to cooperate W, to better capture the flexibility of decision-making behavior. Importantly, the newly gained knowledge including self-learning and social learning is weighted by the parameter ω, establishing a strategy update rule involving an innovative element. Moreover, we quantify the macroscopic features of the emerging patterns to inspect the underlying mechanisms of the evolutionary process using six cluster characteristics. In order to further support our results, we examine the time evolution of these characteristics. Our results might provide insights for understanding cooperative behaviors and have several important implications for understanding how individuals adjust their strategies under real-life conditions.

  5. Decision Rules, Trees and Tests for Tables with Many-valued Decisions–comparative Study

    KAUST Repository

    Azad, Mohammad; Zielosko, Beata; Moshkov, Mikhail; Chikalov, Igor

    2013-01-01

    In this paper, we present three approaches for construction of decision rules for decision tables with many-valued decisions. We construct decision rules directly for rows of the decision table, based on paths in a decision tree, and based on attributes contained in a test (super-reduct). Experimental results for data sets taken from the UCI Machine Learning Repository contain a comparison of the maximum and average length of rules for the mentioned approaches.

  6. Decision Rules, Trees and Tests for Tables with Many-valued Decisions–comparative Study

    KAUST Repository

    Azad, Mohammad

    2013-10-04

    In this paper, we present three approaches for construction of decision rules for decision tables with many-valued decisions. We construct decision rules directly for rows of the decision table, based on paths in a decision tree, and based on attributes contained in a test (super-reduct). Experimental results for data sets taken from the UCI Machine Learning Repository contain a comparison of the maximum and average length of rules for the mentioned approaches.

  7. Knowledge discovery with classification rules in a cardiovascular dataset.

    Science.gov (United States)

    Podgorelec, Vili; Kokol, Peter; Stiglic, Milojka Molan; Hericko, Marjan; Rozman, Ivan

    2005-12-01

    In this paper we study an evolutionary machine learning approach to data mining and knowledge discovery based on the induction of classification rules. A method for automatic rule induction called AREX, using evolutionary induction of decision trees and automatic programming, is introduced. The proposed algorithm is applied to a cardiovascular dataset consisting of different groups of attributes which should possibly reveal the presence of some specific cardiovascular problems in young patients. A case study is presented that shows the use of AREX for the classification of patients and for discovering possible new medical knowledge from the dataset. The defined knowledge discovery loop comprises a medical expert's assessment of induced rules to drive the evolution of rule sets towards more appropriate solutions. The final result is the discovery of possible new medical knowledge in the field of pediatric cardiology.

  8. Assessing predation risk: optimal behaviour and rules of thumb.

    Science.gov (United States)

    Welton, Nicky J; McNamara, John M; Houston, Alasdair I

    2003-12-01

    We look at a simple model in which an animal makes behavioural decisions over time in an environment in which all parameters are known to the animal except predation risk. In the model there is a trade-off between gaining information about predation risk and anti-predator behaviour. All predator attacks lead to death for the prey, so that the prey learns about predation risk by virtue of the fact that it is still alive. We show that it is not usually optimal to behave as if the current unbiased estimate of the predation risk is its true value. We consider two different ways to model reproduction; in the first scenario the animal reproduces throughout its life until it dies, and in the second scenario expected reproductive success depends on the level of energy reserves the animal has gained by some point in time. For both of these scenarios we find results on the form of the optimal strategy and give numerical examples which compare optimal behaviour with behaviour under simple rules of thumb. The numerical examples suggest that the value of the optimal strategy over the rules of thumb is greatest when there is little current information about predation risk, learning is not too costly in terms of predation, and it is energetically advantageous to learn about predation. We find that for the model and parameters investigated, a very simple rule of thumb such as 'use the best constant control' performs well.

  9. Design and Analysis of Decision Rules via Dynamic Programming

    KAUST Repository

    Amin, Talha M.

    2017-04-24

    The areas of machine learning, data mining, and knowledge representation have many different formats used to represent information. Decision rules, amongst these formats, are the most expressive and easily-understood by humans. In this thesis, we use dynamic programming to design decision rules and analyze them. The use of dynamic programming allows us to work with decision rules in ways that were previously only possible for brute force methods. Our algorithms allow us to describe the set of all rules for a given decision table. Further, we can perform multi-stage optimization by repeatedly reducing this set to only contain rules that are optimal with respect to selected criteria. One way that we apply this study is to generate small systems with short rules by simulating a greedy algorithm for the set cover problem. We also compare maximum path lengths (depth) of deterministic and non-deterministic decision trees (a non-deterministic decision tree is effectively a complete system of decision rules) with regards to Boolean functions. Another area of advancement is the presentation of algorithms for constructing Pareto optimal points for rules and rule systems. This allows us to study the existence of “totally optimal” decision rules (rules that are simultaneously optimal with regards to multiple criteria). We also utilize Pareto optimal points to compare and rate greedy heuristics with regards to two criteria at once. Another application of Pareto optimal points is the study of trade-offs between cost and uncertainty which allows us to find reasonable systems of decision rules that strike a balance between length and accuracy.

  10. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.

    Science.gov (United States)

    Burbank, Kendra S

    2015-12-01

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
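
    For reference, the objective that the mirrored-STDP network is argued to approximately minimize is the familiar autoencoder reconstruction error, with one set of weights on the feedforward pathway and one on the feedback pathway. The sketch below is a conventional rate-based linear autoencoder trained by gradient descent, shown only to make the two pathways and the error signal explicit; it is not the spiking mSTDP implementation.

      import numpy as np

      rng = np.random.default_rng(4)
      D, H, eta = 8, 3, 0.005
      X = rng.normal(size=(2000, D))

      W_ff = rng.normal(scale=0.1, size=(H, D))    # feedforward (encoding) weights
      W_fb = rng.normal(scale=0.1, size=(D, H))    # feedback (decoding) weights

      for x in X:
          h = W_ff @ x                             # hidden activity
          x_hat = W_fb @ h                         # reconstruction through the feedback connections
          err = x - x_hat                          # reconstruction error
          W_fb += eta * np.outer(err, h)           # Hebbian-looking term: error times hidden activity
          W_ff += eta * np.outer(W_fb.T @ err, x)  # gradient step on the reconstruction error for the feedforward weights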

  11. Learning CAD at University through Summaries of the Rules of Design Intent

    Science.gov (United States)

    Barbero, Basilio Ramos; Pedrosa, Carlos Melgosa; Samperio, Raúl Zamora

    2017-01-01

    The ease with which 3D CAD models may be modified and reused are two key aspects that improve the design-intent variable and that can significantly shorten the development timelines of a product. A set of rules are gathered from various authors that take different 3D modelling strategies into account. These rules are then applied to CAD…

  12. A Collaborative Educational Association Rule Mining Tool

    Science.gov (United States)

    Garcia, Enrique; Romero, Cristobal; Ventura, Sebastian; de Castro, Carlos

    2011-01-01

    This paper describes a collaborative educational data mining tool based on association rule mining for the ongoing improvement of e-learning courses and allowing teachers with similar course profiles to share and score the discovered information. The mining tool is oriented to be used by non-expert instructors in data mining so its internal…

  13. Optimization of Approximate Inhibitory Rules Relative to Number of Misclassifications

    KAUST Repository

    Alsolami, Fawaz

    2013-10-04

    In this work, we consider so-called nonredundant inhibitory rules, containing an expression “attribute ≠ value” on the right-hand side, for which the number of misclassifications is at most a threshold γ. We study a dynamic programming approach for description of the considered set of rules. This approach also allows the optimization of nonredundant inhibitory rules relative to the length and coverage. The aim of this paper is to investigate an additional possibility of optimization relative to the number of misclassifications. The results of experiments with decision tables from the UCI Machine Learning Repository show that this additional optimization achieves fewer misclassifications. Thus, the proposed optimization procedure is promising.

  14. Anisotropic interaction rules in circular motions of pigeon flocks: An empirical study based on sparse Bayesian learning

    Science.gov (United States)

    Chen, Duxin; Xu, Bowen; Zhu, Tao; Zhou, Tao; Zhang, Hai-Tao

    2017-08-01

    Coordination is deemed to be the result of interindividual interaction in natural gregarious animal groups. However, revealing the underlying interaction rules and decision-making strategies governing highly coordinated motion in bird flocks is still a long-standing challenge. Based on analysis of high spatiotemporal-resolution GPS data from three pigeon flocks, we extract the hidden interaction principle by using a newly emerging machine learning method, namely sparse Bayesian learning. It is observed that the interaction probability has an inflection point at a pairwise distance of 3-4 m, closer than the average maximum interindividual distance, after which it decays strictly with rising pairwise metric distance. Significantly, the density of the spatial neighbor distribution is strongly anisotropic, with an evident lack of interactions along an individual's velocity direction. Thus, it is found that in small bird flocks, individuals reciprocally cooperate with a varying number of neighbors in metric space and tend to interact with closer, time-varying neighbors, rather than with a fixed number of topological ones. Finally, extensive numerical investigation is conducted to verify both the revealed interaction and the decision-making principle during circular flights of pigeon flocks.
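
    The interaction-extraction step can be illustrated with a generic sparse Bayesian regression in which candidate neighbor influences enter as regressors and an automatic-relevance-determination prior prunes those with negligible evidence. The snippet below uses scikit-learn's ARDRegression on synthetic data and is only a schematic stand-in for the flock analysis; the features, sample size, and noise level are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 observations, 10 candidate neighbor-influence features,
# of which only three actually drive the focal bird's response.
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[[0, 2, 5]] = [0.8, -0.5, 0.3]
y = X @ true_w + 0.05 * rng.normal(size=200)

# Sparse Bayesian learning (ARD) drives irrelevant interaction weights toward zero.
model = ARDRegression().fit(X, y)
print(np.round(model.coef_, 2))
```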

  15. Algorithm for detecting violations of traffic rules based on computer vision approaches

    Directory of Open Access Journals (Sweden)

    Ibadov Samir

    2017-01-01

    Full Text Available We propose a new algorithm for the automatic detection of traffic-rule violations, aimed at improving pedestrian safety at unregulated crossings. The algorithm proceeds in several steps: zebra-crossing detection, car detection, and pedestrian detection. For car detection, we use the Faster R-CNN deep learning framework. The algorithm shows promising results in detecting violations of traffic rules.

  16. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

    This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to length and coverage. It also contains results of experiments with decision tables from the UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  17. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha

    2013-02-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T), which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and after that, among these rules, describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
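
    The uncertainty measure R(T) introduced above, the number of unordered pairs of rows with different decisions, is straightforward to compute from the decision counts; a subtable is left unpartitioned once R(T) ≤ β. A minimal sketch with an invented toy decision table (the last column is the decision):

```python
from collections import Counter

def uncertainty_R(table):
    """R(T): number of unordered row pairs of T that carry different decisions."""
    counts = Counter(row[-1] for row in table)
    n = sum(counts.values())
    same_decision_pairs = sum(c * (c - 1) // 2 for c in counts.values())
    return n * (n - 1) // 2 - same_decision_pairs

# Toy decision table: attribute values followed by the decision.
T = [(0, 1, "a"), (0, 0, "a"), (1, 1, "b"), (1, 0, "a")]
beta = 2
print(uncertainty_R(T), uncertainty_R(T) <= beta)  # R(T)=3 > beta, so keep partitioning
```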

  18. Learning Spatiotemporally Encoded Pattern Transformations in Structured Spiking Neural Networks.

    Science.gov (United States)

    Gardner, Brian; Sporea, Ioana; Grüning, André

    2015-12-01

    Information encoding in the nervous system is supported through the precise spike timings of neurons; however, an understanding of the underlying processes by which such representations are formed in the first place remains an open question. Here we examine how multilayered networks of spiking neurons can learn to encode for input patterns using a fully temporal coding scheme. To this end, we introduce a new supervised learning rule, MultilayerSpiker, that can train spiking networks containing hidden layer neurons to perform transformations between spatiotemporal input and output spike patterns. The performance of the proposed learning rule is demonstrated in terms of the number of pattern mappings it can learn, the complexity of network structures it can be used on, and its classification accuracy when using multispike-based encodings. In particular, the learning rule displays robustness against input noise and can generalize well on an example data set. Our approach contributes to both a systematic understanding of how computations might take place in the nervous system and a learning rule that displays strong technical capability.

  19. Embedding responses in spontaneous neural activity shaped through sequential learning.

    Directory of Open Access Journals (Sweden)

    Tomoki Kurikawa

    Full Text Available Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using a simple hebbian-type learning, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through the learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a similar manner to those designed in

  20. Precise-spike-driven synaptic plasticity: learning hetero-association of spatiotemporal spike patterns.

    Directory of Open Access Journals (Sweden)

    Qiang Yu

    Full Text Available A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.

  1. Precise-spike-driven synaptic plasticity: learning hetero-association of spatiotemporal spike patterns.

    Science.gov (United States)

    Yu, Qiang; Tang, Huajin; Tan, Kay Chen; Li, Haizhou

    2013-01-01

    A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.
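
    The structure of the update described above can be written schematically as Δw_i(t) ∝ [s_d(t) − s_o(t)] · e_i(t), where s_d and s_o are the desired and actual output spike trains and e_i is an eligibility trace driven by the spikes of afferent i. The discretized sketch below uses invented parameters and spike trains and is meant only to convey the shape of the rule, not to reproduce the published simulations.

```python
import numpy as np

DT, TAU_S, ETA = 1.0, 5.0, 0.05   # time step (ms), trace time constant, learning rate

def psd_weight_change(pre, desired, actual):
    """Accumulated weight change for one afferent (binary spike trains of equal length)."""
    trace, dw = 0.0, 0.0
    for t in range(len(pre)):
        trace += DT * (-trace / TAU_S) + pre[t]   # eligibility trace from afferent spikes
        error = desired[t] - actual[t]            # +1 -> potentiation, -1 -> depression
        dw += ETA * error * trace
    return dw

rng = np.random.default_rng(1)
pre = (rng.random(100) < 0.05).astype(float)   # afferent spike train
desired = np.zeros(100)
desired[[30, 70]] = 1.0                        # desired output spikes
actual = np.zeros(100)
actual[[30, 55]] = 1.0                         # actual output spikes
print(psd_weight_change(pre, desired, actual))
```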

  2. The Role of Feedback Contingency in Perceptual Category Learning

    Science.gov (United States)

    Ashby, F. Gregory; Vucovich, Lauren E.

    2016-01-01

    Feedback is highly contingent on behavior if it eventually becomes easy to predict, and weakly contingent on behavior if it remains difficult or impossible to predict even after learning is complete. Many studies have demonstrated that humans and nonhuman animals are highly sensitive to feedback contingency, but no known studies have examined how feedback contingency affects category learning, and current theories assign little or no importance to this variable. Two experiments examined the effects of contingency degradation on rule-based and information-integration category learning. In rule-based tasks, optimal accuracy is possible with a simple explicit rule, whereas optimal accuracy in information-integration tasks requires integrating information from two or more incommensurable perceptual dimensions. In both experiments, participants each learned rule-based or information-integration categories under either high or low levels of feedback contingency. The exact same stimuli were used in all four conditions and optimal accuracy was identical in every condition. Learning was good in both high-contingency conditions, but most participants showed little or no evidence of learning in either low-contingency condition. Possible causes of these effects are discussed, as well as their theoretical implications. PMID:27149393

  3. Recurrent-neural-network-based Boolean factor analysis and its application to word clustering.

    Science.gov (United States)

    Frolov, Alexander A; Husek, Dusan; Polyakov, Pavel Yu

    2009-07-01

    The objective of this paper is to introduce a neural-network-based algorithm for word clustering as an extension of the neural-network-based Boolean factor analysis algorithm (Frolov, 2007). It is shown that this extended algorithm supports the even more complex model of signals that are supposed to be related to textual documents. It is hypothesized that every topic in textual data is characterized by a set of words which coherently appear in documents dedicated to a given topic. The appearance of each word in a document is coded by the activity of a particular neuron. In accordance with the Hebbian learning rule implemented in the network, sets of coherently appearing words (treated as factors) create tightly connected groups of neurons, hence revealing them as attractors of the network dynamics. The found factors are eliminated from the network memory by the Hebbian unlearning rule, facilitating the search for other factors. Topics related to the found sets of words can be identified based on the words' semantics. To make the method complete, a special technique based on a Bayesian procedure has been developed for two purposes: first, to provide a complete description of factors in terms of component probability, and second, to enhance the accuracy of classifying whether a signal contains a given factor. Since it is assumed that every word may possibly contribute to several topics, the proposed method might be related to the method of fuzzy clustering. In this paper, we show that the results of Boolean factor analysis and fuzzy clustering are not contradictory, but complementary. To demonstrate the capabilities of this approach, the method is applied to two types of textual data on neural networks in two different languages. The obtained topics and corresponding words are in good agreement despite the fact that identical topics in Russian and English conferences contain different sets of keywords.
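
    The Hebbian learning and unlearning steps mentioned above can be sketched on binary ±1 patterns: coherently active units are stored with an outer-product (Hebbian) update, and an attractor that has already been found is removed by subtracting the same outer product, freeing the dynamics to settle into the remaining factors. The network size, patterns, and recall procedure below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def hebbian_store(W, patterns, lr=1.0):
    """Hebbian rule: strengthen connections between coherently active units."""
    for x in patterns:                      # x is a +/-1 vector
        W += lr * np.outer(x, x)
    np.fill_diagonal(W, 0.0)
    return W

def hebbian_unlearn(W, attractor, lr=1.0):
    """Hebbian unlearning: erase a found factor so other attractors can be revealed."""
    W -= lr * np.outer(attractor, attractor)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=10):
    """Crude synchronous recall by repeated thresholding of the net input."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1.0, -1.0)
    return x

rng = np.random.default_rng(2)
n = 16
factors = [np.sign(rng.normal(size=n)) for _ in range(2)]
W = hebbian_store(np.zeros((n, n)), factors)
found = recall(W, factors[0] + 0.3 * rng.normal(size=n))   # converge near the first factor
W = hebbian_unlearn(W, found)                              # then remove it from memory
print(float(found @ factors[0]) / n)   # overlap with the stored factor (1.0 = perfect recall)
```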

  4. A multiplicative reinforcement learning model capturing learning dynamics and interindividual variability in mice

    OpenAIRE

    Bathellier, Brice; Tee, Sui Poh; Hrovat, Christina; Rumpel, Simon

    2013-01-01

    Learning speed can strongly differ across individuals. This is seen in humans and animals. Here, we measured learning speed in mice performing a discrimination task and developed a theoretical model based on the reinforcement learning framework to account for differences between individual mice. We found that, when using a multiplicative learning rule, the starting connectivity values of the model strongly determine the shape of learning curves. This is in contrast to current learning models ...
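
    The contrast emphasized in the abstract, between multiplicative and additive updates and the resulting sensitivity to initial connectivity, can be shown with a toy two-alternative model. The reward scheme, learning rate, and initial weights below are assumptions for illustration, not the fitted model of the paper.

```python
import numpy as np

def simulate(w0, lr=0.2, trials=200, multiplicative=True, seed=0):
    """Toy discrimination task: a single weight w sets P(correct); reward updates it."""
    rng = np.random.default_rng(seed)
    w = w0
    p_correct = 0.0
    for _ in range(trials):
        p_correct = 1.0 / (1.0 + np.exp(-w))
        reward = 1.0 if rng.random() < p_correct else -1.0
        if multiplicative:
            w *= 1.0 + lr * reward     # update size scales with the current weight
        else:
            w += lr * reward           # additive baseline for comparison
    return p_correct

# Small differences in initial connectivity yield very different end points under the
# multiplicative rule, mimicking interindividual variability in learning curves.
for w0 in (0.05, 0.2, 0.8):
    print(w0, round(simulate(w0, multiplicative=True), 2),
          round(simulate(w0, multiplicative=False), 2))
```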

  5. A "Sweet 16" of Rules About Teamwork

    Science.gov (United States)

    Laufer, Alexander (Editor)

    2002-01-01

    The following "Sweet 16" rules included in this paper derive from a longer paper by APPL Director Dr. Edward Hoffman and myself entitled " 99 Rules for Managing Faster, Better, Cheaper Projects." Our sources consisted mainly of "war stories" told by master project managers in my book Simultaneous Management: Managing Projects in a Dynamic Environment (AMACOM, The American Management Association, 1996). The Simultaneous Management model was a result of 10 years of intensive research and testing conducted with the active participation of master project managers from leading private organizations such as AT&T, DuPont, Exxon, General Motors, IBM, Motorola and Procter & Gamble. In a more recent study, led by Dr. Hoffman, we learned that master project managers in leading public organizations employ most of these rules as well. Both studies, in private and public organizations, found that a dynamic environment calls for dynamic management, and that is especially clear in how successful project managers think about their teams.

  6. Modeling the learning of the English past tense with memory-based learning

    NARCIS (Netherlands)

    van Noord, Rik; Spenader, Jennifer K.

    2015-01-01

    Modeling the acquisition and final state of English past tense inflection has been an ongoing challenge since the mid-eighties. A number of rule-based and connectionist models have been proposed over the years, but the former usually have no explanation of how the rules are learned and the latter

  7. What can we learn from sum rules for vertex functions in QCD

    International Nuclear Information System (INIS)

    Craigie, N.S.; Stern, J.

    1982-04-01

    We demonstrate that the light cone sum rules for vertex functions based on the operator product expansion and QCD perturbation theory lead to interesting relationships between various non-perturbative parameters associated with hadronic bound states (e.g. vertex couplings and decay constants). We also show that such sum rules provide a valuable means of estimating the matrix elements of the higher spin operators in the meson wave function. (author)

  8. A Cross-Correlated Delay Shift Supervised Learning Method for Spiking Neurons with Application to Interictal Spike Detection in Epilepsy.

    Science.gov (United States)

    Guo, Lilin; Wang, Zhenzhong; Cabrerizo, Mercedes; Adjouadi, Malek

    2017-05-01

    This study introduces a novel learning algorithm for spiking neurons, called CCDS, which is able to learn and reproduce arbitrary spike patterns in a supervised fashion allowing the processing of spatiotemporal information encoded in the precise timing of spikes. Unlike the Remote Supervised Method (ReSuMe), synapse delays and axonal delays in CCDS are variants which are modulated together with weights during learning. The CCDS rule is both biologically plausible and computationally efficient. The properties of this learning rule are investigated extensively through experimental evaluations in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters and classification performance. Results presented show that the CCDS learning method achieves learning accuracy and learning speed comparable with ReSuMe, but improves classification accuracy when compared to both the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. The merit of CCDS rule is further validated on a practical example involving the automated detection of interictal spikes in EEG records of patients with epilepsy. Results again show that with proper encoding, the CCDS rule achieves good recognition performance.

  9. Multimedia Football Viewing: Embedded Rules, Practice, and Video Context in IVD Procedural Learning.

    Science.gov (United States)

    Kim, Eunsoon; Young, Michael F.

    This study investigated the effects of interactive video (IVD) instruction with embedded rules (production system rules) and practice with feedback on learners' academic achievement and perceived self efficacy in the domain of procedural knowledge for watching professional football. Subjects were 71 female volunteers from undergraduate education…

  10. Binary Factorization in Hopfield-Like Neural Networks: Single-Step Approximation and Computer Simulations

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Sirota, A.M.; Húsek, Dušan; Muraviev, I. P.

    2004-01-01

    Roč. 14, č. 2 (2004), s. 139-152 ISSN 1210-0552 R&D Projects: GA ČR GA201/01/1192 Grant - others:BARRANDE(EU) 99010-2/99053; Intellectual computer Systems(EU) Grant 2.45 Institutional research plan: CEZ:AV0Z1030915 Keywords : nonlinear binary factor analysis * feature extraction * recurrent neural network * Single-Step approximation * neurodynamics simulation * attraction basins * Hebbian learning * unsupervised learning * neuroscience * brain function modeling Subject RIV: BA - General Mathematics

  11. Rule-based Test Generation with Mind Maps

    Directory of Open Access Journals (Sweden)

    Dimitry Polivaev

    2012-02-01

    Full Text Available This paper introduces basic concepts of rule-based test generation with mind maps and reports experience gained from industrial application of this technique in the domain of smart-card testing at Giesecke & Devrient GmbH over recent years. It describes the formalization of the test selection criteria used by our test generator, our test generation architecture, and our test generation framework.

  12. Visual Perceptual Learning and its Specificity and Transfer: A New Perspective

    Directory of Open Access Journals (Sweden)

    Cong Yu

    2011-05-01

    Full Text Available Visual perceptual learning is known to be location and orientation specific, and is thus assumed to reflect the neuronal plasticity in the early visual cortex. However, in recent studies we created “Double training” and “TPE” procedures to demonstrate that these “fundamental” specificities of perceptual learning are in some sense artifacts and that learning can completely transfer to a new location or orientation. We proposed a rule-based learning theory to reinterpret perceptual learning and its specificity and transfer: A high-level decision unit learns the rules of performing a visual task through training. However, the learned rules cannot be applied to a new location or orientation automatically because the decision unit cannot functionally connect to new visual inputs with sufficient strength because these inputs are unattended or even suppressed during training. It is double training and TPE training that reactivate these new inputs, so that the functional connections can be strengthened to enable rule application and learning transfer. Currently we are investigating the properties of perceptual learning free from the bogus specificities, and the results provide some preliminary but very interesting insights into how training reshapes the functional connections between the high-level decision units and sensory inputs in the brain.

  13. Collaboration rules.

    Science.gov (United States)

    Evans, Philip; Wolf, Bob

    2005-01-01

    Corporate leaders seeking to boost growth, learning, and innovation may find the answer in a surprising place: the Linux open-source software community. Linux is developed by an essentially volunteer, self-organizing community of thousands of programmers. Most leaders would sell their grandmothers for workforces that collaborate as efficiently, frictionlessly, and creatively as the self-styled Linux hackers. But Linux is software, and software is hardly a model for mainstream business. The authors have, nonetheless, found surprising parallels between the anarchistic, caffeinated, hirsute world of Linux hackers and the disciplined, tea-sipping, clean-cut world of Toyota engineering. Specifically, Toyota and Linux operate by rules that blend the self-organizing advantages of markets with the low transaction costs of hierarchies. In place of markets' cash and contracts and hierarchies' authority are rules about how individuals and groups work together (with rigorous discipline); how they communicate (widely and with granularity); and how leaders guide them toward a common goal (through example). Those rules, augmented by simple communication technologies and a lack of legal barriers to sharing information, create rich common knowledge, the ability to organize teams modularly, extraordinary motivation, and high levels of trust, which radically lowers transaction costs. Low transaction costs, in turn, make it profitable for organizations to perform more and smaller transactions--and so increase the pace and flexibility typical of high-performance organizations. Once the system achieves critical mass, it feeds on itself. The larger the system, the more broadly shared the knowledge, language, and work style. The greater individuals' reputational capital, the louder the applause and the stronger the motivation. The success of Linux is evidence of the power of that virtuous circle. Toyota's success is evidence that it is also powerful in conventional companies.

  14. Optimization of approximate decision rules relative to number of misclassifications

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    In this paper, we study an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to the number of misclassifications. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T). Based on this graph, we can describe the whole set of so-called irredundant γ-decision rules. We can optimize rules from this set according to the number of misclassifications. Results of experiments with decision tables from the UCI Machine Learning Repository are presented. © 2012 The authors and IOS Press. All rights reserved.
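
    The uncertainty measure J(T) defined above has a one-line computation: the number of rows of the (sub)table minus the count of its most common decision; a subtable is treated as terminal once J(T) ≤ γ. A minimal sketch with an invented toy table:

```python
from collections import Counter

def uncertainty_J(table):
    """J(T) = number of rows of T minus the number of rows with T's most common decision."""
    counts = Counter(row[-1] for row in table)
    return sum(counts.values()) - max(counts.values())

# Toy decision table: one attribute value followed by the decision.
T = [(0, "yes"), (1, "yes"), (2, "no"), (3, "yes")]
gamma = 1
print(uncertainty_J(T), uncertainty_J(T) <= gamma)   # J(T)=1 <= gamma, so the subtable is terminal
```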

  15. Optimization of approximate decision rules relative to number of misclassifications

    KAUST Repository

    Amin, Talha

    2012-12-01

    In this paper, we study an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to the number of misclassifications. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T). Based on this graph, we can describe the whole set of so-called irredundant γ-decision rules. We can optimize rules from this set according to the number of misclassifications. Results of experiments with decision tables from the UCI Machine Learning Repository are presented. © 2012 The authors and IOS Press. All rights reserved.

  16. From rule to response: neuronal processes in the premotor and prefrontal cortex.

    Science.gov (United States)

    Wallis, Jonathan D; Miller, Earl K

    2003-09-01

    The ability to use abstract rules or principles allows behavior to generalize from specific circumstances (e.g., rules learned in a specific restaurant can subsequently be applied to any dining experience). Neurons in the prefrontal cortex (PFC) encode such rules. However, to guide behavior, rules must be linked to motor responses. We investigated the neuronal mechanisms underlying this process by recording from the PFC and the premotor cortex (PMC) of monkeys trained to use two abstract rules: "same" or "different." The monkeys had to either hold or release a lever, depending on whether two successively presented pictures were the same or different, and depending on which rule was in effect. The abstract rules were represented in both regions, although they were more prevalent and were encoded earlier and more strongly in the PMC. There was a perceptual bias in the PFC, relative to the PMC, with more PFC neurons encoding the presented pictures. In contrast, neurons encoding the behavioral response were more prevalent in the PMC, and the selectivity was stronger and appeared earlier in the PMC than in the PFC.

  17. Input and Age-Dependent Variation in Second Language Learning: A Connectionist Account.

    Science.gov (United States)

    Janciauskas, Marius; Chang, Franklin

    2017-07-26

    Language learning requires linguistic input, but several studies have found that knowledge of second language (L2) rules does not seem to improve with more language exposure (e.g., Johnson & Newport, 1989). One reason for this is that previous studies did not factor out variation due to the different rules tested. To examine this issue, we reanalyzed grammaticality judgment scores in Flege, Yeni-Komshian, and Liu's (1999) study of L2 learners using rule-related predictors and found that, in addition to the overall drop in performance due to a sensitive period, L2 knowledge increased with years of input. Knowledge of different grammar rules was negatively associated with input frequency of those rules. To better understand these effects, we modeled the results using a connectionist model that was trained using Korean as a first language (L1) and then English as an L2. To explain the sensitive period in L2 learning, the model's learning rate was reduced in an age-related manner. By assigning different learning rates for syntax and lexical learning, we were able to model the difference between early and late L2 learners in input sensitivity. The model's learning mechanism allowed transfer between the L1 and L2, and this helped to explain the differences between different rules in the grammaticality judgment task. This work demonstrates that an L1 model of learning and processing can be adapted to provide an explicit account of how the input and the sensitive period interact in L2 learning. © 2017 The Authors. Cognitive Science - A Multidisciplinary Journal published by Wiley Periodicals, Inc.

  18. Machine learning with quantum relative entropy

    International Nuclear Information System (INIS)

    Tsuda, Koji

    2009-01-01

    Density matrices are a central tool in quantum physics, but they are also used in machine learning. A positive definite matrix called the kernel matrix is used to represent the similarities between examples. Positive definiteness assures that the examples are embedded in a Euclidean space. When a positive definite matrix is learned from data, one has to design an update rule that maintains positive definiteness. Our update rule, called the matrix exponentiated gradient update, is motivated by the quantum relative entropy. Notably, the relative entropy is an instance of the Bregman divergences, which are asymmetric distance measures specifying theoretical properties of machine learning algorithms. Using the calculus commonly used in quantum physics, we prove an upper bound on the generalization error of online learning.
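
    The matrix exponentiated gradient update mentioned above keeps the learned matrix symmetric positive definite by taking the gradient step in the matrix-logarithm domain, W ← exp(log W − η G), followed by trace normalization. The sketch below uses an invented symmetric loss gradient purely to show the shape of the update; it is not the paper's experimental setup.

```python
import numpy as np

def sym_logm(W):
    vals, vecs = np.linalg.eigh(W)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def sym_expm(S):
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.exp(vals)) @ vecs.T

def meg_update(W, grad, eta=0.1):
    """Matrix exponentiated gradient step: the result stays symmetric PSD with unit trace."""
    G = 0.5 * (grad + grad.T)                 # symmetrize the loss gradient
    W_new = sym_expm(sym_logm(W) - eta * G)
    return W_new / np.trace(W_new)            # renormalize to a unit-trace (density) matrix

# Start from the maximally mixed density matrix and apply one illustrative step.
W = np.eye(3) / 3.0
grad = np.array([[0.20, 0.10, 0.00],
                 [0.10, -0.30, 0.05],
                 [0.00, 0.05, 0.10]])
W = meg_update(W, grad)
print(np.round(W, 3), round(float(np.trace(W)), 3))
```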

  19. Machine learning with quantum relative entropy

    Energy Technology Data Exchange (ETDEWEB)

    Tsuda, Koji [Max Planck Institute for Biological Cybernetics, Spemannstr. 38, Tuebingen, 72076 (Germany)], E-mail: koji.tsuda@tuebingen.mpg.de

    2009-12-01

    Density matrices are a central tool in quantum physics, but they are also used in machine learning. A positive definite matrix called the kernel matrix is used to represent the similarities between examples. Positive definiteness assures that the examples are embedded in a Euclidean space. When a positive definite matrix is learned from data, one has to design an update rule that maintains positive definiteness. Our update rule, called the matrix exponentiated gradient update, is motivated by the quantum relative entropy. Notably, the relative entropy is an instance of the Bregman divergences, which are asymmetric distance measures specifying theoretical properties of machine learning algorithms. Using the calculus commonly used in quantum physics, we prove an upper bound on the generalization error of online learning.

  20. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    Science.gov (United States)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimal user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis shows that HiRLiC compares favorably to other interpretable classifiers in the literature, in terms of both structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC has better generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime required to produce the thematic map was orders of magnitude lower than that of the competitors.

  1. Research on Fault Diagnosis Method Based on Rule Base Neural Network

    Directory of Open Access Journals (Sweden)

    Zheng Ni

    2017-01-01

    Full Text Available The relationship between fault phenomena and fault causes is generally nonlinear, which limits the accuracy of fault localization, and neural networks are effective at dealing with nonlinear problems. In order to improve the efficiency of uncertain fault diagnosis based on neural networks, a neural network fault diagnosis method based on a rule base is put forward. First, the structure of a BP neural network is built and its learning rule is given. Then, the rule base is built using fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculation methods for the node function and the membership function are also given. Simulation results confirm the effectiveness of this method.

  2. Memory formation in reversal learning of the honeybee

    Directory of Open Access Journals (Sweden)

    Ravit Hadar

    2010-12-01

    Full Text Available In reversal learning, animals are first trained with a differential learning protocol, where they learn to respond to a reinforced odor (CS+) and not to respond to a nonreinforced odor (CS-). Once they respond correctly to this rule, the contingencies of the conditioned stimuli are reversed, and the animals learn to adjust their response to the new rule. This study investigated the effect of a protein synthesis inhibitor (emetine) on the memory formed after reversal learning in the honeybee Apis mellifera. Two groups of bees were studied, summer bees and winter bees, and each yielded different results. Blocking protein synthesis in summer bees inhibited consolidation of the excitatory learning following reversal learning, whereas it blocked the consolidation of the inhibitory learning in winter bees. These findings suggest that excitatory and inhibitory learning may involve different molecular processes in bees, which are seasonally dependent.

  3. Inductive learning of thyroid functional states using the ID3 algorithm. The effect of poor examples on the learning result.

    Science.gov (United States)

    Forsström, J

    1992-01-01

    The ID3 algorithm for inductive learning was tested using preclassified material for patients suspected of having a thyroid illness. Classification followed a rule-based expert system for the diagnosis of thyroid function. Thus, the knowledge to be learned was limited to the rules existing in the knowledge base of that expert system. The learning capability of the ID3 algorithm was tested with unselected learning material (with some inherent missing data) and with selected learning material (no missing data). The selected learning material was a subgroup forming part of the unselected learning material. When the number of learning cases was increased, the accuracy of the program improved. When the learning material was large enough, a further increase in the learning material did not improve the results. A better learning result was achieved with the selected learning material without missing data than with the unselected learning material. With this material we demonstrate a weakness in the ID3 algorithm: it cannot extract the information available in good example cases if poor examples are added to the data.
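
    The attribute-selection step at the heart of ID3, choosing the attribute with the highest information gain, can be sketched as follows; the toy thyroid-style table is invented, and the sketch omits the recursive tree construction that full ID3 performs.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr):
    """Class entropy minus the expected class entropy after splitting on attribute attr."""
    base = entropy([r[-1] for r in rows])
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[-1] for r in rows if r[attr] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

# Invented toy table: (TSH level, T4 level, diagnosis). ID3 would split first on the
# attribute with the larger information gain.
rows = [
    ("high", "low", "hypothyroid"),
    ("high", "normal", "hypothyroid"),
    ("normal", "normal", "euthyroid"),
    ("low", "high", "hyperthyroid"),
]
print(information_gain(rows, 0), information_gain(rows, 1))
```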

  4. Business rules for creating process flexibility : Mapping RIF rules and BDI rules

    NARCIS (Netherlands)

    Gong, Y.; Overbeek, S.J.; Janssen, M.

    2011-01-01

    Business rules and software agents can be used for creating flexible business processes. The Rule Interchange Format (RIF) is a new W3C recommendation standard for exchanging rules among disparate systems. Yet, the impact that the introduction of RIF has on the design of flexible business processes

  5. Teaching versus enforcing game rules in preschoolers' peer interactions.

    Science.gov (United States)

    Köymen, Bahar; Schmidt, Marco F H; Rost, Loreen; Lieven, Elena; Tomasello, Michael

    2015-07-01

    Children use normative language in two key contexts: when teaching others and when enforcing social norms. We presented pairs of 3- and 5-year-old peers (N=192) with a sorting game in two experimental conditions (in addition to a third baseline condition). In the teaching condition, one child was knowledgeable, whereas the other child was ignorant and so in need of instruction. In the enforcement condition, children learned conflicting rules so that each child was making mistakes from the other's point of view. When teaching rules to an ignorant partner, both age groups used generic normative language ("Bunnies go here"). When enforcing rules on a rule-breaking partner, 3-year-olds used normative utterances that were not generic and aimed at correcting individual behavior ("No, this goes there"), whereas 5-year-olds again used generic normative language, perhaps because they discerned that instruction was needed in this case as well. Young children normatively correct peers differently depending on their assessment of what their wayward partners need to bring them back into line. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Rule-Based Event Processing and Reaction Rules

    Science.gov (United States)

    Paschke, Adrian; Kozlenkov, Alexander

    Reaction rules and event processing technologies play a key role in making business and IT / Internet infrastructures more agile and active. While event processing is concerned with detecting events from large event clouds or streams in almost real-time, reaction rules are concerned with the invocation of actions in response to events and actionable situations. They state the conditions under which actions must be taken. In the last decades various reaction rule and event processing approaches have been developed, which for the most part have been advanced separately. In this paper we survey reaction rule approaches and rule-based event processing systems and languages.

  7. Learning in the machine: The symmetries of the deep learning channel.

    Science.gov (United States)

    Baldi, Pierre; Sadowski, Peter; Lu, Zhiqin

    2017-11-01

    In a physical neural system, learning rules must be local both in space and time. In order for learning to occur, non-local information must be communicated to the deep synapses through a communication channel, the deep learning channel. We identify several possible architectures for this learning channel (Bidirectional, Conjoined, Twin, Distinct) and six symmetry challenges: (1) symmetry of architectures; (2) symmetry of weights; (3) symmetry of neurons; (4) symmetry of derivatives; (5) symmetry of processing; and (6) symmetry of learning rules. Random backpropagation (RBP) addresses the second and third symmetry, and some of its variations, such as skipped RBP (SRBP) address the first and the fourth symmetry. Here we address the last two desirable symmetries showing through simulations that they can be achieved and that the learning channel is particularly robust to symmetry variations. Specifically, random backpropagation and its variations can be performed with the same non-linear neurons used in the main input-output forward channel, and the connections in the learning channel can be adapted using the same algorithm used in the forward channel, removing the need for any specialized hardware in the learning channel. Finally, we provide mathematical results in simple cases showing that the learning equations in the forward and backward channels converge to fixed points, for almost any initial conditions. In symmetric architectures, if the weights in both channels are small at initialization, adaptation in both channels leads to weights that are essentially symmetric during and after learning. Biological connections are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
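
    Random backpropagation, referred to above, replaces the transposed forward weights in the error-propagation path with a fixed random matrix, removing the weight-symmetry requirement of standard backpropagation. The small regression example below is a generic illustration of that idea under assumed layer sizes and learning rate; it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: a one-hidden-layer network learns a random linear map.
X = rng.normal(size=(256, 10))
Y = X @ rng.normal(size=(10, 3))

W1 = 0.1 * rng.normal(size=(10, 20))
W2 = 0.1 * rng.normal(size=(20, 3))
B = rng.normal(size=(3, 20))           # fixed random feedback matrix replaces W2.T
lr = 0.05

for _ in range(1000):
    H = np.tanh(X @ W1)                # forward pass
    Y_hat = H @ W2
    E = Y_hat - Y                      # output error
    dH = (E @ B) * (1.0 - H ** 2)      # error routed backward through B, not W2.T
    W2 -= lr * H.T @ E / len(X)
    W1 -= lr * X.T @ dH / len(X)

print(float(np.mean(E ** 2)))          # mean squared training error after RBP updates
```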

  8. Debate as Encapsulated Conflict: Ruled Controversy as an Approach to Learning Conflict Management Skills.

    Science.gov (United States)

    Lee, David G.; Hensley, Carl Wayne

    Debate can provide a format for the development of communication skills to aid students in managing conflicts, because an understanding of rule-governed communication in conflict situations is invaluable in constructive conflict management. Since in debate procedural rules restrict discussion primarily to substantive and procedural topics, debate…

  9. Neuronal avalanches and learning

    Energy Technology Data Exchange (ETDEWEB)

    Arcangelis, Lucilla de, E-mail: dearcangelis@na.infn.it [Department of Information Engineering and CNISM, Second University of Naples, 81031 Aversa (Italy)

    2011-05-01

    Networks of living neurons represent one of the most fascinating systems in biology. While the physical and chemical mechanisms at the basis of the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity property of the network, namely the capacity to adapt and evolve depending on the level of activity. This plastic ability is believed, nowadays, to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently been shown to share features with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by a power-law distribution of size and duration, features found in other problems in the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibit universal features as a function of the strength of plastic adaptation. Any rule could be learned provided that the plastic adaptation is sufficiently slow.

  10. Neuronal avalanches and learning

    International Nuclear Information System (INIS)

    Arcangelis, Lucilla de

    2011-01-01

    Networks of living neurons represent one of the most fascinating systems in biology. While the physical and chemical mechanisms at the basis of the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity property of the network, namely the capacity to adapt and evolve depending on the level of activity. This plastic ability is believed, nowadays, to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently been shown to share features with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by a power-law distribution of size and duration, features found in other problems in the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibit universal features as a function of the strength of plastic adaptation. Any rule could be learned provided that the plastic adaptation is sufficiently slow.

  11. Fronto-parietal contributions to phonological processes in successful artificial grammar learning

    Directory of Open Access Journals (Sweden)

    Dariya Goranskaya

    2016-11-01

    Full Text Available Sensitivity to regularities plays a crucial role in the acquisition of various linguistic features from spoken language input. Artificial grammar (AG) learning paradigms explore pattern recognition abilities in a set of structured sequences (i.e., of syllables or letters). In the present study, we investigated the functional underpinnings of learning phonological regularities in auditorily presented syllable sequences. While previous neuroimaging studies either focused on functional differences between the processing of correct vs. incorrect sequences or between different levels of sequence complexity, here the focus is on the neural foundation of the actual learning success. During functional magnetic resonance imaging (fMRI), participants were exposed to a set of syllable sequences with an underlying phonological rule system, known to ensure performance differences between participants. We expected that successful learning and rule application would require phonological segmentation and phoneme comparison. As an outcome of four alternating learning and test fMRI sessions, participants split into successful learners and non-learners. Relative to non-learners, successful learners showed increased task-related activity in a fronto-parietal network of brain areas encompassing the left lateral premotor cortex as well as bilateral superior and inferior parietal cortices during both learning and rule application. These areas were previously associated with phonological segmentation, phoneme comparison and verbal working memory. Based on these activity patterns and the phonological strategies for rule acquisition and application, we argue that successful learning and processing of complex phonological rules in our paradigm is mediated via a fronto-parietal network for phonological processes.

  12. Mechanisms of rule acquisition and rule following in inductive reasoning.

    Science.gov (United States)

    Crescentini, Cristiano; Seyed-Allaei, Shima; De Pisapia, Nicola; Jovicich, Jorge; Amati, Daniele; Shallice, Tim

    2011-05-25

    Despite the recent interest in the neuroanatomy of inductive reasoning processes, the regional specificity within prefrontal cortex (PFC) for the different mechanisms involved in induction tasks remains to be determined. In this study, we used fMRI to investigate the contribution of PFC regions to rule acquisition (rule search and rule discovery) and rule following. Twenty-six healthy young adult participants were presented with a series of images of cards, each consisting of a set of circles numbered in sequence with one colored blue. Participants had to predict the position of the blue circle on the next card. The rules that had to be acquired pertained to the relationship among succeeding stimuli. Responses given by subjects were categorized in a series of phases either tapping rule acquisition (responses given up to and including rule discovery) or rule following (correct responses after rule acquisition). Mid-dorsolateral PFC (mid-DLPFC) was active during rule search and remained active until successful rule acquisition. By contrast, rule following was associated with activation in temporal, motor, and medial/anterior prefrontal cortex. Moreover, frontopolar cortex (FPC) was active throughout the rule acquisition and rule following phases before a rule became familiar. We attributed activation in mid-DLPFC to hypothesis generation and in FPC to integration of multiple separate inferences. The present study provides evidence that brain activation during inductive reasoning involves a complex network of frontal processes and that different subregions respond during rule acquisition and rule following phases.

  13. Learning Display Rules: The Socialization of Emotion Expression in Infancy.

    Science.gov (United States)

    Malatesta, Carol Zander; Haviland, Jeannette M.

    1982-01-01

    Develops a methodology for studying emotion socialization and examines the synchrony of mother and infant expressions to determine whether "instruction" in display rules is underway in early infancy and what the short-term effects of such instruction on infant expression might be. Sixty dyads were videotaped during play and reunion after brief…

  14. Phonological reduplication in sign language: rules rule

    Directory of Open Access Journals (Sweden)

    Iris eBerent

    2014-06-01

    Full Text Available Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX), a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.

  15. Adaptive structured dictionary learning for image fusion based on group-sparse-representation

    Science.gov (United States)

    Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei

    2018-04-01

    Dictionary learning is the key process in sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information or of the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively uses grouped sparse coefficients to merge the images. The dictionary learning algorithm needs no prior knowledge about any group structure of the dictionary; by using the characteristics of the dictionary in expressing the signal, it can automatically find the potential structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes activity-level judgements on the structure information when the images are merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform the others in terms of several objective evaluation metrics.
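
    The l1-norm maximum fusion rule can be stated compactly: sparse-code the corresponding patches of both source images over the shared dictionary, keep the coefficient vector with the larger l1 norm (taken as the higher activity level), and reconstruct the fused patch from it. The schematic below uses scikit-learn's SparseCoder with a random dictionary; the dictionary and patches are placeholders, not the learned group-structured dictionary of the paper.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(3)

# Placeholder dictionary: 64 unit-norm atoms for flattened 8x8 patches.
n_atoms, patch_dim = 64, 64
D = rng.normal(size=(n_atoms, patch_dim))
D /= np.linalg.norm(D, axis=1, keepdims=True)
coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)

# Two co-registered source patches to be fused (random stand-ins).
patch_a = rng.normal(size=(1, patch_dim))
patch_b = rng.normal(size=(1, patch_dim))
code_a = coder.transform(patch_a)
code_b = coder.transform(patch_b)

# l1-norm maximum fusion rule: keep the more active code, then reconstruct the patch.
fused_code = code_a if np.abs(code_a).sum() >= np.abs(code_b).sum() else code_b
fused_patch = fused_code @ D
print(fused_patch.shape)
```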

  16. 18 CFR 385.104 - Rule of construction (Rule 104).

    Science.gov (United States)

    2010-04-01

    ... Definitions § 385.104 Rule of construction (Rule 104). To the extent that the text of a rule is inconsistent with its caption, the text of the rule controls. [Order 376, 49 FR 21705, May 23, 1984] ...

  17. Transfer between local and global processing levels by pigeons (Columba livia) and humans (Homo sapiens) in exemplar- and rule-based categorization tasks.

    Science.gov (United States)

    Aust, Ulrike; Braunöder, Elisabeth

    2015-02-01

    The present experiment investigated pigeons' and humans' processing styles-local or global-in an exemplar-based visual categorization task in which category membership of every stimulus had to be learned individually, and in a rule-based task in which category membership was defined by a perceptual rule. Group Intact was trained with the original pictures (providing both intact local and global information), Group Scrambled was trained with scrambled versions of the same pictures (impairing global information), and Group Blurred was trained with blurred versions (impairing local information). Subsequently, all subjects were tested for transfer to the 2 untrained presentation modes. Humans outperformed pigeons regarding learning speed and accuracy as well as transfer performance and showed good learning irrespective of group assignment, whereas the pigeons of Group Blurred needed longer to learn the training tasks than the pigeons of Groups Intact and Scrambled. Also, whereas humans generalized equally well to any novel presentation mode, pigeons' transfer from and to blurred stimuli was impaired. Both species showed faster learning and, for the most part, better transfer in the rule-based than in the exemplar-based task, but there was no evidence of the used processing mode depending on the type of task (exemplar- or rule-based). Whereas pigeons relied on local information throughout, humans did not show a preference for either processing level. Additional tests with grayscale versions of the training stimuli, with versions that were both blurred and scrambled, and with novel instances of the rule-based task confirmed and further extended these findings. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  18. Knowledge Representation and Reasoning in Personalized Web-Based e-Learning Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2006-01-01

    Adaptation that is so natural for teaching by humans is a challenging issue for electronic learning tools. Adaptation in classic teaching is based on observations made about students during teaching. A similar idea was employed in user-adapted (personalized) eLearning applications. Knowledge about a user inferred from user interactions with the eLearning system is used to adapt offered learning resources and guide a learner through them. This keynote gives an overview of the knowledge and rules taken into account in current adaptive eLearning prototypes when adapting learning instructions. Adaptation is usually based on knowledge about learning resources and users. Rules are used as heuristics to match the learning resources with learners and to infer adaptation decisions.

  19. FeynRules - Feynman rules made easy

    OpenAIRE

    Christensen, Neil D.; Duhr, Claude

    2008-01-01

    In this paper we present FeynRules, a new Mathematica package that facilitates the implementation of new particle physics models. After the user implements the basic model information (e.g. particle content, parameters and Lagrangian), FeynRules derives the Feynman rules and stores them in a generic form suitable for translation to any Feynman diagram calculation program. The model can then be translated to the format specific to a particular Feynman diagram calculator via F...

  20. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social
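
    For readers unfamiliar with the coding-theory terms used above, a generic illustration of a three-bit repetition code and single bit-flip correction (standard textbook material, not the authors' ciliate model):

        def encode(bit):
            # Repeat each message bit three times.
            return [bit, bit, bit]

        def decode(reply):
            # Majority vote corrects any single bit-flip error.
            return int(sum(reply) >= 2)

        codeword = encode(1)       # [1, 1, 1]
        corrupted = [1, 0, 1]      # a single bit flip
        assert decode(corrupted) == 1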

  1. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    Directory of Open Access Journals (Sweden)

    Kevin Bradley Clark

    2013-08-01

    Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in

  2. Geometrical methods in learning theory

    International Nuclear Information System (INIS)

    Burdet, G.; Combe, Ph.; Nencka, H.

    2001-01-01

    The methods of information theory provide natural approaches to learning algorithms in the case of stochastic formal neural networks. Most of the classical techniques are based on some extremization principle. A geometrical interpretation of the associated algorithms provides a powerful tool for understanding the learning process and its stability and offers a framework for discussing possible new learning rules. An illustration is given using sequential and parallel learning in the Boltzmann machine

  3. Optimization of decision rules based on dynamic programming approach

    KAUST Repository

    Zielosko, Beata

    2014-01-14

    This chapter is devoted to the study of an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure defined as the number of rows in a given decision table minus the number of rows labeled with the most common decision for that table, divided by the total number of rows in the table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δ_γ(T) whose nodes are subtables of the decision table T given by pairs "attribute = value". The algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2014 Springer International Publishing Switzerland.
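
    A small sketch of the uncertainty measure and the stopping test described above, following the stated definition (function and variable names are illustrative):

        from collections import Counter

        def uncertainty(decisions):
            """Fraction of rows not labeled with the most common decision."""
            n = len(decisions)
            most_common_count = Counter(decisions).most_common(1)[0][1]
            return (n - most_common_count) / n

        def is_terminal(decisions, gamma):
            # Partitioning of a subtable stops once its uncertainty is at most gamma.
            return uncertainty(decisions) <= gamma

        print(uncertainty(['a', 'a', 'a', 'b']))   # 0.25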

  4. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.

    Science.gov (United States)

    Neftci, Emre O; Augustine, Charles; Paul, Somnath; Detorakis, Georgios

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
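
    A highly simplified, rate-based sketch of the random-feedback, error-modulated idea behind eRBP; the event-driven spiking implementation with two-compartment I&F neurons is not reproduced, and the matrix shapes, gating bounds, and learning rate below are assumptions made only for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid, n_out = 20, 10, 3
        W1 = rng.normal(0, 0.1, (n_hid, n_in))    # learned forward weights
        W2 = rng.normal(0, 0.1, (n_out, n_hid))   # learned forward weights
        B = rng.normal(0, 0.1, (n_hid, n_out))    # fixed random feedback matrix
        lr, vmin, vmax = 1e-2, -2.0, 2.0

        x = rng.normal(size=n_in)                 # one input pattern
        target = np.eye(n_out)[0]                 # one-hot target

        for _ in range(100):
            a1 = W1 @ x                           # hidden activation ("membrane" state)
            h = np.tanh(a1)
            y = W2 @ h
            err = y - target                      # output error
            hid_err = B @ err                     # error routed back through fixed random weights
            gate = (a1 > vmin) & (a1 < vmax)      # boxcar gating, echoing eRBP's two comparisons
            W2 -= lr * np.outer(err, h)
            W1 -= lr * np.outer(hid_err * gate, x)

    The key departure from exact backpropagation is that the hidden-layer error uses the fixed random matrix B instead of the transpose of W2.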

  5. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines

    Directory of Open Access Journals (Sweden)

    Emre O. Neftci

    2017-06-01

    Full Text Available An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.

  6. Morvan's syndrome and the sustained absence of all sleep rhythms for months or years: An hypothesis.

    Science.gov (United States)

    Touzet, Claude

    2016-09-01

    Despite the predation costs, sleep is ubiquitous in the animal realm. Humans spend a third of their life sleeping, and the quality of sleep has been related to co-morbidity, Alzheimer disease, etc. Excessive wakefulness induces rapid changes in cognitive performances, and it is claimed that one could die of sleep deprivation as quickly as by absence of water. In this context, the fact that a few people are able to go without sleep for months, even years, without displaying any cognitive troubles requires explanations. Theories ascribing sleep to memory consolidation are unable to explain such observations. It is not the case of the theory of sleep as the hebbian reinforcement of the inhibitory synapses (ToS-HRIS). Hebbian learning (Long Term Depression - LTD) guarantees that an efficient inhibitory synapse will lose its efficiency just because it is efficient at avoiding the activation of the post-synaptic neuron. This erosion of the inhibition is replenished by hebbian learning (Long Term Potentiation - LTP) when pre and post-synaptic neurons are active together - which is exactly what happens with the travelling depolarization waves of the slow-wave sleep (SWS). The best documented cases of months-long insomnia are reports of patients with Morvan's syndrome. This syndrome has an autoimmune cause that impedes - among many things - the potassium channels of the post-synaptic neurons, increasing LTP and decreasing LTD. We hypothesize that the absence of inhibitory efficiency erosion during wakefulness (thanks to a decrease of inhibitory LTD) is the cause for an absence of slow-wave sleep (SWS), which results also in the absence of REM sleep. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Neuromorphic Deep Learning Machines

    OpenAIRE

    Neftci, E; Augustine, C; Paul, S; Detorakis, G

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide...

  8. Use of a Recursive-Rule eXtraction algorithm with J48graft to achieve highly accurate and concise rule extraction from a large breast cancer dataset

    Directory of Open Access Journals (Sweden)

    Yoichi Hayashi

    Full Text Available To assist physicians in the diagnosis of breast cancer and thereby improve survival, a highly accurate computer-aided diagnostic system is necessary. Although various machine learning and data mining approaches have been devised to increase diagnostic accuracy, most current methods are inadequate. The recently developed Recursive-Rule eXtraction (Re-RX) algorithm provides a hierarchical, recursive consideration of discrete variables prior to analysis of continuous data, and can generate classification rules that have been trained on the basis of both discrete and continuous attributes. The objective of this study was to extract highly accurate, concise, and interpretable classification rules for diagnosis using the Re-RX algorithm with J48graft, a class for generating a grafted C4.5 decision tree. We used the Wisconsin Breast Cancer Dataset (WBCD). Nine research groups provided 10 kinds of highly accurate concrete classification rules for the WBCD. We compared the accuracy and characteristics of the rule set for the WBCD generated using the Re-RX algorithm with J48graft with five rule sets obtained using 10-fold cross validation (CV). We trained the WBCD using the Re-RX algorithm with J48graft and obtained the average classification accuracies of 10 runs of 10-fold CV for the training and test datasets, the number of extracted rules, and the average number of antecedents for the WBCD. Compared with other rule extraction algorithms, the Re-RX algorithm with J48graft resulted in a lower average number of rules for diagnosing breast cancer, which is a substantial advantage. It also provided the lowest average number of antecedents per rule. These features are expected to greatly aid physicians in making accurate and concise diagnoses for patients with breast cancer. Keywords: Breast cancer diagnosis, Rule extraction, Re-RX algorithm, J48graft, C4.5

  9. An agent architecture with on-line learning of both procedural and declarative knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Sun, R.; Peterson, T.; Merrill, E. [Univ. of Alabama, Tuscaloosa, AL (United States)

    1996-12-31

    In order to develop versatile cognitive agents that learn in situated contexts and generalize resulting knowledge to different environments, we explore the possibility of learning both declarative and procedural knowledge in a hybrid connectionist architecture. The architecture is based on the two-level idea proposed earlier by the author. Declarative knowledge is represented symbolically, while procedural knowledge is represented subsymbolically. The architecture integrates reactive procedures, rules, learning, and decision-making in a unified framework, and structures different learning components (including Q-learning and rule induction) in a synergistic way to perform on-line and integrated learning.

  10. Brain signatures of early lexical and morphological learning of a new language.

    Science.gov (United States)

    Havas, Viktória; Laine, Matti; Rodríguez Fornells, Antoni

    2017-07-01

    Morphology is an important part of language processing but little is known about how adult second language learners acquire morphological rules. Using a word-picture associative learning task, we have previously shown that a brief exposure to novel words with embedded morphological structure (suffix for natural gender) is enough for language learners to acquire the hidden morphological rule. Here we used this paradigm to study the brain signatures of early morphological learning in a novel language in adults. Behavioural measures indicated successful lexical (word stem) and morphological (gender suffix) learning. A day after the learning phase, event-related brain potentials registered during a recognition memory task revealed enhanced N400 and P600 components for stem and suffix violations, respectively. An additional effect observed with combined suffix and stem violations was an enhancement of an early N2 component, most probably related to conflict-detection processes. Successful morphological learning was also evident in the ERP responses to the subsequent rule-generalization task with new stems, where violation of the morphological rule was associated with an early (250-400ms) and late positivity (750-900ms). Overall, these findings tend to converge with lexical and morphosyntactic violation effects observed in L1 processing, suggesting that even after a short exposure, adult language learners can acquire both novel words and novel morphological rules. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. New Safety rules

    CERN Multimedia

    Safety Commission

    2008-01-01

    The revision of CERN Safety rules is in progress and the following new Safety rules have been issued on 15-04-2008: Safety Procedure SP-R1 Establishing, Updating and Publishing CERN Safety rules: http://cern.ch/safety-rules/SP-R1.htm; Safety Regulation SR-S Smoking at CERN: http://cern.ch/safety-rules/SR-S.htm; Safety Regulation SR-M Mechanical Equipment: http://cern.ch/safety-rules/SR-M.htm; General Safety Instruction GSI-M1 Standard Lifting Equipment: http://cern.ch/safety-rules/GSI-M1.htm; General Safety Instruction GSI-M2 Standard Pressure Equipment: http://cern.ch/safety-rules/GSI-M2.htm; General Safety Instruction GSI-M3 Special Mechanical Equipment: http://cern.ch/safety-rules/GSI-M3.htm. These documents apply to all persons under the Director General’s authority. All Safety rules are available at the web page: http://www.cern.ch/safety-rules The Safety Commission

  12. The drift diffusion model as the choice rule in reinforcement learning.

    Science.gov (United States)

    Pedersen, Mads Lund; Frank, Michael J; Biele, Guido

    2017-08-01

    Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups.
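
    A minimal simulation sketch of the combined reinforcement-learning/drift-diffusion idea: Q-values are updated with a delta rule and their difference sets the drift rate of a diffusion process that yields both a choice and a response time. Parameter values and the linear drift mapping are illustrative assumptions, not the authors' fitted model:

        import numpy as np

        rng = np.random.default_rng(1)

        def ddm_trial(drift, threshold=1.0, dt=0.001, noise=1.0):
            """Simulate one drift-diffusion trial; return (choice, response time)."""
            x, t = 0.0, 0.0
            while abs(x) < threshold:
                x += drift * dt + noise * np.sqrt(dt) * rng.normal()
                t += dt
            return (0 if x > 0 else 1), t

        q = np.zeros(2)                 # learned option values
        alpha, scale = 0.1, 2.0         # learning rate, drift scaling
        p_reward = [0.8, 0.2]           # reward probabilities of the two options

        for trial in range(200):
            drift = scale * (q[0] - q[1])               # value difference drives the drift
            choice, rt = ddm_trial(drift)
            reward = float(rng.random() < p_reward[choice])
            q[choice] += alpha * (reward - q[choice])   # delta-rule value update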

  13. Rule-Governed Imitative Verbal Behavior as a Function of Modeling Procedures

    Science.gov (United States)

    Clinton, LeRoy; Boyce, Kathleen D.

    1975-01-01

    Investigated the effectiveness of modeling procedures alone and complemented by the appropriate rule statement on the production of plurals. Subjects were 20 normal and 20 retarded children who were randomly assigned to one of two learning conditions and who received either affective or informative social reinforcement. (Author/SDH)

  14. Online Pedagogical Tutorial Tactics Optimization Using Genetic-Based Reinforcement Learning.

    Science.gov (United States)

    Lin, Hsuan-Ta; Lee, Po-Ming; Hsiao, Tzu-Chien

    2015-01-01

    Tutorial tactics are policies for an Intelligent Tutoring System (ITS) to decide the next action when there are multiple actions available. Recent research has demonstrated that when the learning contents were controlled so as to be the same, different tutorial tactics would make a difference in students' learning gains. However, the Reinforcement Learning (RL) techniques that were used in previous studies to induce tutorial tactics are insufficient when encountering large problems and hence were used in an offline manner. Therefore, we introduced a Genetic-Based Reinforcement Learning (GBML) approach to induce tutorial tactics in an online-learning manner without relying on any preexisting dataset. The introduced method can learn a set of rules from the environment in a manner similar to RL. It includes a genetic-based optimizer for the rule discovery task, generating new rules from the old ones. This increases the scalability of an RL learner for larger problems. The results support our hypothesis about the capability of the GBML method to induce tutorial tactics. This suggests that the GBML method should be favorable in developing real-world ITS applications in the domain of tutorial tactics induction.

  15. Concurrent TMS to the primary motor cortex augments slow motor learning

    Science.gov (United States)

    Narayana, Shalini; Zhang, Wei; Rogers, William; Strickland, Casey; Franklin, Crystal; Lancaster, Jack L.; Fox, Peter T.

    2013-01-01

    Transcranial magnetic stimulation (TMS) has shown promise as a treatment tool, with one FDA approved use. While TMS alone is able to up- (or down-) regulate a targeted neural system, we argue that TMS applied as an adjuvant is more effective for repetitive physical, behavioral and cognitive therapies, that is, therapies which are designed to alter the network properties of neural systems through Hebbian learning. We tested this hypothesis in the context of a slow motor learning paradigm. Healthy right-handed individuals were assigned to receive 5 Hz TMS (TMS group) or sham TMS (sham group) to the right primary motor cortex (M1) as they performed daily motor practice of a digit sequence task with their non-dominant hand for 4 weeks. Resting cerebral blood flow (CBF) was measured by H2(15)O PET at baseline and after 4 weeks of practice. Sequence performance was measured daily as the number of correct sequences performed, and modeled using a hyperbolic function. Sequence performance increased significantly at 4 weeks relative to baseline in both groups. The TMS group had a significant additional improvement in performance, specifically, in the rate of skill acquisition. In both groups, an improvement in sequence timing and transfer of skills to non-trained motor domains was also found. Compared to the sham group, the TMS group demonstrated increases in resting CBF specifically in regions known to mediate skill learning namely, the M1, cingulate cortex, putamen, hippocampus, and cerebellum. These results indicate that TMS applied concomitantly augments behavioral effects of motor practice, with corresponding neural plasticity in motor sequence learning network. These findings are the first demonstration of the behavioral and neural enhancing effects of TMS on slow motor practice and have direct application in neurorehabilitation where TMS could be applied in conjunction with physical therapy. PMID:23867557
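
    The abstract models daily sequence performance with a hyperbolic function; a generic way to fit such a curve is sketched below. The exact parameterization used in the study is not reported here, so the two-parameter form is an assumption:

        import numpy as np
        from scipy.optimize import curve_fit

        def hyperbolic(day, p_max, half_day):
            # Performance rises toward p_max; half_day sets the acquisition rate.
            return p_max * day / (half_day + day)

        days = np.arange(1, 29)
        noise = np.random.default_rng(0).normal(0, 1.5, days.size)
        performance = hyperbolic(days, 40.0, 6.0) + noise      # synthetic data

        (p_max, half_day), _ = curve_fit(hyperbolic, days, performance, p0=[30.0, 5.0])
        print(f"estimated asymptote {p_max:.1f}, half-rise day {half_day:.1f}")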

  16. Game-Theoretic Learning in Distributed Control

    KAUST Repository

    Marden, Jason R.

    2018-01-05

    In distributed architecture control problems, there is a collection of interconnected decision-making components that seek to realize desirable collective behaviors through local interactions and by processing local information. Applications range from autonomous vehicles to energy to transportation. One approach to control of such distributed architectures is to view the components as players in a game. In this approach, two design considerations are the components’ incentives and the rules that dictate how components react to the decisions of other components. In game-theoretic language, the incentives are defined through utility functions, and the reaction rules are online learning dynamics. This chapter presents an overview of this approach, covering basic concepts in game theory, special game classes, measures of distributed efficiency, utility design, and online learning rules, all with the interpretation of using game theory as a prescriptive paradigm for distributed control design.

  17. Structure identification in fuzzy inference using reinforcement learning

    Science.gov (United States)

    Berenji, Hamid R.; Khedkar, Pratap

    1993-01-01

    In our previous work on the GARIC architecture, we have shown that the system can start with surface structure of the knowledge base (i.e., the linguistic expression of the rules) and learn the deep structure (i.e., the fuzzy membership functions of the labels used in the rules) by using reinforcement learning. Assuming the surface structure, GARIC refines the fuzzy membership functions used in the consequents of the rules using a gradient descent procedure. This hybrid fuzzy logic and reinforcement learning approach can learn to balance a cart-pole system and to backup a truck to its docking location after a few trials. In this paper, we discuss how to do structure identification using reinforcement learning in fuzzy inference systems. This involves identifying both surface as well as deep structure of the knowledge base. The term set of fuzzy linguistic labels used in describing the values of each control variable must be derived. In this process, splitting a label refers to creating new labels which are more granular than the original label and merging two labels creates a more general label. Splitting and merging of labels directly transform the structure of the action selection network used in GARIC by increasing or decreasing the number of hidden layer nodes.

  18. Rules and routines in organizations and the management of safety rules

    Energy Technology Data Exchange (ETDEWEB)

    Weichbrodt, J. Ch.

    2013-07-01

    This thesis is concerned with the relationship between rules and routines in organizations and how the former can be used to steer the latter. Rules are understood as formal organizational artifacts, whereas organizational routines are collective patterns of action. While research on routines has been thriving, a clear understanding of how rules can be used to influence or control organizational routines (and vice-versa) is still lacking. This question is of particular relevance to safety rules in high-risk organizations, where the way in which organizational routines unfold can ultimately be a matter of life and death. In these organizations, an important and related issue is the balancing of standardization and flexibility – which, in the case of rules, takes the form of finding the right degree of formalization. In high-risk organizations, the question is how to adequately regulate actors’ routines in order to facilitate safe behavior, while at the same time leaving enough leeway for actors to make good decisions in abnormal situations. The railroads are regarded as high-risk industries and also rely heavily on formal rules. In this thesis, the Swiss Federal Railways (SBB) were therefore selected for a field study on rules and routines. The issues outlined so far are being tackled theoretically (paper 1), empirically (paper 2), and from a practitioner’s (i.e., rule maker’s) point of view (paper 3). In paper 1, the relationship between rules and routines is theoretically conceptualized, based on a literature review. Literature on organizational control and coordination, on rules in human factors and safety, and on organizational routines is combined. Three distinct roles (rule maker, rule supervisor, and rule follower) are outlined. Six propositions are developed regarding the necessary characteristics of both routines and rules, the respective influence of the three roles on the rule-routine relationship, and regarding organizational aspects such as

  19. Rules and routines in organizations and the management of safety rules

    International Nuclear Information System (INIS)

    Weichbrodt, J. Ch.

    2013-01-01

    This thesis is concerned with the relationship between rules and routines in organizations and how the former can be used to steer the latter. Rules are understood as formal organizational artifacts, whereas organizational routines are collective patterns of action. While research on routines has been thriving, a clear understanding of how rules can be used to influence or control organizational routines (and vice-versa) is still lacking. This question is of particular relevance to safety rules in high-risk organizations, where the way in which organizational routines unfold can ultimately be a matter of life and death. In these organizations, an important and related issue is the balancing of standardization and flexibility – which, in the case of rules, takes the form of finding the right degree of formalization. In high-risk organizations, the question is how to adequately regulate actors’ routines in order to facilitate safe behavior, while at the same time leaving enough leeway for actors to make good decisions in abnormal situations. The railroads are regarded as high-risk industries and also rely heavily on formal rules. In this thesis, the Swiss Federal Railways (SBB) were therefore selected for a field study on rules and routines. The issues outlined so far are being tackled theoretically (paper 1), empirically (paper 2), and from a practitioner’s (i.e., rule maker’s) point of view (paper 3). In paper 1, the relationship between rules and routines is theoretically conceptualized, based on a literature review. Literature on organizational control and coordination, on rules in human factors and safety, and on organizational routines is combined. Three distinct roles (rule maker, rule supervisor, and rule follower) are outlined. Six propositions are developed regarding the necessary characteristics of both routines and rules, the respective influence of the three roles on the rule-routine relationship, and regarding organizational aspects such as

  20. System diagnostic builder: a rule-generation tool for expert systems that do intelligent data evaluation

    Science.gov (United States)

    Nieten, Joseph L.; Burke, Roger

    1993-03-01

    The system diagnostic builder (SDB) is an automated knowledge acquisition tool using state-of-the-art artificial intelligence (AI) technologies. The SDB uses an inductive machine learning technique to generate rules from data sets that are classified by a subject matter expert (SME). Thus, data is captured from the subject system, classified by an expert, and used to drive the rule generation process. These rule-bases are used to represent the observable behavior of the subject system, and to represent knowledge about this system. The rule-bases can be used in any knowledge-based system which monitors or controls a physical system or simulation. The SDB has demonstrated the utility of using inductive machine learning technology to generate reliable knowledge bases. In fact, we have discovered that the knowledge captured by the SDB can be used in any number of applications. For example, the knowledge bases captured from the SME can be used as black box simulations by intelligent computer aided training devices. We can also use the SDB to construct knowledge bases for the process control industry, such as chemical production, or oil and gas production. These knowledge bases can be used in automated advisory systems to ensure safety, productivity, and consistency.

  1. A neural learning classifier system with self-adaptive constructivism for mobile robot control.

    Science.gov (United States)

    Hurst, Jacob; Bull, Larry

    2006-01-01

    For artificial entities to achieve true autonomy and display complex lifelike behavior, they will need to exploit appropriate adaptable learning algorithms. In this context adaptability implies flexibility guided by the environment at any given time and an open-ended ability to learn appropriate behaviors. This article examines the use of constructivism-inspired mechanisms within a neural learning classifier system architecture that exploits parameter self-adaptation as an approach to realize such behavior. The system uses a rule structure in which each rule is represented by an artificial neural network. It is shown that appropriate internal rule complexity emerges during learning at a rate controlled by the learner and that the structure indicates underlying features of the task. Results are presented in simulated mazes before moving to a mobile robot platform.

  2. Perceptual learning rules based on reinforcers and attention

    NARCIS (Netherlands)

    Roelfsema, Pieter R.; van Ooyen, Arjen; Watanabe, Takeo

    2010-01-01

    How does the brain learn those visual features that are relevant for behavior? In this article, we focus on two factors that guide plasticity of visual representations. First, reinforcers cause the global release of diffusive neuromodulatory signals that gate plasticity. Second, attentional feedback

  3. Dissociation of Category-Learning Systems via Brain Potentials

    Directory of Open Access Journals (Sweden)

    Robert G Morrison

    2015-07-01

    Full Text Available Behavioral, neuropsychological, and neuroimaging evidence has suggested that categories can often be learned via either an explicit rule-based mechanism critically dependent on medial temporal and prefrontal brain regions, or via an implicit information-integration mechanism relying on the basal ganglia. In this study, participants viewed sine-wave gratings (i.e., Gabor patches) that varied on two dimensions and learned to categorize them via trial-by-trial feedback. Two different stimulus distributions were used; one was intended to encourage an explicit rule-based process and the other an implicit information-integration process. We monitored brain activity with scalp electroencephalography (EEG) while each participant (1) passively observed stimuli representative of both distributions, (2) categorized stimuli from one distribution, and, one week later, (3) categorized stimuli from the other distribution. Categorization accuracy was similar for the two distributions. Subtractions of Event-Related Potentials (ERPs) for correct and incorrect trials were used to identify neural differences in rule-based and information-integration categorization processes. We identified an occipital brain potential that was differentially modulated by categorization condition accuracy at an early latency (150-250 ms), likely reflecting the degree of holistic processing. A stimulus-locked late positive complex associated with explicit memory updating was modulated by accuracy in the rule-based, but not the information-integration task. Likewise, a feedback-locked P300 ERP associated with expectancy was correlated with performance only in the rule-based, but not the information-integration condition. These results provide additional evidence for distinct brain mechanisms supporting rule-based versus implicit information-integration category learning and use.

  4. Exploration of SWRL Rule Bases through Visualization, Paraphrasing, and Categorization of Rules

    Science.gov (United States)

    Hassanpour, Saeed; O'Connor, Martin J.; Das, Amar K.

    Rule bases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rule bases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rule base. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rule bases, enabling their acquisition and management.

  5. Learning and transfer of category knowledge in an indirect categorization task.

    Science.gov (United States)

    Helie, Sebastien; Ashby, F Gregory

    2012-05-01

    Knowledge representations acquired during category learning experiments are 'tuned' to the task goal. A useful paradigm to study category representations is indirect category learning. In the present article, we propose a new indirect categorization task called the "same"-"different" categorization task. The same-different categorization task is a regular same-different task, but the question asked to the participants is about the stimulus category membership instead of stimulus identity. Experiment 1 explores the possibility of indirectly learning rule-based and information-integration category structures using the new paradigm. The results suggest that there is little learning about the category structures resulting from an indirect categorization task unless the categories can be separated by a one-dimensional rule. Experiment 2 explores whether a category representation learned indirectly can be used in a direct classification task (and vice versa). The results suggest that previous categorical knowledge acquired during a direct classification task can be expressed in the same-different categorization task only when the categories can be separated by a rule that is easily verbalized. Implications of these results for categorization research are discussed.

  6. Learning-parameter adjustment in neural networks

    Science.gov (United States)

    Heskes, Tom M.; Kappen, Bert

    1992-06-01

    We present a learning-parameter adjustment algorithm, valid for a large class of learning rules in neural-network literature. The algorithm follows directly from a consideration of the statistics of the weights in the network. The characteristic behavior of the algorithm is calculated, both in a fixed and a changing environment. A simple example, Widrow-Hoff learning for statistical classification, serves as an illustration.
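
    The abstract's illustration is Widrow-Hoff learning; a minimal version of that rule is sketched below (the paper's statistics-based adjustment of the learning parameter itself is not reproduced, and the dimensions and learning rate are arbitrary):

        import numpy as np

        rng = np.random.default_rng(2)
        n_features, lr = 5, 0.05
        w = np.zeros(n_features)
        teacher = rng.normal(size=n_features)        # defines the target outputs

        for _ in range(1000):
            x = rng.normal(size=n_features)
            target = teacher @ x
            # Widrow-Hoff (LMS / delta) rule: step along the local error gradient.
            w += lr * (target - w @ x) * x

        print("remaining weight error:", np.linalg.norm(w - teacher))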

  7. Depression-Biased Reverse Plasticity Rule Is Required for Stable Learning at Top-Down Connections

    Science.gov (United States)

    Burbank, Kendra S.; Kreiman, Gabriel

    2012-01-01

    Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body. PMID:22396630
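
    A sketch of a depression-biased, temporally reversed STDP window of the kind described above: potentiation when the post-synaptic spike precedes the pre-synaptic spike, depression otherwise, with a larger depression amplitude. Time constants and amplitudes are illustrative, not the values used in the paper:

        import numpy as np

        def rstdp(dt, a_plus=0.8, a_minus=1.0, tau=20.0):
            """Weight change for a pre/post spike pair at a top-down synapse.

            dt = t_pre - t_post in ms. Positive dt means the post-synaptic
            spike came first, which the reversed rule rewards with LTP;
            a_minus > a_plus makes the rule depression-biased.
            """
            if dt > 0:
                return a_plus * np.exp(-dt / tau)    # post before pre -> potentiation
            return -a_minus * np.exp(dt / tau)       # pre before post -> depression

        for dt in (-20, -5, 5, 20):
            print(dt, round(rstdp(dt), 3))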

  8. Depression-biased reverse plasticity rule is required for stable learning at top-down connections.

    Directory of Open Access Journals (Sweden)

    Kendra S Burbank

    Full Text Available Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body.

  9. Learning and inference in a nonequilibrium Ising model with hidden nodes.

    Science.gov (United States)

    Dunn, Benjamin; Roudi, Yasser

    2013-02-01

    We study inference and reconstruction of couplings in a partially observed kinetic Ising model. With hidden spins, calculating the likelihood of a sequence of observed spin configurations requires performing a trace over the configurations of the hidden ones. This, as we show, can be represented as a path integral. Using this representation, we demonstrate that systematic approximate inference and learning rules can be derived using dynamical mean-field theory. Although naive mean-field theory leads to an unstable learning rule, taking into account Gaussian corrections allows learning the couplings involving hidden nodes. It also improves learning of the couplings between the observed nodes compared to when hidden nodes are ignored.
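
    For orientation, the fully observed case has a simple learning rule: under parallel Glauber dynamics the log-likelihood gradient with respect to a coupling J_ij is the sum over time of [s_i(t+1) - tanh(sum_k J_ik s_k(t))] s_j(t). The sketch below implements only this observed-spins baseline; the paper's treatment of hidden spins via the path-integral representation and dynamical mean-field theory is not reproduced:

        import numpy as np

        rng = np.random.default_rng(3)
        n, T, lr = 10, 2000, 0.05
        J_true = rng.normal(0, 1 / np.sqrt(n), (n, n))

        # Generate a spin history under parallel Glauber dynamics.
        s = np.ones((T, n))
        for t in range(T - 1):
            p_up = 0.5 * (1 + np.tanh(J_true @ s[t]))
            s[t + 1] = np.where(rng.random(n) < p_up, 1.0, -1.0)

        # Gradient ascent on the log-likelihood of the observed history.
        J = np.zeros((n, n))
        for _ in range(200):
            pred = np.tanh(s[:-1] @ J.T)                    # model's expected next spins
            grad = (s[1:] - pred).T @ s[:-1] / (T - 1)
            J += lr * grad

        print("mean absolute coupling error:", np.abs(J - J_true).mean())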

  10. Australian road rules

    Science.gov (United States)

    2009-02-01

    *These are national-level rules. Australian Road Rules - 2009 Version, Part 18, Division 1, Rule 300 "Use of Mobile Phones" describes restrictions of mobile phone use while driving. The rule basically states that drivers cannot make or receive calls ...

  11. Surrogate-Assisted Genetic Programming With Simplified Models for Automated Design of Dispatching Rules.

    Science.gov (United States)

    Nguyen, Su; Zhang, Mengjie; Tan, Kay Chen

    2017-09-01

    Automated design of dispatching rules for production systems has been an interesting research topic over the last several years. Machine learning, especially genetic programming (GP), has been a powerful approach to dealing with this design problem. However, intensive computational requirements, accuracy and interpretability are still its limitations. This paper aims at developing a new surrogate assisted GP to help improving the quality of the evolved rules without significant computational costs. The experiments have verified the effectiveness and efficiency of the proposed algorithms as compared to those in the literature. Furthermore, new simplification and visualisation approaches have also been developed to improve the interpretability of the evolved rules. These approaches have shown great potentials and proved to be a critical part of the automated design system.

  12. Comparing Product Category Rules from Different Programs: Learned Outcomes Towards Global Alignment

    Science.gov (United States)

    Purpose Product category rules (PCRs) provide category-specific guidance for estimating and reporting product life cycle environmental impacts, typically in the form of environmental product declarations and product carbon footprints. Lack of global harmonization between PCRs or ...

  13. Rule Versus the Causality Rule in Insurance Law

    DEFF Research Database (Denmark)

    Lando, Henrik

    When the Buyer of insurance has negligently kept silent or misrepresented a (material) fact to the Seller, one of two rules will determine the extent to which cover will consequently be reduced. The pro-rata rule lowers cover in proportion to how much the Seller would have increased the premium had he been correctly informed; the causality rule provides either zero cover if the omitted fact has caused the insurance event, or full cover if the event would have occurred regardless of the fact. This article explores which rule is more efficient. Using the framework proposed by Picard and Dixit ... it subjects the risk-averse Buyer of insurance to less variance. This implies that the pro rata rule should apply when there is significant risk for a Buyer of unintentional misrepresentation, and when the incentive to intentionally misrepresent can be curtailed through frequent verification of the Buyer...

  14. On-line learning from clustered input examples

    NARCIS (Netherlands)

    Riegler, Peter; Biehl, Michael; Solla, Sara A.; Marangi, Carmela; Marinaro, Maria; Tagliaferri, Roberto

    1996-01-01

    We analyse on-line learning of a linearly separable rule with a simple perceptron. Example inputs are taken from two overlapping clusters of data and the rule is defined through a teacher vector which is in general not aligned with the connection line of the cluster centers. We find that the Hebb
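
    A small sketch of the setting described above: inputs drawn from two overlapping Gaussian clusters, labels given by a teacher vector that is not aligned with the line joining the cluster centers, and plain Hebbian on-line learning of a student perceptron. The dimension, number of examples, and learning step are arbitrary choices for illustration:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 50                                          # input dimension
        teacher = rng.normal(size=n)
        teacher /= np.linalg.norm(teacher)
        center = rng.normal(size=n)                     # +/- center are the two cluster centers
        center /= np.linalg.norm(center)

        w = np.zeros(n)
        for step in range(5000):
            cluster = rng.choice([-1.0, 1.0])
            x = cluster * center + rng.normal(size=n)   # example drawn from one cluster
            label = np.sign(teacher @ x)                # rule defined by the teacher vector
            w += label * x / n                          # Hebb-like on-line update

        overlap = (w @ teacher) / (np.linalg.norm(w) * np.linalg.norm(teacher))
        print("student-teacher overlap:", round(overlap, 3))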

  15. Fuzzy self-learning control for magnetic servo system

    Science.gov (United States)

    Tarn, J. H.; Kuo, L. T.; Juang, K. Y.; Lin, C. E.

    1994-01-01

    It is known that an effective control system is the key condition for successful implementation of high-performance magnetic servo systems. Major issues to design such control systems are nonlinearity; unmodeled dynamics, such as secondary effects for copper resistance, stray fields, and saturation; and that disturbance rejection for the load effect reacts directly on the servo system without transmission elements. One typical approach to design control systems under these conditions is a special type of nonlinear feedback called gain scheduling. It accommodates linear regulators whose parameters are changed as a function of operating conditions in a preprogrammed way. In this paper, an on-line learning fuzzy control strategy is proposed. To inherit the wealth of linear control design, the relations between linear feedback and fuzzy logic controllers have been established. The exercise of engineering axioms of linear control design is thus transformed into tuning of appropriate fuzzy parameters. Furthermore, fuzzy logic control brings the domain of candidate control laws from linear into nonlinear, and brings new prospects into design of the local controllers. On the other hand, a self-learning scheme is utilized to automatically tune the fuzzy rule base. It is based on network learning infrastructure; statistical approximation to assign credit; animal learning method to update the reinforcement map with a fast learning rate; and temporal difference predictive scheme to optimize the control laws. Different from supervised and statistical unsupervised learning schemes, the proposed method learns on-line from past experience and information from the process and forms a rule base of an FLC system from randomly assigned initial control rules.

  16. Learning and structure of neuronal networks

    Indian Academy of Sciences (India)

    We study the effect of learning dynamics on network topology. Firstly, a network of discrete dynamical systems is considered for this purpose and the coupling strengths are made to evolve according to a temporal learning rule that is based on the paradigm of spike-time-dependent plasticity (STDP). This incorporates ...

  17. Researches on Problems in College Students' Grammar Learning and Countermeasures

    Institute of Scientific and Technical Information of China (English)

    廖芳

    2016-01-01

    Grammar comprises the guiding rules of a language, and a good mastery of grammar is the basis of English learning. This paper starts from the problems in college students' current grammar learning and puts forward some strategies for improving their English grammar.

  18. Off-line learning from clustered input examples

    NARCIS (Netherlands)

    Marangi, Carmela; Solla, Sara A.; Biehl, Michael; Riegler, Peter; Marinaro, Maria; Tagliaferri, Roberto

    1996-01-01

    We analyze the generalization ability of a simple perceptron acting on a structured input distribution for the simple case of two clusters of input data and a linearly separable rule. The generalization ability computed for three learning scenarios: maximal stability, Gibbs, and optimal learning, is

  19. Can power spectrum observations rule out slow-roll inflation?

    Science.gov (United States)

    Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony

    2018-01-01

    The spectral index of scalar perturbations is an important observable that allows us to learn about inflationary physics. In particular, a detection of a significant deviation from a constant spectral index could enable us to rule out the simplest class of inflation models. We investigate whether future observations could rule out canonical single-field slow-roll inflation given the parameters allowed by current observational constraints. We find that future measurements of a constant running (or running of the running) of the spectral index over currently available scales are unlikely to achieve this. However, there remains a large region of parameter space (especially when considering the running of the running) for falsifying the assumed class of slow-roll models if future observations accurately constrain a much wider range of scales.
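
    As a reminder of the notation behind these observables (the standard convention, not a result from the paper), the scalar power spectrum is usually expanded about a pivot scale k_* as

        \ln P_s(k) = \ln A_s + (n_s - 1)\,\ln\frac{k}{k_*}
                     + \frac{\alpha_s}{2}\,\ln^2\frac{k}{k_*}
                     + \frac{\beta_s}{6}\,\ln^3\frac{k}{k_*},
        \qquad
        \alpha_s \equiv \frac{\mathrm{d} n_s}{\mathrm{d}\ln k},
        \qquad
        \beta_s \equiv \frac{\mathrm{d}^2 n_s}{\mathrm{d}(\ln k)^2},

    where alpha_s is the running and beta_s the running of the running of the spectral index.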

  20. Use of the recursive-rule extraction algorithm with continuous attributes to improve diagnostic accuracy in thyroid disease

    Directory of Open Access Journals (Sweden)

    Yoichi Hayashi

    Full Text Available Thyroid diseases, which often lead to thyroid dysfunction involving either hypo- or hyperthyroidism, affect hundreds of millions of people worldwide, many of whom remain undiagnosed; however, diagnosis is difficult because symptoms are similar to those seen in a number of other conditions. The objective of this study was to assess the effectiveness of the Recursive-Rule Extraction (Re-RX) algorithm with continuous attributes (Continuous Re-RX) in extracting highly accurate, concise, and interpretable classification rules for the diagnosis of thyroid disease. We used the 7200-sample Thyroid dataset from the University of California Irvine Machine Learning Repository, a large and highly imbalanced dataset that comprises both discrete and continuous attributes. We trained the dataset using Continuous Re-RX, and after obtaining the maximum training and test accuracies, the number of extracted rules, and the average number of antecedents, we compared the results with those of other extraction methods. Our results suggested that Continuous Re-RX not only achieved the highest accuracy for diagnosing thyroid disease compared with the other methods, but also provided simple, concise, and interpretable rules. Based on these results, we believe that the use of Continuous Re-RX in machine learning may assist healthcare professionals in the diagnosis of thyroid disease. Keywords: Thyroid disease diagnosis, Re-RX algorithm, Rule extraction, Decision tree

  1. Exploring Phonetic Realization in Danish by Transformation-Based Learning

    DEFF Research Database (Denmark)

    Uneson, Marcus; Schachtenhaufen, Ruben

    2011-01-01

    We align phonemic and semi-narrow phonetic transcriptions in the DanPASS corpus and extend the phonemic description with sound classes and with traditional phonetic features. From this representation, we induce rules for phonetic realization by Transformation-Based Learning (TBL). The rules thus ...

  2. A Computational Agent Model for Hebbian Learning of Social Interaction

    NARCIS (Netherlands)

    Treur, J.

    2011-01-01

    In social interaction between two persons usually a person displays understanding of the other person. This may involve both nonverbal and verbal elements, such as bodily expressing a similar emotion and verbally expressing beliefs about the other person. Such social interaction relates to an

  3. The Use of Hebbian Cell Assemblies for Nonlinear Computation

    DEFF Research Database (Denmark)

    Tetzlaff, Christian; Dasgupta, Sakyasingha; Kulvicius, Tomas

    2015-01-01

    When learning a complex task our nervous system self-organizes large groups of neurons into coherent dynamic activity patterns. During this, a network with multiple, simultaneously active, and computationally powerful cell assemblies is created. How such ordered structures are formed while preser...... computing complex non-linear transforms and - for execution - must cooperate with each other without interference. This mechanism, thus, permits the self-organization of computationally powerful sub-structures in dynamic networks for behavior control....

  4. Presynaptic Ionotropic Receptors Controlling and Modulating the Rules for Spike Timing-Dependent Plasticity

    Directory of Open Access Journals (Sweden)

    Matthijs B. Verhoog

    2011-01-01

    Full Text Available Throughout life, activity-dependent changes in neuronal connection strength enable the brain to refine neural circuits and learn based on experience. In line with predictions made by Hebb, synapse strength can be modified depending on the millisecond timing of action potential firing (STDP). The sign of synaptic plasticity depends on the spike order of presynaptic and postsynaptic neurons. Ionotropic neurotransmitter receptors, such as NMDA receptors and nicotinic acetylcholine receptors, are intimately involved in setting the rules for synaptic strengthening and weakening. In addition, timing rules for STDP within synapses are not fixed. They can be altered by activation of ionotropic receptors located at, or close to, synapses. Here, we will highlight studies that uncovered how network actions control and modulate timing rules for STDP by activating presynaptic ionotropic receptors. Furthermore, we will discuss how interaction between different types of ionotropic receptors may create “timing” windows during which particular timing rules lead to synaptic changes.

  5. Design of fuzzy learning control systems for steam generator water level control

    International Nuclear Information System (INIS)

    Park, Gee Yong

    1996-02-01

    A fuzzy learning algorithm is developed in order to construct the useful control rules and tune the membership functions in the fuzzy logic controller used for water level control of nuclear steam generator. Fuzzy logic controllers have been shown to perform better than conventional controllers for ill-defined or complex processes such as the nuclear steam generator. Whereas the fuzzy logic controller does not need a detailed mathematical model of the plant to be controlled, its structure has to be built from the operator's linguistic knowledge gained during plant operation. This is not easy, and there is no systematic way to translate the operator's linguistic information into quantitative information. When the operators' linguistic information is incomplete, the parameters of the fuzzy controller have to be tuned for better control performance. Tuning the structure of the fuzzy logic controller for optimal performance is a time- and effort-consuming procedure for the controller designer. And if there are many control inputs and the rule base is constructed in a multidimensional space, it is very difficult for a controller designer to tune the fuzzy controller structure. Hence, the difficulty in putting the experimental knowledge into quantitative (or numerical) data and the difficulty in tuning the rules are the major problems in designing a fuzzy logic controller. In order to overcome the problems described above, a learning algorithm based on gradient descent is included in the fuzzy control system such that the membership functions are tuned and the necessary rules are created automatically for good control performance. For stable learning with the gradient descent method, the range of the learning coefficient that avoids both trapping and an excessively slow learning speed is investigated. Within this optimal range, an optimal value of the learning coefficient is suggested and with this value, the gradient

  6. Conditional discrimination learning: A critique and amplification

    OpenAIRE

    Schrier, Allan M.; Thompson, Claudia R.

    1980-01-01

    Carter and Werner recently reviewed the literature on conditional discrimination learning by pigeons, which consists of studies of matching-to-sample and oddity-from-sample. They also discussed three models of such learning: the “multiple-rule” model (learning of stimulus-specific relations), the “configuration” model, and the “single-rule” model (concept learning). Although their treatment of the multiple-rule model, which seems most applicable to the pigeon data, is generally excellent, the...

  7. Cultural Learning Redux.

    Science.gov (United States)

    Tomasello, Michael

    2016-05-01

    M. Tomasello, A. Kruger, and H. Ratner (1993) proposed a theory of cultural learning comprising imitative learning, instructed learning, and collaborative learning. Empirical and theoretical advances in the past 20 years suggest modifications to the theory; for example, children do not just imitate but overimitate in order to identify and affiliate with others in their cultural group, children learn from pedagogy not just episodic facts but the generic structure of their cultural worlds, and children collaboratively co-construct with those in their culture normative rules for doing things. In all, human children do not just culturally learn useful instrumental activities and information, they conform to the normative expectations of the cultural group and even contribute themselves to the creation of such normative expectations. © 2016 The Author. Child Development © 2016 Society for Research in Child Development, Inc.

  8. Module Six: Parallel Circuits; Basic Electricity and Electronics Individualized Learning System.

    Science.gov (United States)

    Bureau of Naval Personnel, Washington, DC.

    In this module the student will learn the rules that govern the characteristics of parallel circuits; the relationships between voltage, current, resistance and power; and the results of common troubles in parallel circuits. The module is divided into four lessons: rules of voltage and current, rules for resistance and power, variational analysis,…

  9. Delayed rule following

    OpenAIRE

    Schmitt, David R.

    2001-01-01

    Although the elements of a fully stated rule (discriminative stimulus [SD], some behavior, and a consequence) can occur nearly contemporaneously with the statement of the rule, there is often a delay between the rule statement and the SD. The effects of this delay on rule following have not been studied in behavior analysis, but they have been investigated in rule-like settings in the areas of prospective memory (remembering to do something in the future) and goal pursuit. Discriminative even...

  10. Robot Grasp Learning by Demonstration without Predefined Rules

    Directory of Open Access Journals (Sweden)

    César Fernández

    2011-12-01

Full Text Available A learning-based approach to autonomous robot grasping is presented. Pattern recognition techniques are used to measure the similarity between a set of previously stored example grasps and all the possible candidate grasps for a new object. Two sets of features are defined in order to characterize grasps: point attributes describe the surroundings of a contact point; point-set attributes describe the relationship between the set of n contact points (assuming an n-fingered robot gripper is used). In the experiments performed, the nearest neighbour classifier outperforms other approaches like multilayer perceptrons, radial basis functions or decision trees, in terms of classification accuracy, while computational load is not excessive for a real time application (a grasp is fully synthesized in 0.2 seconds). The results obtained on a synthetic database show that the proposed system is able to imitate the grasping behaviour of the user (e.g. the system learns to grasp a mug by its handle). All the code has been made available for testing purposes.
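A minimal sketch of the nearest-neighbour grasp selection idea, with made-up feature vectors standing in for the point and point-set attributes described above:

```python
import numpy as np

# 1-nearest-neighbour ranking of candidate grasps against stored examples.
# Feature values and labels are illustrative, not taken from the paper.
stored_grasps = np.array([
    # [finger spread, contact curvature, distance to centre of mass]
    [0.20, 0.80, 0.10],   # example grasp labelled "handle"
    [0.70, 0.30, 0.40],   # example grasp labelled "rim"
])
labels = ["handle", "rim"]

def rank_candidates(candidates):
    """Score each candidate by similarity (negative Euclidean distance)
    to its nearest stored example and return them best-first."""
    scored = []
    for cand in candidates:
        d = np.linalg.norm(stored_grasps - cand, axis=1)
        i = int(np.argmin(d))
        scored.append((-d[i], labels[i], cand))
    return sorted(scored, key=lambda s: s[0], reverse=True)

candidates = np.array([[0.25, 0.75, 0.12], [0.65, 0.35, 0.45]])
for score, label, cand in rank_candidates(candidates):
    print(label, round(-score, 3), cand)
```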

  11. Social inference and social anxiety: evidence of a fear-congruent self-referential learning bias.

    Science.gov (United States)

    Button, Katherine S; Browning, Michael; Munafò, Marcus R; Lewis, Glyn

    2012-12-01

    Fears of negative evaluation characterise social anxiety, and preferential processing of fear-relevant information is implicated in maintaining symptoms. Little is known, however, about the relationship between social anxiety and the process of inferring negative evaluation. The ability to use social information to learn what others think about one, referred to here as self-referential learning, is fundamental for effective social interaction. The aim of this research was to examine whether social anxiety is associated with self-referential learning. 102 Females with either high (n = 52) or low (n = 50) self-reported social anxiety completed a novel probabilistic social learning task. Using trial and error, the task required participants to learn two self-referential rules, 'I am liked' and 'I am disliked'. Participants across the sample were better at learning the positive rule 'I am liked' than the negative rule 'I am disliked', β = -6.4, 95% CI [-8.0, -4.7], p learning positive self-referential information was strongest in the lowest socially anxious and was abolished in the most symptomatic participants. Relative to the low group, the high anxiety group were better at learning they were disliked and worse at learning they were liked, social anxiety by rule interaction β = 3.6; 95% CI [+0.3, +7.0], p = 0.03. The specificity of the results to self-referential processing requires further research. Healthy individuals show a robust preference for learning that they are liked relative to disliked. This positive self-referential bias is reduced in social anxiety in a way that would be expected to exacerbate anxiety symptoms. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Implicit Procedural Learning in Fragile X and Down Syndrome

    Science.gov (United States)

    Bussy, G.; Charrin, E.; Brun, A.; Curie, A.; des Portes, V.

    2011-01-01

    Background: Procedural learning refers to rule-based motor skill learning and storage. It involves the cerebellum, striatum and motor areas of the frontal lobe network. Fragile X syndrome, which has been linked with anatomical abnormalities within the striatum, may result in implicit procedural learning deficit. Methods: To address this issue, a…

  13. Instructional control of reinforcement learning: A behavioral and neurocomputational investigation

    NARCIS (Netherlands)

    Doll, B.B.; Jacobs, W.J.; Sanfey, A.G.; Frank, M.J.

    2009-01-01

    Humans learn how to behave directly through environmental experience and indirectly through rules and instructions. Behavior analytic research has shown that instructions can control behavior, even when such behavior leads to sub-optimal outcomes (Hayes, S (Ed) 1989. Rule-governed behavior:

  14. Fuzzy gain scheduling of velocity PI controller with intelligent learning algorithm for reactor control

    International Nuclear Information System (INIS)

Dong Yun Kim; Poong Hyun Seong

    1997-01-01

In this research, we propose a fuzzy gain scheduler (FGS) with an intelligent learning algorithm for reactor control. In the proposed algorithm, the gradient descent method is used to generate the rule bases of a fuzzy algorithm by learning. These rule bases are obtained by minimizing an objective function, called a performance cost function. The objective of the FGS with an intelligent learning algorithm is to generate gains that minimize the error of the system. The proposed algorithm can reduce the time and effort required for obtaining the fuzzy rules through the intelligent learning function. It is applied to reactor control of a nuclear power plant (NPP), and the results are compared with those of a conventional PI controller with fixed gains. As a result, it is shown that the proposed algorithm is superior to the conventional PI controller. (author)

  15. Attentional Bias in Human Category Learning: The Case of Deep Learning.

    Science.gov (United States)

    Hanson, Catherine; Caglar, Leyla Roskan; Hanson, Stephen José

    2018-01-01

    Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories having separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories having integral features, which require consideration of all of the available features and integration of all the relevant category features satisfying the category rule (Garner, 1974). In contrast to humans, a single hidden layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This "failure" to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories with exemplars that have high feature complexity in contrast to the low dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated humanlike performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We were able to show a number of interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that using the same low dimensional stimuli, Deep Learning (DL), unlike BP but similar to humans, learns separable category structures more quickly than integral category structures
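A minimal sketch of the single-hidden-layer backpropagation network discussed above, trained on a toy two-feature category structure; the stimuli and category assignment are illustrative stand-ins, not the Shepard/Garner stimulus sets used in the study:

```python
import numpy as np

# One-hidden-layer backpropagation network on a toy "separable" category rule
# (category is determined by the first feature alone).
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # two binary feature dimensions
y = np.array([[0.], [0.], [1.], [1.]])                   # category = feature 1

W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(2000):
    h = sig(X @ W1 + b1)                 # hidden layer
    out = sig(h @ W2 + b2)               # output layer
    d_out = (out - y) * out * (1 - out)  # backpropagated output deltas
    d_h = (d_out @ W2.T) * h * (1 - h)   # hidden-layer deltas
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(np.round(out.ravel(), 2))  # approaches [0, 0, 1, 1]
```

Swapping the targets for an XOR-like assignment gives an "integral" structure that requires both features; in the studies cited above, plain BP learns both kinds about equally easily, which is the failure to show human-like attentional bias.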

  16. Attentional Bias in Human Category Learning: The Case of Deep Learning

    Directory of Open Access Journals (Sweden)

    Catherine Hanson

    2018-04-01

Full Text Available Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories having separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories having integral features, which require consideration of all of the available features and integration of all the relevant category features satisfying the category rule (Garner, 1974). In contrast to humans, a single hidden layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This “failure” to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories with exemplars that have high feature complexity in contrast to the low dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated humanlike performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We were able to show a number of interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that using the same low dimensional stimuli, Deep Learning (DL), unlike BP but similar to humans, learns separable category structures more quickly than integral category

  17. The Biological Basis of Learning and Individuality.

    Science.gov (United States)

    Kandel, Eric R.; Hawkins, Robert D.

    1992-01-01

    Describes the biological basis of learning and individuality. Presents an overview of recent discoveries that suggest learning engages a simple set of rules that modify the strength of connection between neurons in the brain. The changes are cited as playing an important role in making each individual unique. (MCO)

  18. Action Rules Mining

    CERN Document Server

    Dardzinska, Agnieszka

    2013-01-01

We are surrounded by data, numerical, categorical and otherwise, which must be analyzed and processed to convert it into information that instructs, answers or aids understanding and decision making. Data analysts in many disciplines, such as business, education or medicine, are frequently asked to analyze new data sets, which are often composed of numerous tables possessing different properties. They try to find completely new correlations between attributes and show new possibilities for users. Action rules mining discusses some data mining and knowledge discovery principles and then describes representative concepts, methods and algorithms connected with action rules. The author introduces the formal definition of an action rule, the notions of a simple association action rule and a representative action rule, and the cost of an association action rule, and gives a strategy for constructing simple association action rules of lowest cost. A new approach for generating action rules from datasets with numerical attributes...

  19. Mere exposure alters category learning of novel objects

    Directory of Open Access Journals (Sweden)

    Jonathan R Folstein

    2010-08-01

    Full Text Available We investigated how mere exposure to complex objects with correlated or uncorrelated object features affects later category learning of new objects not seen during exposure. Correlations among pre-exposed object dimensions influenced later category learning. Unlike other published studies, the collection of pre-exposed objects provided no information regarding the categories to be learned, ruling out unsupervised or incidental category learning during pre-exposure. Instead, results are interpreted with respect to statistical learning mechanisms, providing one of the first demonstrations of how statistical learning can influence visual object learning.

  20. Mere exposure alters category learning of novel objects.

    Science.gov (United States)

    Folstein, Jonathan R; Gauthier, Isabel; Palmeri, Thomas J

    2010-01-01

    We investigated how mere exposure to complex objects with correlated or uncorrelated object features affects later category learning of new objects not seen during exposure. Correlations among pre-exposed object dimensions influenced later category learning. Unlike other published studies, the collection of pre-exposed objects provided no information regarding the categories to be learned, ruling out unsupervised or incidental category learning during pre-exposure. Instead, results are interpreted with respect to statistical learning mechanisms, providing one of the first demonstrations of how statistical learning can influence visual object learning.

  1. Discovering rules for protein-ligand specificity using support vector inductive logic programming.

    Science.gov (United States)

    Kelley, Lawrence A; Shrimpton, Paul J; Muggleton, Stephen H; Sternberg, Michael J E

    2009-09-01

Structural genomics initiatives are rapidly generating vast numbers of protein structures. Comparative modelling is also capable of producing accurate structural models for many protein sequences. However, for many of the known structures, functions are not yet determined, and in many modelling tasks, an accurate structural model does not necessarily tell us about function. Thus, there is a pressing need for high-throughput methods for determining function from structure. The spatial arrangement of key amino acids in a folded protein, on the surface or buried in clefts, is often the determinant of its biological function. A central aim of molecular biology is to understand the relationship between such substructures or surfaces and biological function, leading both to function prediction and to function design. We present a new general method for discovering the features of binding pockets that confer specificity for particular ligands. Using a recently developed machine-learning technique which couples the rule-discovery approach of inductive logic programming with the statistical learning power of support vector machines, we are able to discriminate, with high precision (90%) and recall (86%), between pockets that bind FAD and those that bind NAD on a large benchmark set, given only the geometry and composition of the backbone of the binding pocket and without the use of docking. In addition, we learn rules governing this specificity which can feed into protein functional design protocols. An analysis of the rules found suggests that key features of the binding pocket may be tied to conformational freedom in the ligand. The representation is sufficiently general to be applicable to any discriminatory binding problem. All programs and data sets are freely available to non-commercial users at http://www.sbg.bio.ic.ac.uk/svilp_ligand/.

  2. Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization

    KAUST Repository

    Gower, Robert M.

    2018-02-12

We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite matrices in such a way that all iterates (approximate solutions) generated by the algorithm are positive definite matrices themselves. This opens the way for many applications in the field of optimization and machine learning. As an application of our general theory, we develop the first accelerated (deterministic and stochastic) quasi-Newton updates. Our updates lead to provably more aggressive approximations of the inverse Hessian, and lead to speed-ups over classical non-accelerated rules in numerical experiments. Experiments with empirical risk minimization show that our rules can accelerate training of machine learning models.
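For context, this is the classical (non-accelerated) BFGS update of an inverse-Hessian estimate that such work builds on; the accelerated stochastic variants introduced in the paper are not reproduced here:

```python
import numpy as np

# Classical BFGS update of the inverse-Hessian estimate H, given a step
# s = x_{k+1} - x_k and a gradient change g = grad_{k+1} - grad_k:
#   H' = (I - rho*s*g^T) H (I - rho*g*s^T) + rho*s*s^T,  rho = 1/(g^T s)
def bfgs_inverse_update(H, s, g):
    rho = 1.0 / (g @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, g)
    return V @ H @ V.T + rho * np.outer(s, s)

H = np.eye(2)                      # initial inverse-Hessian estimate
s = np.array([0.1, -0.2])          # illustrative step
g = np.array([0.3, -0.1])          # illustrative gradient difference
print(bfgs_inverse_update(H, s, g))
```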

  3. Young children consider individual authority and collective agreement when deciding who can change rules.

    Science.gov (United States)

    Zhao, Xin; Kushnir, Tamar

    2018-01-01

Young children demonstrate awareness of normativity in various domains of social learning. It is unclear, however, whether children recognize that rules can be changed in certain contexts and by certain people or groups. Across three studies, we provided empirical evidence that children consider individual authority and collective agreement when reasoning about who can change rules. In Study 1, children aged 4-7 years watched videos of children playing simple sorting and stacking games in groups or alone. Across conditions, the group game was initiated (a) by one child, (b) by collaborative agreement, or (c) by an adult authority figure. In the group games with a rule initiated by one child, children attributed ability to change rules only to that individual and not his or her friends, and they mentioned ownership and authority in their explanations. When the rule was initiated collaboratively, older children said that no individual could change the rule, whereas younger children said that either individual could do so. When an adult initiated the rule, children stated that only the adult could change it. In contrast, children always endorsed a child's decision to change his or her own solitary rule and never endorsed any child's ability to change moral and conventional rules in daily life. Age differences corresponded to beliefs about friendship and agreement in peer play (Study 2) and disappeared when the decision process behind and normative force of collaboratively initiated rules were clarified (Study 3). These results show important connections between normativity and considerations of authority and collaboration during early childhood. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Evolving rule-based systems in two medical domains using genetic programming.

    Science.gov (United States)

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan; Axer, Hubertus; Bjerregaard, Beth; von Keyserlingk, Diedrich Graf

    2004-11-01

To demonstrate and compare the application of different genetic programming (GP) based intelligent methodologies for the construction of rule-based systems in two medical domains: the diagnosis of aphasia's subtypes and the classification of pap-smear examinations. Past data represented (a) successful diagnoses of aphasia's subtypes obtained from collaborating medical experts through a free interview per patient, and (b) smears (images of cells), previously stained using the Papanicolaou method, correctly classified by cyto-technologists. Initially a hybrid approach is proposed, which combines standard genetic programming and heuristic hierarchical crisp rule-base construction. Then, genetic programming for the production of crisp rule-based systems is attempted. Finally, another hybrid intelligent model is composed of a grammar-driven genetic programming system for the generation of fuzzy rule-based systems. Results show the effectiveness of the proposed systems, which are also compared, in terms of efficiency, accuracy and comprehensibility, to an inductive machine learning approach and to a standard genetic programming symbolic expression approach. The proposed GP-based intelligent methodologies are able to produce accurate and comprehensible results for medical experts, performing competitively with other intelligent approaches. The aim of the authors was the production of accurate but also sensible decision rules that could potentially help medical doctors to extract conclusions, even at the expense of a higher classification score.

  5. Vicarious Neural Processing of Outcomes during Observational Learning

    NARCIS (Netherlands)

    Monfardini, Elisabetta; Gazzola, Valeria; Boussaoud, Driss; Brovelli, Andrea; Keysers, Christian; Wicker, Bruno

    2013-01-01

    Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on

  6. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    Science.gov (United States)

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015

  7. Artificial neural systems using memristive synapses and nano-crystalline silicon thin-film transistors

    Science.gov (United States)

    Cantley, Kurtis D.

Future computer systems will not rely solely on digital processing of inputs from well-defined data sets. They will also be required to perform various computational tasks using large sets of ill-defined information from the complex environment around them. The most efficient processor of this type of information known today is the human brain. Using a large number of primitive elements (~10^10 neurons in the neocortex) with high parallel connectivity (each neuron has ~10^4 synapses), brains have the remarkable ability to recognize and classify patterns, predict outcomes, and learn from and adapt to incredibly diverse sets of problems. A reasonable goal in the push to increase processing power of electronic systems would thus be to implement artificial neural networks in hardware that are compatible with today's digital processors. This work focuses on the feasibility of utilizing non-crystalline silicon devices in neuromorphic electronics. Hydrogenated amorphous silicon (a-Si:H) nanowire transistors with Schottky barrier source/drain junctions, as well as a-Si:H/Ag resistive switches are fabricated and characterized. In the transistors, it is found that the on-current scales linearly with the effective width W_eff of the channel nanowire array down to at least 20 nm. The solid-state electrolyte resistive switches (memristors) are shown to exhibit the proper current-voltage hysteresis. SPICE models of similar devices are subsequently developed to investigate their performance in neural circuits. The resulting SPICE simulations demonstrate spiking properties and synaptic learning rules that are incredibly similar to those in biology. Specifically, the neuron circuits can be designed to mimic the firing characteristics of real neurons, and Hebbian learning rules are investigated. Finally, some applications are presented, including associative learning analogous to the classical conditioning experiments originally performed by Pavlov, and frequency and pattern
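A minimal rate-based sketch of the Hebbian, Pavlovian-style associative learning mentioned above; the parameter values are illustrative and unrelated to the SPICE device models in the thesis:

```python
import numpy as np

# A "bell" input initially drives no response, but pairing it with "food"
# (which does drive the response neuron) strengthens the bell->response weight.
w = np.array([0.0, 1.0])        # weights from [bell, food] to the response neuron
eta = 0.1                        # Hebbian learning rate

def trial(x, w):
    y = float(w @ x)             # postsynaptic activity
    return w + eta * y * x       # Hebbian update: dw = eta * pre * post

for _ in range(10):              # repeated bell + food pairings
    w = trial(np.array([1.0, 1.0]), w)

# After conditioning, the bell alone evokes a response.
print(round(float(w @ np.array([1.0, 0.0])), 2))
```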

  8. Learning to predict chemical reactions.

    Science.gov (United States)

    Kayala, Matthew A; Azencott, Chloé-Agathe; Chen, Jonathan H; Baldi, Pierre

    2011-09-26

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problems can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles, respectively, are not high throughput, are not generalizable or scalable, and lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry data set consisting of 1630 full multistep reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information retrieval problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.00% of nonproductive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. Furthermore, the system

  9. Learning to Predict Chemical Reactions

    Science.gov (United States)

    Kayala, Matthew A.; Azencott, Chloé-Agathe; Chen, Jonathan H.

    2011-01-01

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problems can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles respectively are not high-throughput, are not generalizable or scalable, or lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry dataset consisting of 1630 full multi-step reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information retrieval, problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.00% of non-productive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. Furthermore, the system

  10. Individual Difference Factors in the Learning and Transfer of Patterning Discriminations

    Directory of Open Access Journals (Sweden)

    Elisa Maes

    2017-07-01

Full Text Available In an associative patterning task, some people seem to focus more on learning an overarching rule, whereas others seem to focus on acquiring specific relations between the stimuli and outcomes involved. Building on earlier work, we further investigated which cognitive factors are involved in feature- vs. rule-based learning and generalization. To this end, we measured participants' tendency to generalize according to the rule of opposites after training on negative and positive patterning problems (i.e., A+/B+/AB− and C−/D−/CD+), their tendency to attend to global aspects or local details of stimuli, their systemizing disposition and their score on the Raven intelligence test. Our results suggest that while intelligence might have some influence on patterning learning and generalization, visual processing style and systemizing disposition do not. We discuss our findings in the light of previous observations on patterning.

  11. Proof of Kochen–Specker Theorem: Conversion of Product Rule to Sum Rule

    International Nuclear Information System (INIS)

    Toh, S.P.; Zainuddin, Hishamuddin

    2009-01-01

Valuation functions of observables in quantum mechanics are often expected to obey two constraints called the sum rule and product rule. However, the Kochen–Specker (KS) theorem shows that for a Hilbert space of quantum mechanics of dimension d ≥ 3, these constraints individually contradict the assumption of value definiteness. The two rules are not unrelated, and Peres [Found. Phys. 26 (1996) 807] has conceived a method of converting the product rule into a sum rule for the case of two qubits. Here we apply this method to a proof provided by Mermin based on the product rule for a three-qubit system involving nine operators. We provide the conversion of this proof to one based on the sum rule involving ten operators. (general)

  12. Rough set and rule-based multicriteria decision aiding

    Directory of Open Access Journals (Sweden)

    Roman Slowinski

    2012-08-01

Full Text Available The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view called criteria. Since a rational decision maker acts with respect to his/her value system, in order to recommend the most-preferred decision, one must identify decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider the preference model in the form of a set of "if..., then..." decision rules discovered from the data by inductive learning. To structure the data prior to induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data, which handles ordinal evaluations of objects on considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.

  13. Optical implementation of neural learning algorithms based on cross-gain modulation in a semiconductor optical amplifier

    Science.gov (United States)

    Li, Qiang; Wang, Zhi; Le, Yansi; Sun, Chonghui; Song, Xiaojia; Wu, Chongqing

    2016-10-01

Neuromorphic engineering has a wide range of applications in the fields of machine learning, pattern recognition, adaptive control, etc. Photonics, characterized by its high speed, wide bandwidth, low power consumption and massive parallelism, is an ideal way to realize ultrafast spiking neural networks (SNNs). Synaptic plasticity is believed to be critical for learning, memory and development in neural circuits. Experimental results have shown that synaptic changes are highly dependent on the relative timing of pre- and postsynaptic spikes. Plasticity in which presynaptic spikes preceding postsynaptic spikes result in strengthening, while the opposite timing results in weakening, is called the antisymmetric spike-timing-dependent plasticity (STDP) learning rule; plasticity with the opposite effect under the same conditions is called the antisymmetric anti-STDP learning rule. We proposed and experimentally demonstrated an optical implementation of neural learning algorithms that can achieve both the antisymmetric STDP and anti-STDP learning rules, based on cross-gain modulation (XGM) within a single semiconductor optical amplifier (SOA). The width and height of the potentiation and depression windows can be controlled by adjusting the injection current of the SOA, to mimic the biological antisymmetric STDP and anti-STDP learning rules more realistically. As the injection current increases, the width of the depression and potentiation windows decreases and their height increases, owing to the shorter recovery time and larger gain at stronger injection currents. Based on the demonstrated optical STDP circuit, ultrafast learning in optical SNNs can be realized.
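A sketch of an antisymmetric STDP window whose height grows and whose width shrinks as a single control parameter increases, qualitatively mirroring the injection-current dependence described above; the mapping from current to window shape is an assumption for illustration only:

```python
import numpy as np

# Antisymmetric STDP window parameterized by a control "current" value.
# Both scaling relations below are assumptions, not measured device behaviour.
def stdp_window(dt_ms, current_mA):
    height = 0.02 * current_mA     # assumed: stronger current -> larger peak change
    tau = 40.0 / current_mA        # assumed: stronger current -> narrower window
    if dt_ms >= 0:
        return height * np.exp(-dt_ms / tau)    # pre-before-post: potentiation
    return -height * np.exp(dt_ms / tau)        # post-before-pre: depression

for current in (1.0, 2.0):
    print(current, [round(stdp_window(dt, current), 4) for dt in (-20, -5, 5, 20)])
```

Flipping the sign of the returned value gives the corresponding anti-STDP window.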

  14. Disability, technology and e-learning: challenging conceptions

    Directory of Open Access Journals (Sweden)

    Jane Seale

    2006-12-01

Full Text Available In considering the role that technology and e-learning can play in helping students access higher education and an effective learning experience, a large amount of the current research and practice literature focuses almost exclusively on accessibility legislation, guidelines and standards, and the rules contained within them (Abascal et al., 2004; Chisholm & Brewer, 2005; Gunderson & May, 2005; Paolucci, 2004; Reed et al., 2004; Slatin, 2005). One of the major problems of such an approach is that it has drawn higher education practitioners into thinking that their objective is to comply with rules. I argue that it is not (Seale, 2006). The objective should be to address the needs of students. The danger of only focusing on rules is that it can constrain thinking and therefore practice. We need to expand our thinking beyond that of how to comply with rules, towards how to meet the needs of students with disabilities, within the local contexts that students and practitioners are working. In thinking about how to meet the needs of students with disabilities, practitioners will need to develop their own tools. These tools might be user case studies, evaluation methodologies or conceptualizations:

  15. Using Rule-Based Computer Programming to Unify Communication Rules Research.

    Science.gov (United States)

    Sanford, David L.; Roach, J. W.

    This paper proposes the use of a rule-based computer programming language as a standard for the expression of rules, arguing that the adoption of a standard would enable researchers to communicate about rules in a consistent and significant way. Focusing on the formal equivalence of artificial intelligence (AI) programming to different types of…

  16. A learning rule that explains how rewards teach attention

    NARCIS (Netherlands)

    Rombouts, Jaldert O.; Bohte, Sander M.; Martinez-Trujillo, Julio; Roelfsema, Pieter R.

    2015-01-01

    Many theories propose that top-down attentional signals control processing in sensory cortices by modulating neural activity. But who controls the controller? Here we investigate how a biologically plausible neural reinforcement learning scheme can create higher order representations and top-down

  17. A learning rule that explains how rewards teach attention

    NARCIS (Netherlands)

    J.O. Rombouts (Jaldert); S.M. Bohte (Sander); J. Martinez-Trujillo; P.R. Roelfsema

    2015-01-01

Many theories propose that top-down attentional signals control processing in sensory cortices by modulating neural activity. But who controls the controller? Here we investigate how a biologically plausible neural reinforcement learning scheme can create higher order representations and

  18. Comparing Product Category Rules from Different Programs: Learned Outcomes Towards Global Alignment (Presentation)

    Science.gov (United States)

    Purpose Product category rules (PCRs) provide category-specific guidance for estimating and reporting product life cycle environmental impacts, typically in the form of environmental product declarations and product carbon footprints. Lack of global harmonization between PCRs or ...

  19. Convention on nuclear safety. Rules of procedure and financial rules

    International Nuclear Information System (INIS)

    1998-01-01

    The document presents the Rules of Procedure and Financial Rules that apply mutatis mutandis to any meeting of the Contracting Parties to the Convention on Nuclear Safety (INFCIRC/449) convened in accordance with Chapter 3 of the Convention. It includes four parts: General provisions, Preparatory process for review meetings, Review meetings, and Amendment and interpretation of rules

  20. Delayed rule following.

    Science.gov (United States)

    Schmitt, D R

    2001-01-01

    Although the elements of a fully stated rule (discriminative stimulus [S(D)], some behavior, and a consequence) can occur nearly contemporaneously with the statement of the rule, there is often a delay between the rule statement and the S(D). The effects of this delay on rule following have not been studied in behavior analysis, but they have been investigated in rule-like settings in the areas of prospective memory (remembering to do something in the future) and goal pursuit. Discriminative events for some behavior can be event based (a specific setting stimulus) or time based. The latter are more demanding with respect to intention following and show age-related deficits. Studies suggest that the specificity with which the components of a rule (termed intention) are stated has a substantial effect on intention following, with more detailed specifications increasing following. Reminders of an intention, too, are most effective when they refer specifically to both the behavior and its occasion. Covert review and written notes are two effective strategies for remembering everyday intentions, but people who use notes appear not to be able to switch quickly to covert review. By focusing on aspects of the setting and rule structure, research on prospective memory and goal pursuit expands the agenda for a more complete explanation of rule effects.

  1. A neural network model of lateralization during letter identification.

    Science.gov (United States)

    Shevtsova, N; Reggia, J A

    1999-03-01

    The causes of cerebral lateralization of cognitive and other functions are currently not well understood. To investigate one aspect of function lateralization, a bihemispheric neural network model for a simple visual identification task was developed that has two parallel interacting paths of information processing. The model is based on commonly accepted concepts concerning neural connectivity, activity dynamics, and synaptic plasticity. A combination of both unsupervised (Hebbian) and supervised (Widrow-Hoff) learning rules is used to train the model to identify a small set of letters presented as input stimuli in the left visual hemifield, in the central position, and in the right visual hemifield. Each visual hemifield projects onto the contralateral hemisphere, and the two hemispheres interact via a simulated corpus callosum. The contribution of each individual hemisphere to the process of input stimuli identification was studied for a variety of underlying asymmetries. The results indicate that multiple asymmetries may cause lateralization. Lateralization occurred toward the side having larger size, higher excitability, or higher learning rate parameters. It appeared more intensively with strong inhibitory callosal connections, supporting the hypothesis that the corpus callosum plays a functionally inhibitory role. The model demonstrates clearly the dependence of lateralization on different hemisphere parameters and suggests that computational models can be useful in better understanding the mechanisms underlying emergence of lateralization.
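A minimal sketch combining the two learning rules named in this record, an unsupervised Hebbian term and a supervised Widrow-Hoff (delta-rule) term, on a single linear unit; the bihemispheric architecture and callosal connections of the model are not reproduced:

```python
import numpy as np

# Mixed unsupervised (Hebbian) and supervised (Widrow-Hoff) plasticity
# on one linear unit. Inputs and targets below are synthetic stand-ins.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))                     # input patterns (e.g. letter features)
t = X @ np.array([1.0, -0.5, 0.0, 0.2])          # target responses for the supervised term

w = np.zeros(4)
eta_hebb, eta_wh = 0.001, 0.05
for x, target in zip(X, t):
    y = w @ x
    w += eta_hebb * y * x               # Hebbian term: correlation of pre and post activity
    w += eta_wh * (target - y) * x      # Widrow-Hoff term: error-driven correction

print(np.round(w, 2))
```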

  2. Automation of information decision support to improve e-learning resources quality

    Directory of Open Access Journals (Sweden)

    A.L. Danchenko

    2013-06-01

Full Text Available Purpose. With the active development of e-learning, the high quality of e-learning resources is very important. Ensuring high quality of e-learning resources under conditions of mass higher education and rapid obsolescence of information requires automating information decision support, i.e. developing a decision support system for improving the quality of e-learning resources. Methodology. The problem is solved by methods of artificial intelligence. A knowledge base for the decision support system, based on a frame model of knowledge representation, and production rules for inference are developed. Findings. Based on the analysis of life cycle processes and of the requirements on e-learning resource quality, the information model of the knowledge base structure of the decision support system, the inference rules for automatically generating recommendations, and the software implementation are developed. Practical value. It is established that the basic quality requirements are performance, validity, reliability and manufacturability. It is shown that using the software implementation of the decision support system for the courses studied yields a growth in quality according to the complex quality criteria. The information structure of the knowledge base of the decision support system and the inference rules can be used by methodologists and content developers of learning systems.

  3. Hamburg rules V Hague Visby rules an English perspective

    OpenAIRE

    Tozaj Dorian; Xhelilaj Ermal

    2010-01-01

It has often been argued that the defences provided to carriers under Art IV (2) of the Hague Visby Rules almost nullify the protection guaranteed to shippers in other provisions of this convention. Therefore, an all-embracing, universal, shipper-friendly convention, namely the Hamburg Rules, needs to be incorporated in all countries in order to address this issue and fully satisfy the intentions of the parties for the establishment of international rules in international trade

  4. Online unsupervised formation of cell assemblies for the encoding of multiple cognitive maps.

    Science.gov (United States)

    Salihoglu, Utku; Bersini, Hugues; Yamaguchi, Yoko; Molter, Colin

    2009-01-01

Since their introduction sixty years ago, cell assemblies have proved to be a powerful paradigm for brain information processing. After their introduction in artificial intelligence, cell assemblies became commonly used in computational neuroscience as a neural substrate for content addressable memories. However, the mechanisms underlying their formation are poorly understood and, so far, there is no biologically plausible algorithm which can explain how external stimuli can be stored online in cell assemblies. We addressed this question in a previous paper [Salihoglu, U., Bersini, H., Yamaguchi, Y., Molter, C., (2009). A model for the cognitive map formation: Application of the retroaxonal theory. In Proc. IEEE international joint conference on neural networks], where, based on biologically plausible mechanisms, a novel unsupervised algorithm for online cell assemblies' creation was developed. The procedure involved, simultaneously, a fast Hebbian/anti-Hebbian learning of the network's recurrent connections for the creation of new cell assemblies, and a slower feedback signal which stabilized the cell assemblies by learning the feedforward input connections. Here, we first quantify the role played by the retroaxonal feedback mechanism. Then, we show how multiple cognitive maps, composed of a set of orthogonal input stimuli, can be encoded in the network. As a result, when facing a previously learned input, the system is able to retrieve the cognitive map it belongs to. As a consequence, ambiguous inputs which could belong to multiple cognitive maps can be disambiguated by knowledge of the context, i.e. the cognitive map.

  5. Fuzzy gain scheduling of velocity PI controller with intelligent learning algorithm for reactor control

    International Nuclear Information System (INIS)

    Kim, Dong Yun

    1997-02-01

In this research, we propose a fuzzy gain scheduler (FGS) with an intelligent learning algorithm for reactor control. In the proposed algorithm, the gradient descent method is used to generate the rule bases of a fuzzy algorithm by learning. These rule bases are obtained by minimizing an objective function, called a performance cost function. The objective of the FGS with an intelligent learning algorithm is to generate adequate gains that minimize the error of the system. The proposed algorithm can reduce the time and effort required for obtaining the fuzzy rules through the intelligent learning function. The evolutionary programming algorithm is modified and adopted as the method for finding the optimal gains, which are used as the initial gains of the FGS with the learning function. It is applied to reactor control of a nuclear power plant (NPP), and the results are compared with those of a conventional PI controller with fixed gains. As a result, it is shown that the proposed algorithm is superior to the conventional PI controller

  6. Flowshop Scheduling Problems with a Position-Dependent Exponential Learning Effect

    Directory of Open Access Journals (Sweden)

    Mingbao Cheng

    2013-01-01

Full Text Available We consider a permutation flowshop scheduling problem with a position-dependent exponential learning effect. The objective is to minimize the performance criteria of makespan and the total flow time. For the two-machine flow shop scheduling case, we show that Johnson’s rule is not an optimal algorithm for minimizing the makespan given the exponential learning effect. Furthermore, by using the shortest total processing times first (STPT) rule, we construct the worst-case performance ratios for both criteria. Finally, a polynomial-time algorithm is proposed for special cases of the studied problem.
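A small sketch comparing a Johnson's-rule sequence against a brute-force optimum when processing times shrink with position; the learning-effect form p * gamma^(r-1) is one common modelling assumption and may differ from the exact model used in the paper, and the job data are illustrative:

```python
from itertools import permutations

# Two-machine flowshop makespan with a position-dependent learning effect:
# a job in position r takes p * GAMMA**(r-1) time on each machine (assumed form).
GAMMA = 0.7

def makespan(seq, p1, p2):
    c1 = c2 = 0.0
    for r, j in enumerate(seq, start=1):
        c1 += p1[j] * GAMMA ** (r - 1)               # completion on machine 1
        c2 = max(c1, c2) + p2[j] * GAMMA ** (r - 1)  # completion on machine 2
    return c2

def johnson(p1, p2):
    """Classical Johnson's-rule sequence (optimal when there is no learning effect)."""
    jobs = list(range(len(p1)))
    front = sorted((j for j in jobs if p1[j] <= p2[j]), key=lambda j: p1[j])
    back = sorted((j for j in jobs if p1[j] > p2[j]), key=lambda j: p2[j], reverse=True)
    return front + back

p1 = [4, 2, 7, 5]   # machine-1 processing times (illustrative)
p2 = [3, 6, 1, 4]   # machine-2 processing times (illustrative)
js = johnson(p1, p2)
best = min(permutations(range(len(p1))), key=lambda s: makespan(s, p1, p2))
print("Johnson:", js, round(makespan(js, p1, p2), 3))
print("Optimal:", best, round(makespan(best, p1, p2), 3))
```

Comparing the two printed makespans on various instances shows why Johnson's rule can lose its optimality once the learning effect is introduced.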

  7. Resolving task rule incongruence during task switching by competitor rule suppression.

    Science.gov (United States)

    Meiran, Nachshon; Hsieh, Shulan; Dimov, Eduard

    2010-07-01

    Task switching requires maintaining readiness to execute any task of a given set of tasks. However, when tasks switch, the readiness to execute the now-irrelevant task generates interference, as seen in the task rule incongruence effect. Overcoming such interference requires fine-tuned inhibition that impairs task readiness only minimally. In an experiment involving 2 object classification tasks and 2 location classification tasks, the authors show that irrelevant task rules that generate response conflicts are inhibited. This competitor rule suppression (CRS) is seen in response slowing in subsequent trials, when the competing rules become relevant. CRS is shown to operate on specific rules without affecting similar rules. CRS and backward inhibition, which is another inhibitory phenomenon, produced additive effects on reaction time, suggesting their mutual independence. Implications for current formal theories of task switching as well as for conflict monitoring theories are discussed. (c) 2010 APA, all rights reserved

  8. Bilingualism trains specific brain circuits involved in flexible rule selection and application.

    Science.gov (United States)

    Stocco, Andrea; Prat, Chantel S

    2014-10-01

    Bilingual individuals have been shown to outperform monolinguals on a variety of tasks that measure non-linguistic executive functioning, suggesting that some facets of the bilingual experience give rise to generalized improvements in cognitive performance. The current study investigated the hypothesis that such advantage in executive functioning arises from the need to flexibly select and apply rules when speaking multiple languages. Such flexible behavior may strengthen the functioning of the fronto-striatal loops that direct signals to the prefrontal cortex. To test this hypothesis, we compared behavioral and brain data from proficient bilinguals and monolinguals who performed a Rapid Instructed Task Learning paradigm, which requires behaving according to ever-changing rules. Consistent with our hypothesis, bilinguals were faster than monolinguals when executing novel rules, and this improvement was associated with greater modulation of activity in the basal ganglia. The implications of these findings for language and executive function research are discussed herein. Published by Elsevier Inc.

  9. Visual artificial grammar learning: comparative research on humans, kea (Nestor notabilis) and pigeons (Columba livia)

    Science.gov (United States)

    Stobbe, Nina; Westphal-Fitch, Gesche; Aust, Ulrike; Fitch, W. Tecumseh

    2012-01-01

    Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To be able to directly compare performance of humans with other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training. PMID:22688635

  10. Electronuclear sum rules

    International Nuclear Information System (INIS)

    Arenhoevel, H.; Drechsel, D.; Weber, H.J.

    1978-01-01

    Generalized sum rules are derived by integrating the electromagnetic structure functions along lines of constant ratio of momentum and energy transfer. For non-relativistic systems these sum rules are related to the conventional photonuclear sum rules by a scaling transformation. The generalized sum rules are connected with the absorptive part of the forward scattering amplitude of virtual photons. The analytic structure of the scattering amplitudes and the possible existence of dispersion relations have been investigated in schematic relativistic and non-relativistic models. While for the non-relativistic case analyticity does not hold, the relativistic scattering amplitude is analytical for time-like (but not for space-like) photons and relations similar to the Gell-Mann-Goldberger-Thirring sum rule exist. (Auth.)

  11. The Implementation of "The n-term" Formula to Improve Student Ability in Determining the Rules of a Numeric Sequence

    Science.gov (United States)

    In'am, Akhsanul; Hajar, Siti

    2013-01-01

A good-quality teacher may determine good-quality learning, and thus good-quality students will be the result. In order to have good-quality learning, a lot of strategies and methods can be adopted. The objective of this research is to improve students' ability in determining the rules of a numeric sequence and to analyse the effectiveness of the…

  12. Developmental changes in children's normative reasoning across learning contexts and collaborative roles.

    Science.gov (United States)

    Riggs, Anne E; Young, Andrew G

    2016-08-01

    What influences children's normative judgments of conventional rules at different points in development? The current study explored the effects of two contextual factors on children's normative reasoning: the way in which the rules were learned and whether the rules apply to the self or others. Peer dyads practiced a novel collaborative board game comprising two complementary roles. Dyads were either taught both the prescriptive (i.e., what to do) and proscriptive (i.e., what not to do) forms of the rules, taught only the prescriptive form of the rules, or created the rules themselves. Children then judged whether third parties were violating or conforming to the rules governing their own roles and their partner's roles. Early school-aged children's (6- to 7-year-olds; N = 60) normative judgments were strongest when they had been taught the rules (with or without the proscriptive form), but were more flexible for rules they created themselves. Preschool-aged children's (4- to 5-year-olds; N = 60) normative judgments, however, were strongest when they were taught both the prescriptive and proscriptive forms of the rules. Additionally, preschoolers exhibited stronger normative judgments when the rules governed their own roles rather than their partner's roles, whereas school-aged children treated all rules as equally normative. These results demonstrate that children's normative reasoning is contingent on contextual factors of the learning environment and, moreover, highlight 2 specific areas in which children's inferences about the normativity of conventions strengthen over development. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Choosing the rules: distinct and overlapping frontoparietal representations of task rules for perceptual decisions.

    Science.gov (United States)

    Zhang, Jiaxiang; Kriegeskorte, Nikolaus; Carlin, Johan D; Rowe, James B

    2013-07-17

    Behavior is governed by rules that associate stimuli with responses and outcomes. Human and monkey studies have shown that rule-specific information is widely represented in the frontoparietal cortex. However, it is not known how establishing a rule under different contexts affects its neural representation. Here, we use event-related functional MRI (fMRI) and multivoxel pattern classification methods to investigate the human brain's mechanisms of establishing and maintaining rules for multiple perceptual decision tasks. Rules were either chosen by participants or specifically instructed to them, and the fMRI activation patterns representing rule-specific information were compared between these contexts. We show that frontoparietal regions differ in the properties of their rule representations during active maintenance before execution. First, rule-specific information maintained in the dorsolateral and medial frontal cortex depends on the context in which it was established (chosen vs specified). Second, rule representations maintained in the ventrolateral frontal and parietal cortex are independent of the context in which they were established. Furthermore, we found that the rule-specific coding maintained in anticipation of stimuli may change with execution of the rule: representations in context-independent regions remain invariant from maintenance to execution stages, whereas rule representations in context-dependent regions do not generalize to execution stage. The identification of distinct frontoparietal systems with context-independent and context-dependent task rule representations, and the distinction between anticipatory and executive rule representations, provide new insights into the functional architecture of goal-directed behavior.

  14. Smooth criminal: convicted rule-breakers show reduced cognitive conflict during deliberate rule violations.

    Science.gov (United States)

    Jusyte, Aiste; Pfister, Roland; Mayer, Sarah V; Schwarz, Katharina A; Wirth, Robert; Kunde, Wilfried; Schönenberg, Michael

    2017-09-01

    Classic findings on conformity and obedience document a strong and automatic drive of human agents to follow any type of rule or social norm. At the same time, most individuals tend to violate rules on occasion, and such deliberate rule violations have recently been shown to yield cognitive conflict for the rule-breaker. These findings indicate a persistent difficulty in suppressing the rule representation, even though rule violations were studied in a controlled experimental setting with neither gains nor possible sanctions for violators. In the current study, we validate these findings by showing that convicted criminals, i.e., individuals with a history of habitual and severe forms of rule violations, can free themselves from such cognitive conflict in a similarly controlled laboratory task. These findings support an emerging view that aims at understanding rule violations from the perspective of the violating agent rather than from the perspective of an outside observer.

  15. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks

    Science.gov (United States)

    Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.

    2015-01-01

    The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate its efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies.
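
    The record does not spell out the derived rule, but the generic idea it builds on, reward-modulated Hebbian plasticity in a recurrent network, can be sketched as follows; the network size, nonlinearity, and exact three-factor update are assumptions of the sketch, not the paper's rule:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                   # number of units (assumed)
    W = 0.1 * rng.standard_normal((n, n))    # recurrent weights (horizontal/feedback links)

    def step(x, W):
        """One recurrent update with a simple saturating nonlinearity."""
        return np.tanh(W @ x)

    def reward_modulated_hebb(W, x_pre, x_post, reward, baseline, lr=1e-3):
        """Generic three-factor rule: dW ~ (reward - baseline) * post * pre.
        Shown only to illustrate how a scalar reward can gate activity-dependent
        plasticity; the paper derives its own rule."""
        return W + lr * (reward - baseline) * np.outer(x_post, x_pre)

    # toy episode: propagate activity, receive a scalar reward, update the weights
    x = rng.standard_normal(n)
    x_next = step(x, W)
    reward = 1.0       # e.g. the correct elements were grouped / endpoint found
    baseline = 0.5     # running-average reward (assumed)
    W = reward_modulated_hebb(W, x, x_next, reward, baseline)
    ```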

  16. Energy Management Strategy for a Hybrid Electric Vehicle Based on Deep Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Yue Hu

    2018-01-01

    An energy management strategy (EMS) is important for hybrid electric vehicles (HEVs) since it plays a decisive role in the performance of the vehicle. However, the variation of future driving conditions deeply influences the effectiveness of the EMS. Most existing EMS methods simply follow predefined rules that are not adaptive to different driving conditions online. Therefore, it is useful that the EMS can learn from the environment or driving cycle. In this paper, a deep reinforcement learning (DRL)-based EMS is designed such that it can learn to select actions directly from the states without any prediction or predefined rules. Furthermore, a DRL-based online learning architecture is presented. It is significant for applying the DRL algorithm in HEV energy management under different driving conditions. Simulation experiments have been conducted using MATLAB and Advanced Vehicle Simulator (ADVISOR) co-simulation. Experimental results validate the effectiveness of the DRL-based EMS compared with the rule-based EMS in terms of fuel economy. The online learning architecture is also shown to be effective. The proposed method ensures optimality, as well as real-time applicability, in HEVs.
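
    As a rough illustration of learning actions directly from states without predefined rules, here is a minimal tabular Q-learning stand-in for the energy-management decision; the paper uses a deep network with ADVISOR co-simulation, and the state discretisation, action set, and reward below are invented for the sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_soc, n_demand, n_actions = 10, 10, 5   # discretised SOC, power demand, engine-power levels
    Q = np.zeros((n_soc, n_demand, n_actions))
    alpha, gamma, eps = 0.1, 0.95, 0.1

    def reward(fuel_rate, soc):
        """Toy reward: penalise fuel use and drifting away from a target SOC."""
        return -fuel_rate - 0.5 * abs(soc - 0.6)

    def choose_action(state):
        """Epsilon-greedy action selection; `state` is a (soc_bin, demand_bin) tuple."""
        if rng.random() < eps:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[state]))

    def q_update(state, action, r, next_state):
        """Standard one-step Q-learning backup."""
        best_next = np.max(Q[next_state])
        Q[state + (action,)] += alpha * (r + gamma * best_next - Q[state + (action,)])

    s = (5, 3)
    a = choose_action(s)
    q_update(s, a, reward(fuel_rate=0.8, soc=0.55), next_state=(5, 4))
    ```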

  17. "Smart inhibition": electrophysiological evidence for the suppression of conflict-generating task rules during task switching.

    Science.gov (United States)

    Meiran, Nachshon; Hsieh, Shulan; Chang, Chi-Chih

    2011-09-01

    A major challenge for task switching is maintaining a balance between high task readiness and effectively ignoring irrelevant task rules. This calls for finely tuned inhibition that targets only the source of interference without adversely influencing other task-related representations. The authors show that irrelevant task rules generating response conflict are inhibited, causing their inefficient execution on the next trial (indicating the presence of competitor rule suppression [CRS]; Meiran, Hsieh, & Dimov, Journal of Experimental Psychology: Learning, Memory and Cognition, 36, 992-1002, 2010). To determine whether CRS influences task rules, rather than target stimuli or responses, the authors focused on the processing of the task cue before the target stimulus was presented and before the response could be chosen. As was predicted, CRS was found in the event-related potentials in two time windows during task cue processing. It was also found in three time windows after target presentation. Source localization analyses suggest the involvement of the right dorsal prefrontal cortex in all five time windows.

  18. Detecting anomalous nuclear materials accounting transactions: Applying machine learning to plutonium processing facilities

    International Nuclear Information System (INIS)

    Vaccaro, H.S.

    1989-01-01

    Nuclear materials accountancy is the only safeguards measure that provides direct evidence of the status of nuclear materials. Of the six categories that give rise to inventory differences, the technical capability is now in place to implement the innovations necessary to reduce the human error categories. There are three main approaches to detecting anomalies in materials control and accountability (MC&A) data: (1) Statistical: numeric methods such as Page's test, CUSUM, CUMUF, SITMUF, etc., can detect anomalies in metric (numeric) data. (2) Expert systems: human experts' rules can be encoded in software systems such as ART, KEE, or Prolog. (3) Machine learning: training data, such as historical MC&A records, can be fed to a classifier program, a neural net, or another machine learning algorithm. The Wisdom & Sense (W&S) software is a combination of approaches 2 and 3. The W&S program includes full features for adding administrative rules and expert judgment rules to the rule base. If desired, the software can enforce consistency among all rules in the rule base.
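
    Of the three approaches listed, the statistical one is the easiest to make concrete. A standard one-sided CUSUM over a series of inventory differences might look like the following; the reference value, slack, and threshold are illustrative, and this is generic CUSUM rather than the W&S software itself:

    ```python
    def cusum_alarms(values, target=0.0, slack=0.5, threshold=3.0):
        """One-sided CUSUM: accumulate positive deviations from `target` beyond
        `slack`, and record an alarm whenever the sum exceeds `threshold`."""
        s, alarms = 0.0, []
        for i, x in enumerate(values):
            s = max(0.0, s + (x - target - slack))
            if s > threshold:
                alarms.append(i)
                s = 0.0          # restart accumulation after an alarm
        return alarms

    # example: a small persistent bias appears halfway through the series
    diffs = [0.1, -0.2, 0.3, 0.0, 1.2, 1.0, 1.4, 0.9, 1.1, 1.3]
    print(cusum_alarms(diffs))   # -> [8]
    ```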

  19. The Rules of the Game—The Rules of the Player

    DEFF Research Database (Denmark)

    Thorhauge, Anne Mette

    2013-01-01

    This article presents a critical view of the concept of rules in game studies on the basis of a case study of role-playing across media. Role-playing in its traditional form is a complex activity including a game system and a number of communicative conventions where one player takes the role of the game manager in order to implement the rules and provide a world for the other players. In online role-playing games, a programmed system simulates the rule system as well as part of the game manager's tasks, while the rest of the activity is up to the players to define. Some aspects may translate more or less unproblematically across media, others are transformed by the introduction of the programmed system. This reveals some important perspectives on the sort of rules that can be simulated in a programmed system and what this means to the concept of rules in game studies.

  20. How Knowledge Influences Learning.

    Science.gov (United States)

    Siegler, Robert S.

    1983-01-01

    Discusses how children's knowledge can be measured/described, knowledge patterns across diverse concepts, interaction of knowledge/learning, and ways children construct more advanced problem-solving rules to replace less adequate ones. Evidence, drawn from studies on children's acquisition of knowledge about balance beams, suggests that knowledge…

  1. Proposal to modify Rule 6, Rule 10a, and Rule 12c of the International Code of Nomenclature of Prokaryotes.

    Science.gov (United States)

    Oren, Aharon; Garrity, George M; Schink, Bernhard

    2014-04-01

    According to the current versions of Rule 10a and Rule 12c of the International Code of Nomenclature of Prokaryotes, names of a genus or subgenus and specific epithets may be taken from any source and may even be composed in an arbitrary manner. Based on these rules, names may be composed of any word or any combination of elements derived from any language with a Latin ending. We propose modifying these rules by adding the text, currently part of Recommendation 6, according to which words from languages other than Latin or Greek should be avoided as long as equivalents exist in Latin or Greek or can be constructed by combining word elements from these two languages. We also propose modification of Rule 6 by adopting some of the current paragraphs of Recommendation 6 to become part of the Rule.

  2. Online Learning Behaviors for Radiology Interns Based on Association Rules and Clustering Technique

    Science.gov (United States)

    Chen, Hsing-Shun; Liou, Chuen-He

    2014-01-01

    In a hospital, clinical teachers must also care for patients, so there is less time for the teaching of clinical courses, or for discussing clinical cases with interns. However, electronic learning (e-learning) can complement clinical skills education for interns in a blended-learning process. Students discuss and interact with classmates in an…

  3. Involvement of Working Memory in College Students' Sequential Pattern Learning and Performance

    Science.gov (United States)

    Kundey, Shannon M. A.; De Los Reyes, Andres; Rowan, James D.; Lee, Bern; Delise, Justin; Molina, Sabrina; Cogdill, Lindsay

    2013-01-01

    When learning highly organized sequential patterns of information, humans and nonhuman animals learn rules regarding the hierarchical structures of these sequences. In three experiments, we explored the role of working memory in college students' sequential pattern learning and performance in a computerized task involving a sequential…

  4. Application of cross-correlated delay shift rule in spiking neural networks for interictal spike detection.

    Science.gov (United States)

    Lilin Guo; Zhenzhong Wang; Cabrerizo, Mercedes; Adjouadi, Malek

    2016-08-01

    This study proposes a Cross-Correlated Delay Shift (CCDS) supervised learning rule to train neurons with associated spatiotemporal patterns to classify spike patterns. The objective of this study was to evaluate the feasibility of using the CCDS rule to automate the detection of interictal spikes in electroencephalogram (EEG) data from patients with epilepsy. Encoding is the initial yet essential step for spiking neurons to process EEG patterns. A new encoding method is utilized to convert the EEG signal into spike patterns. The simulation results show that the proposed algorithm identified 69 spikes out of 82 spikes, an 84% detection rate, which is quite high considering the subtleties of interictal spikes and the tediousness of monitoring long EEG records. This CCDS rule is also benchmarked against ReSuMe on the same task.
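
    The record does not describe its encoding method in detail; one common, simple way to turn a continuous EEG channel into spikes, shown purely for illustration and not the method used in the study, is temporal-contrast (delta) encoding, which emits a spike whenever the signal has risen or fallen by more than a threshold:

    ```python
    import numpy as np

    def delta_encode(signal, threshold):
        """Return (up_spike_times, down_spike_times) for a 1-D signal.
        A spike is emitted whenever the signal has moved more than `threshold`
        since the last emitted spike."""
        up, down = [], []
        ref = signal[0]
        for t, x in enumerate(signal[1:], start=1):
            if x - ref > threshold:
                up.append(t)
                ref = x
            elif ref - x > threshold:
                down.append(t)
                ref = x
        return up, down

    rng = np.random.default_rng(0)
    eeg = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.05 * rng.standard_normal(200)
    print(delta_encode(eeg, threshold=0.3))
    ```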

  5. Using data-driven rules to predict mortality in severe community acquired pneumonia.

    Directory of Open Access Journals (Sweden)

    Chuang Wu

    Prediction of patient-centered outcomes in hospitals is useful for performance benchmarking, resource allocation, and guidance regarding active treatment and withdrawal of care. Yet, the use of such predictions by clinicians is limited by the complexity of available tools and the amount of data required. We propose to use Disjunctive Normal Forms as a novel approach to predict hospital and 90-day mortality from instance-based patient data, comprising demographic, genetic, and physiologic information in a large cohort of patients admitted with severe community acquired pneumonia. We develop two algorithms to efficiently learn Disjunctive Normal Forms, which yield easy-to-interpret rules that explicitly map data to the outcome of interest. Disjunctive Normal Forms achieve higher prediction performance quality compared to a set of state-of-the-art machine learning models, and unveil insights unavailable with standard methods. Disjunctive Normal Forms constitute an intuitive set of prediction rules that could be easily implemented to predict outcomes and guide criteria-based clinical decision making and clinical trial execution, and are thus of greater practical usefulness than currently available prediction tools. The Java implementation of the tool JavaDNF will be publicly available.
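
    The appeal of Disjunctive Normal Forms is that the learned model is directly readable as rules. A sketch of how such a rule can be represented and evaluated follows; the conditions are made up for illustration and are not from the study:

    ```python
    # A DNF is an OR of conjunctions; each conjunction is an AND of threshold tests.
    # Hypothetical rule: predict death if
    #   (age > 75 AND creatinine > 2.0) OR (lactate > 4.0 AND sbp < 90)
    dnf = [
        [("age", ">", 75), ("creatinine", ">", 2.0)],
        [("lactate", ">", 4.0), ("sbp", "<", 90)],
    ]

    def test(value, op, threshold):
        return value > threshold if op == ">" else value < threshold

    def predict(patient, dnf):
        """True if any conjunction is fully satisfied by the patient record."""
        return any(all(test(patient[f], op, th) for f, op, th in conj) for conj in dnf)

    print(predict({"age": 80, "creatinine": 2.4, "lactate": 1.1, "sbp": 120}, dnf))  # True
    ```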

  6. Measuring strategic control in artificial grammar learning.

    Science.gov (United States)

    Norman, Elisabeth; Price, Mark C; Jones, Emma

    2011-12-01

    In response to concerns with existing procedures for measuring strategic control over implicit knowledge in artificial grammar learning (AGL), we introduce a more stringent measurement procedure. After two separate training blocks which each consisted of letter strings derived from a different grammar, participants either judged the grammaticality of novel letter strings with respect to only one of these two grammars (pure-block condition), or had the target grammar varying randomly from trial to trial (novel mixed-block condition) which required a higher degree of conscious flexible control. Random variation in the colour and font of letters was introduced to disguise the nature of the rule and reduce explicit learning. Strategic control was observed both in the pure-block and mixed-block conditions, and even among participants who did not realise the rule was based on letter identity. This indicated detailed strategic control in the absence of explicit learning. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Innovative design with learning reflexiveness for developing the Hamiltonian circuit learning games

    Directory of Open Access Journals (Sweden)

    Meng-Chien Yang

    2018-02-01

    In this study, we use a newly proposed framework to develop Hamiltonian circuit learning games for college students. The framework is intended to enhance learners' activities with learning reflexiveness, and the design of the games is based on it to achieve the targeted learning outcomes. In recent years, game-based learning has become a very popular research topic. The Hamiltonian circuit is an important concept for learning many computer science and electrical engineering topics, such as IC design routing algorithms. The developed games use guiding rules to enable students to learn the Hamiltonian circuit in complicated graph problems. After the game, the learners are given a review test that uses an animated film to explain the knowledge. This design concept differs from previous studies. Through this new design, better learning results are obtained under the effect of reflection: the students gain a deeper impression of the subject and, through self-learning and active thinking in the game, a deeper experience.
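
    For readers unfamiliar with the target concept, a minimal backtracking check for a Hamiltonian circuit in a small graph; this is not part of the games themselves, only the underlying notion they teach:

    ```python
    def hamiltonian_circuit(graph, start):
        """Return a Hamiltonian circuit starting and ending at `start`, or None.
        `graph` maps each vertex to the set of its neighbours."""
        n = len(graph)
        path = [start]

        def extend():
            if len(path) == n:
                return start in graph[path[-1]]      # can the cycle be closed?
            for nxt in graph[path[-1]]:
                if nxt not in path:
                    path.append(nxt)
                    if extend():
                        return True
                    path.pop()
            return False

        return path + [start] if extend() else None

    square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(hamiltonian_circuit(square, 0))   # e.g. [0, 1, 2, 3, 0]
    ```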

  8. Infants learn better from left to right: a directional bias in infants' sequence learning.

    Science.gov (United States)

    Bulf, Hermann; de Hevia, Maria Dolores; Gariboldi, Valeria; Macchi Cassia, Viola

    2017-05-26

    A wealth of studies show that human adults map ordered information onto a directional spatial continuum. We asked whether mapping ordinal information into a directional space constitutes an early predisposition, already functional prior to the acquisition of symbolic knowledge and language. While it is known that preverbal infants represent numerical order along a left-to-right spatial continuum, no studies have yet investigated whether infants, like adults, organize any kind of ordinal information onto a directional space. We investigated whether 7-month-olds' ability to learn high-order rule-like patterns from visual sequences of geometric shapes was affected by the spatial orientation of the sequences (left-to-right vs. right-to-left). Results showed that infants readily learned rule-like patterns when visual sequences were presented from left to right, but not when presented from right to left. This result provides evidence that spatial orientation critically determines preverbal infants' ability to perceive and learn ordered information in visual sequences, supporting the idea that a left-to-right spatially organized mental representation of ordered dimensions might be rooted in biologically determined constraints on human brain development.

  9. Maze learning by a hybrid brain-computer system.

    Science.gov (United States)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-13

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show how rule operations conducted by computing components enable a novel hybrid brain-computer system, i.e., ratbots, to exhibit superior learning abilities in a maze learning task, even when the rats' vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  11. E-learning environment as intelligent tutoring system

    Science.gov (United States)

    Nagyová, Ingrid

    2017-07-01

    The development of computers and artificial intelligence theory allows their application in the field of education. Intelligent tutoring systems reflect student learning styles and adapt the curriculum according to their individual needs. Building intelligent tutoring systems requires not only the creation of suitable software, but especially the search for and application of rules enabling ICT to individually adapt the curriculum. The main idea of this paper is to specify rules for dividing students into systematically working students and more practically or pragmatically inclined students. The paper shows that monitoring students' work in an e-learning environment and analysing their approaches to educational materials and correspondence assignments yield different results for the defined groups of students.

  12. RuleMaDrone: A Web-Interface to Visualise Space Usage Rules for Drones

    OpenAIRE

    Trippaers, Aäron

    2015-01-01

    RuleMaDrone, an application developed within this thesis, is presented as a solution for communicating rules and regulations to drone operators. To provide this solution, a framework for drone safety was designed, consisting of the rules and regulations, the drone properties, and the environmental factors. RuleMaDrone is developed within this framework and thus provides drone operators with an application they can use to find a safe and legal fly zone. RuleMaDrone u...

  13. Boundedly rational learning and heterogeneous trading strategies with hybrid neuro-fuzzy models

    NARCIS (Netherlands)

    Bekiros, S.D.

    2009-01-01

    The present study deals with heterogeneous learning rules in speculative markets where heuristic strategies reflect the rules-of-thumb of boundedly rational investors. The major challenge for "chartists" is the development of new models that would enhance forecasting ability particularly for time

  14. Strategy as simple rules.

    Science.gov (United States)

    Eisenhardt, K M; Sull, D N

    2001-01-01

    The success of Yahoo!, eBay, Enron, and other companies that have become adept at morphing to meet the demands of changing markets can't be explained using traditional thinking about competitive strategy. These companies have succeeded by pursuing constantly evolving strategies in market spaces that were considered unattractive according to traditional measures. In this article--the third in an HBR series by Kathleen Eisenhardt and Donald Sull on strategy in the new economy--the authors ask, what are the sources of competitive advantage in high-velocity markets? The secret, they say, is strategy as simple rules. The companies know that the greatest opportunities for competitive advantage lie in market confusion, but they recognize the need for a few crucial strategic processes and a few simple rules. In traditional strategy, advantage comes from exploiting resources or stable market positions. In strategy as simple rules, advantage comes from successfully seizing fleeting opportunities. Key strategic processes, such as product innovation, partnering, or spinout creation, place the company where the flow of opportunities is greatest. Simple rules then provide the guidelines within which managers can pursue such opportunities. Simple rules, which grow out of experience, fall into five broad categories: how-to rules, boundary conditions, priority rules, timing rules, and exit rules. Companies with simple-rules strategies must follow the rules religiously and avoid the temptation to change them too frequently. A consistent strategy helps managers sort through opportunities and gain short-term advantage by exploiting the attractive ones. In stable markets, managers rely on complicated strategies built on detailed predictions of the future. But when business is complicated, strategy should be simple.

  15. Two fast and accurate heuristic RBF learning rules for data classification.

    Science.gov (United States)

    Rouhani, Modjtaba; Javan, Dawood S

    2016-03-01

    This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers, and the number of hidden neurons of the network in such a way that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers some of the training data. The termination condition is to cover all training data or to reach the maximum number of neurons. In each step, the center and spread of the new neuron are selected to maximize its coverage. Maximizing the coverage of the neurons leads to a network with fewer neurons and hence lower VC dimension and better generalization. Using the power exponential distribution function as the activation function of hidden neurons, and in light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden layer outputs, which implies that there exist linear output layer weights with zero training error. The proposed methods are applied to some well-known datasets and the simulation results, compared with SVM and other leading RBF learning methods, show their satisfactory and comparable performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
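
    Based on the description above (add neurons until all training data are covered, choosing each centre and spread to maximise coverage), a simplified greedy sketch might look like the following; the specific coverage criterion and the class-boundary radius are assumptions, not the authors' exact heuristics:

    ```python
    import numpy as np

    def greedy_rbf(X, y, max_neurons=20):
        """Greedily place RBF neurons until every sample is covered or the budget
        is exhausted. Each candidate centre is an uncovered sample; its spread is
        the distance to the nearest sample of another class (a simple stand-in for
        'maximise coverage')."""
        centres, spreads, labels = [], [], []
        covered = np.zeros(len(X), dtype=bool)
        while not covered.all() and len(centres) < max_neurons:
            best = None
            for i in np.flatnonzero(~covered):
                d = np.linalg.norm(X - X[i], axis=1)
                other = d[y != y[i]]
                radius = other.min() if other.size else d.max() + 1e-9
                gain = np.sum((~covered) & (y == y[i]) & (d < radius))
                if best is None or gain > best[0]:
                    best = (gain, i, radius)
            _, i, radius = best
            centres.append(X[i]); spreads.append(radius); labels.append(y[i])
            covered |= (np.linalg.norm(X - X[i], axis=1) < radius) & (y == y[i])
        return np.array(centres), np.array(spreads), np.array(labels)

    X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
    y = np.array([0, 0, 1, 1])
    print(greedy_rbf(X, y))
    ```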

  16. 49 CFR 222.41 - How does this rule affect Pre-Rule Quiet Zones and Pre-Rule Partial Quiet Zones?

    Science.gov (United States)

    2010-10-01

    ...-Rule Quiet Zone may be established by automatic approval and remain in effect, subject to § 222.51, if... Zone may be established by automatic approval and remain in effect, subject to § 222.51, if the Pre... 49 Transportation 4 2010-10-01 2010-10-01 false How does this rule affect Pre-Rule Quiet Zones and...

  17. Development of fuzzy algorithm with learning function for nuclear steam generator level control

    International Nuclear Information System (INIS)

    Park, Gee Yong; Seong, Poong Hyun

    1993-01-01

    A fuzzy algorithm with a learning function is applied to steam generator level control in a nuclear power plant. The algorithm can adapt its rule base and membership functions to steam generator level control by using data obtained from the control actions of a skilled operator or of other controllers (e.g., a PID controller). The rule base of the fuzzy controller with learning function is divided into two parts: one part is devoted to level control of the steam generator at low power levels (0 % - 30 % of full power) and the other to level control at high power levels (30 % - 100 % of full power). The response time of steam generator level control in the low power range with this rule base is shown to be shorter than that of a fuzzy controller with direct inference. (Author)
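
    A very small sketch of the structural idea, two rule bases selected by power level with a toy fuzzy inference over the level error; the membership functions, rule outputs, and switching point are invented for illustration:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function with support (a, c) and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Each rule: (membership over level error [m], feedwater valve command in [0, 1])
    low_power_rules = [(lambda e: tri(e, -0.6, -0.3, 0.0), 0.8),   # level low  -> open more
                       (lambda e: tri(e, -0.3, 0.0, 0.3), 0.5),    # level ok   -> hold
                       (lambda e: tri(e, 0.0, 0.3, 0.6), 0.2)]     # level high -> close
    high_power_rules = [(lambda e: tri(e, -0.6, -0.3, 0.0), 0.9),
                        (lambda e: tri(e, -0.3, 0.0, 0.3), 0.6),
                        (lambda e: tri(e, 0.0, 0.3, 0.6), 0.3)]

    def valve_command(level_error, power_fraction):
        """Pick the rule base by power level, then defuzzify by weighted average."""
        rules = low_power_rules if power_fraction < 0.3 else high_power_rules
        fired = [(mu(level_error), out) for mu, out in rules]
        total = sum(w for w, _ in fired)
        return sum(w * out for w, out in fired) / total if total else 0.5

    print(valve_command(-0.2, power_fraction=0.15))
    ```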

  18. Using the Chain Rule as the Key Link in Deriving the General Rules for Differentiation

    Science.gov (United States)

    Sprows, David

    2011-01-01

    The standard approach to the general rules for differentiation is to first derive the power, product, and quotient rules and then derive the chain rule. In this short article we give an approach to these rules which uses the chain rule as the main tool in deriving the power, product, and quotient rules in a manner which is more student-friendly…
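
    One derivation in that spirit, a standard route though not necessarily the one taken in the article, obtains the product rule from the chain rule and the power rule alone:

    ```latex
    % Chain rule + power rule give \frac{d}{dx}\,u(x)^2 = 2\,u\,u'.
    % Writing fg as a difference of squares,
    \[
      fg \;=\; \tfrac14\left[(f+g)^2 - (f-g)^2\right],
    \]
    % and differentiating term by term with the rule above,
    \[
      (fg)' \;=\; \tfrac14\left[\,2(f+g)(f'+g') - 2(f-g)(f'-g')\,\right] \;=\; f'g + fg' .
    \]
    % The quotient rule then follows from f/g = f \cdot g^{-1}, the chain rule
    % applied to g^{-1}, and the product rule just obtained.
    ```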

  19. On-line learning in radial basis functions networks

    OpenAIRE

    Freeman, Jason; Saad, David

    1997-01-01

    An analytic investigation of the average case learning and generalization properties of Radial Basis Function Networks (RBFs) is presented, utilising on-line gradient descent as the learning rule. The analytic method employed allows both the calculation of generalization error and the examination of the internal dynamics of the network. The generalization error and internal dynamics are then used to examine the role of the learning rate and the specialization of the hidden units, which gives ...
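
    A minimal sketch of the setting analysed there, an RBF student trained by per-example (on-line) gradient descent on the squared error; the isotropic Gaussian units, fixed spread, and learning rate are assumptions of the sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def rbf_output(x, centres, weights, sigma=1.0):
        """Network output y = sum_k w_k * exp(-||x - c_k||^2 / (2 sigma^2))."""
        phi = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2 * sigma ** 2))
        return phi @ weights, phi

    def online_step(x, target, centres, weights, sigma=1.0, lr=0.05):
        """One on-line gradient-descent update of output weights and centres."""
        y, phi = rbf_output(x, centres, weights, sigma)
        err = y - target
        grad_w = err * phi                                                      # dE/dw_k
        grad_c = -err * (weights * phi)[:, None] * (centres - x) / sigma ** 2   # dE/dc_k
        return weights - lr * grad_w, centres - lr * grad_c

    # toy teacher-student run on a 1-D regression problem
    centres = rng.standard_normal((4, 1))
    weights = np.zeros(4)
    for _ in range(2000):
        x = rng.uniform(-2, 2, size=(1,))
        weights, centres = online_step(x, np.sin(x[0]), centres, weights)
    print(rbf_output(np.array([0.5]), centres, weights)[0], np.sin(0.5))
    ```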

  20. The chronotron: a neuron that learns to fire temporally precise spike patterns.

    Directory of Open Access Journals (Sweden)

    Răzvan V Florian

    Full Text Available In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons, one that provides high memory capacity (E-learning, and one that has a higher biological plausibility (I-learning. With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.