WorldWideScience

Sample records for self-organizing recurrent neural networks

  1. SORN: a self-organizing recurrent neural network

    Directory of Open Access Journals (Sweden)

    Andreea Lazar

    2009-10-01

    Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space, reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success.
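
The interaction of the three plasticity rules can be sketched in a toy binary recurrent network. This is a hedged caricature, not a reimplementation of the published SORN: it omits the inhibitory population and external input, and all constants (network size, rates) are illustrative.

```python
import numpy as np

# Toy sketch (not the published SORN): binary threshold units with
# STDP, synaptic normalization, and intrinsic plasticity (IP).
rng = np.random.default_rng(0)
N, target_rate = 50, 0.1            # units and desired mean firing rate
eta_stdp, eta_ip = 0.001, 0.01      # illustrative learning rates

W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)  # sparse random weights
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True) + 1e-12            # synaptic normalization
T = rng.random(N) * 0.5                              # firing thresholds
x = (rng.random(N) < target_rate).astype(float)      # initial binary state

for _ in range(200):
    x_new = (W @ x - T > 0).astype(float)            # threshold dynamics
    # STDP: strengthen pre(t) -> post(t+1) pairs, weaken the reverse order,
    # restricted to existing connections.
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new)) * (W > 0)
    W = np.clip(W, 0.0, None)
    W /= W.sum(axis=1, keepdims=True) + 1e-12        # re-normalize inputs
    T += eta_ip * (x_new - target_rate)              # IP: pull rate to target
    x = x_new

print(round(float(x.mean()), 2))                     # mean activity after training
```

Normalization bounds the total input to each unit while IP adjusts thresholds toward the target rate, which is the "healthy regime" idea in miniature.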

  2. Nonlinear dynamics analysis of a self-organizing recurrent neural network: chaos waning.

    Science.gov (United States)

    Eser, Jürgen; Zheng, Pengsheng; Triesch, Jochen

    2014-01-01

    Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks and how they change with network evolution are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, becomes less chaotic during its self-organization, developing into a regime where only a few perturbations become amplified. We also find that due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
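
The perturbation-based characterization of chaos comes down to estimating the maximum Lyapunov exponent from the growth rate of small perturbations. A self-contained illustration on the logistic map (not on a SORN; the map and constants are stand-ins) is:

```python
import math

# Estimate the maximum Lyapunov exponent of the logistic map f(x) = r*x*(1-x)
# by averaging log|f'(x)| along an orbit. At r = 4 the exact value is ln 2.
def lyapunov_logistic(r=4.0, x0=0.2, steps=4000):
    x, total = x0, 0.0
    for _ in range(steps):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # log of local stretching
        x = r * x * (1.0 - x)
    return total / steps

lam = lyapunov_logistic()
print(lam > 0.0)   # positive exponent indicates chaos (theory: ln 2, about 0.693)
```

A negative estimate would indicate the contracting, "chaos waning" regime the abstract describes.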

  3. Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network.

    Science.gov (United States)

    Del Papa, Bruno; Priesemann, Viola; Triesch, Jochen

    2017-01-01

    Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, the onset of external input transiently changes the slope of the avalanche distributions, matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model's performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN's spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
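
The avalanche statistics referred to here are extracted from population activity by a standard procedure: an avalanche is a run of consecutive time bins with nonzero activity, and its size is the total spike count of the run. A minimal sketch on synthetic counts:

```python
# An avalanche is a run of consecutive time bins with nonzero population
# activity; its size is the total number of spikes in the run.
def avalanche_sizes(counts):
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c            # extend the ongoing avalanche
        elif current > 0:
            sizes.append(current)   # a silent bin closes the avalanche
            current = 0
    if current > 0:
        sizes.append(current)       # close an avalanche running to the end
    return sizes

counts = [0, 2, 3, 0, 1, 0, 0, 5, 1, 1, 0]   # synthetic spike counts per bin
print(avalanche_sizes(counts))               # -> [5, 1, 7]
```

In the criticality analyses discussed above, the resulting size distribution is then tested for power-law behavior.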

  4. Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.

    Science.gov (United States)

    Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei

    2016-02-01

    A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.
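
For orientation on the model class involved, here is a plain, fixed-structure RBF network fitted by least squares. The paper's recurrent structure, spiking-based growing/pruning, and NMPC machinery are not reproduced; the sine-wave target and all constants are illustrative.

```python
import numpy as np

# Gaussian RBF features: one column per center.
def rbf_features(x, centers, width=0.5):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

x_train = np.linspace(-3.0, 3.0, 40)
y_train = np.sin(x_train)                    # stand-in nonlinear system output
centers = np.linspace(-3.0, 3.0, 10)         # fixed centers (no growing/pruning)

Phi = rbf_features(x_train, centers)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)   # least-squares readout

y_hat = float(rbf_features(np.array([1.0]), centers) @ w)
print(round(y_hat, 2))                       # close to sin(1.0)
```

In the NMPC setting, such a fitted model supplies the multi-step predictions over which the controller optimizes.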

  5. RM-SORN: a reward-modulated self-organizing recurrent neural network.

    Science.gov (United States)

    Aswolinskiy, Witali; Pipa, Gordon

    2015-01-01

    Neural plasticity plays an important role in learning and memory. Reward-modulation of plasticity offers an explanation for the ability of the brain to adapt its neural activity to achieve a rewarded goal. Here, we define a neural network model that learns through the interaction of Intrinsic Plasticity (IP) and reward-modulated Spike-Timing-Dependent Plasticity (STDP). IP enables the network to explore possible output sequences and STDP, modulated by reward, reinforces the creation of the rewarded output sequences. The model is tested on tasks for prediction, recall, non-linear computation, pattern recognition, and sequence generation. It achieves performance comparable to networks trained with supervised learning, while using simple, biologically motivated plasticity rules, and rewarding strategies. The results confirm the importance of investigating the interaction of several plasticity rules in the context of reward-modulated learning and whether reward-modulated self-organization can explain the amazing capabilities of the brain.
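
The core mechanism, a Hebbian update gated by a scalar reward, can be sketched deterministically. The patterns, rates, threshold, and reward rule below are illustrative, not taken from the paper:

```python
import numpy as np

# Reward-modulated Hebbian sketch: the Hebbian term post*pre is only
# consolidated into the weights after scaling by the reward signal.
w = np.full(4, 0.25)
eta, theta = 0.1, 0.4
A = np.array([1.0, 1.0, 0.0, 0.0])       # rewarded input pattern
B = np.array([0.0, 0.0, 1.0, 1.0])       # punished input pattern

for _ in range(20):
    for pre, reward in ((A, +1.0), (B, -1.0)):
        post = 1.0 if w @ pre > theta else 0.0
        w += eta * reward * post * pre    # Hebbian term gated by reward
        w = np.clip(w, 0.0, 1.0)

print(w.round(2).tolist())   # -> [1.0, 1.0, 0.15, 0.15]
```

Weights carrying the rewarded pattern saturate while those carrying the punished pattern are depressed until the unit stops responding to it, which is the reinforcement effect described above in miniature.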

  6. Lifelong learning of human actions with deep neural network self-organization.

    Science.gov (United States)

    Parisi, German I; Tani, Jun; Weber, Cornelius; Wermter, Stefan

    2017-12-01

    Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning but rather learn a batch of training data with a predefined number of action classes and samples. Thus, there is the need to develop learning systems with the ability to incrementally process available perceptual cues and to adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and the adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art methods for batch learning even when a significant number of sample labels are missing or corrupted during training sessions. Additional experiments show the ability of our model to adapt to non-stationary input, avoiding catastrophic interference. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  7. Self-organized topology of recurrence-based complex networks

    International Nuclear Information System (INIS)

    Yang, Hui; Liu, Gang

    2013-01-01

    With rapid technological advancement, networks are almost everywhere in our daily life. Network theory leads to a new way to investigate the dynamics of complex systems. As a result, many methods have been proposed to construct a network from nonlinear time series, including the partition of state space, visibility graph, nearest neighbors, and recurrence approaches. However, most previous works focus on deriving the adjacency matrix to represent the complex network and extract new network-theoretic measures. Although the adjacency matrix provides connectivity information of nodes and edges, the network geometry can take variable forms. The research objective of this article is to develop a self-organizing approach to derive the steady geometric structure of a network from the adjacency matrix. We simulate the recurrence network as a physical system by treating the edges as springs and the nodes as electrically charged particles. Then, force-directed algorithms are developed to automatically organize the network geometry by minimizing the system energy. Further, a set of experiments was designed to investigate important factors (i.e., dynamical systems, network construction methods, force-model parameter, nonhomogeneous distribution) affecting this self-organizing process. Interestingly, experimental results show that the self-organized geometry recovers the attractor of the dynamical system that produced the adjacency matrix. This research addresses the question "what is the self-organizing geometry of a recurrence network?" and provides a new way to reproduce the attractor or time series from the recurrence plot. As a result, novel network-theoretic measures (e.g., average path length and proximity ratio) can be achieved based on actual node-to-node distances in the self-organized network topology. The paper brings physical models into recurrence analysis and discloses the spatial geometry of recurrence networks.
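
The springs-and-charges idea can be sketched as a damped force-directed iteration on a tiny graph. The constants, the starting layout, and the update scheme are illustrative, not the paper's:

```python
import numpy as np

# Springs on edges pull connected nodes together; a charge-like term pushes
# all node pairs apart. Positions relax toward a low-energy layout.
A = np.array([[0, 1, 0, 1],                 # adjacency matrix of a 4-cycle
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
pos = np.array([[0.1, 0.0], [1.0, 0.2],     # perturbed starting layout
                [0.9, 1.0], [0.0, 0.9]])
k_spring, k_repel, step = 0.1, 0.05, 0.1

for _ in range(300):
    diff = pos[:, None, :] - pos[None, :, :]          # pairwise displacements
    dist = np.linalg.norm(diff, axis=-1) + np.eye(4)  # avoid divide-by-zero
    repel = k_repel * (diff / dist[..., None] ** 3).sum(axis=1)
    attract = -k_spring * (A[..., None] * diff).sum(axis=1)
    pos += step * (repel + attract)                   # damped force step

edge_len = np.linalg.norm(pos[0] - pos[1])   # connected pair
diag_len = np.linalg.norm(pos[0] - pos[2])   # unconnected pair
print(edge_len < diag_len)                   # -> True: neighbors settle closer
```

The 4-cycle relaxes toward a square, so connected nodes end up closer than unconnected ones, which is the geometric self-organization the abstract describes.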

  8. Identification and prediction of dynamic systems using an interactively recurrent self-evolving fuzzy neural network.

    Science.gov (United States)

    Lin, Yang-Yin; Chang, Jyh-Yeong; Lin, Chin-Teng

    2013-02-01

    This paper presents a novel recurrent fuzzy neural network, called an interactively recurrent self-evolving fuzzy neural network (IRSFNN), for prediction and identification of dynamic systems. The recurrent structure in an IRSFNN is formed as external loops and internal feedback by feeding the rule firing strength of each rule to other rules and to itself. The consequent part in the IRSFNN is composed of a Takagi-Sugeno-Kang (TSK) or functional-link-based type. The proposed IRSFNN employs a functional link neural network (FLNN) in the consequent part of the fuzzy rules to promote the mapping ability. Unlike a TSK-type fuzzy neural network, the FLNN in the consequent part is a nonlinear function of the input variables. An IRSFNN's learning starts with an empty rule base, and all of the rules are generated and learned online through simultaneous structure and parameter learning. An online clustering algorithm is effective in generating fuzzy rules. The consequent parameters are updated by a variable-dimensional Kalman filter algorithm. The premise and recurrent parameters are learned through a gradient descent algorithm. We test the IRSFNN on the prediction and identification of dynamic plants and compare it to other well-known recurrent FNNs. The proposed model obtains enhanced performance results.

  9. Indirect adaptive fuzzy wavelet neural network with self-recurrent consequent part for AC servo system.

    Science.gov (United States)

    Hou, Runmin; Wang, Li; Gao, Qiang; Hou, Yuanglong; Wang, Chao

    2017-09-01

    This paper proposes a novel indirect adaptive fuzzy wavelet neural network (IAFWNN) to handle the nonlinearity, wide load variations, time variation, and uncertain disturbances of the AC servo system. In the proposed approach, the self-recurrent wavelet neural network (SRWNN) is employed to construct an adaptive self-recurrent consequent part for each fuzzy rule of the TSK fuzzy model. For the IAFWNN controller, the online learning algorithm is based on the back-propagation (BP) algorithm. Moreover, an improved particle swarm optimization (IPSO) is used to adapt the learning rate. The aid of an adaptive SRWNN identifier offers real-time gradient information to the adaptive fuzzy wavelet neural controller to overcome the impact of parameter variations, load disturbances, and other uncertainties effectively, and yields good dynamic performance. The asymptotic stability of the system is guaranteed by using the Lyapunov method. The results of the simulation and the prototype test prove that the proposed approach is effective and suitable. Copyright © 2017. Published by Elsevier Ltd.

  10. Self-organized critical neural networks

    International Nuclear Information System (INIS)

    Bornholdt, Stefan; Roehl, Torsten

    2003-01-01

    A mechanism for self-organization of the degree of connectivity in model neural networks is studied. Network connectivity is regulated locally on the basis of an order parameter of the global dynamics, which is estimated from an observable at the single synapse level. This principle is studied in a two-dimensional neural network with randomly wired asymmetric weights. In this class of networks, network connectivity is closely related to a phase transition between ordered and disordered dynamics. A slow topology change is imposed on the network through a local rewiring rule motivated by activity-dependent synaptic development: neighbor neurons whose activity is correlated on average develop a new connection, while uncorrelated neighbors tend to disconnect. As a result, robust self-organization of the network towards the order-disorder transition occurs. Convergence is independent of initial conditions, robust against thermal noise, and does not require fine tuning of parameters.
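
The correlation-based rewiring rule can be illustrated with synthetic activity traces. The traces and the correlation threshold below are illustrative stand-ins, not the paper's single-synapse order-parameter estimate:

```python
import numpy as np

# Units 0 and 1 share a common signal (correlated); unit 2 is independent.
rng = np.random.default_rng(4)
T = 500
base = rng.standard_normal(T)
act = np.stack([base + 0.1 * rng.standard_normal(T),
                base + 0.1 * rng.standard_normal(T),
                rng.standard_normal(T)])

C = np.corrcoef(act)               # average pairwise activity correlations
adj = (C > 0.5).astype(int)        # correlated neighbors gain a connection
np.fill_diagonal(adj, 0)           # no self-connections
print(adj.tolist())                # -> [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
```

Applied repeatedly while the dynamics run, such a rule adds links between co-active neighbors and prunes the rest, which is the topology feedback loop described above.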

  11. Pattern classification and recognition of invertebrate functional groups using self-organizing neural networks.

    Science.gov (United States)

    Zhang, WenJun

    2007-07-01

    Self-organizing neural networks can be used to mimic non-linear systems. The main objective of this study is to perform pattern classification and recognition on sampling information using two self-organizing neural network models. Invertebrate functional groups sampled in the irrigated rice field were classified and recognized using one-dimensional self-organizing map and self-organizing competitive learning neural networks. Comparisons between neural network models, distance (similarity) measures, and numbers of neurons were conducted. The results showed that self-organizing map and self-organizing competitive learning neural network models were effective in pattern classification and recognition of sampling information. Overall, the performance of the one-dimensional self-organizing map neural network was better than that of the self-organizing competitive learning neural network. The number of neurons could determine the number of classes in the classification. Different neural network models with various distance (similarity) measures yielded similar classifications. Some differences, dependent upon the specific network structure, would be found. The pattern of an unrecognized functional group was recognized with the self-organizing neural network. A relatively consistent classification indicated that the following invertebrate functional groups, terrestrial blood sucker; terrestrial flyer; tourist (nonpredatory species with no known functional role other than as prey in ecosystem); gall former; collector (gatherer, deposit feeder); predator and parasitoid; leaf miner; idiobiont (acarine ectoparasitoid), were classified into the same group, and the following invertebrate functional groups, external plant feeder; terrestrial crawler, walker, jumper or hunter; neustonic (water surface) swimmer (semi-aquatic), were classified into another group. It was concluded that reliable conclusions could be drawn from comparisons of different neural network models that use different distance (similarity) measures.
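
A compact one-dimensional SOM of the kind used in such studies can be sketched on synthetic 2-D data. The data, map size, learning-rate schedule, and neighborhood function are illustrative, not the study's settings:

```python
import numpy as np

# Two synthetic 2-D clusters stand in for the sampling data.
rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(0.0, 0.1, (20, 2)),
                       rng.normal(1.0, 0.1, (20, 2))])

n_nodes = 4                                   # nodes on a 1-D chain
weights = rng.random((n_nodes, 2))

for epoch in range(50):
    lr = 0.5 * (1.0 - epoch / 50)             # decaying learning rate
    for x in rng.permutation(data):
        bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
        for j in range(n_nodes):
            h = np.exp(-abs(j - bmu))         # 1-D neighborhood function
            weights[j] += lr * h * (x - weights[j])

# Quantization error: mean distance from each sample to its best node.
qe = float(np.mean([np.min(np.linalg.norm(weights - x, axis=1)) for x in data]))
print(round(qe, 2))                           # small: nodes cover the clusters
```

Samples are then classified by the index of their best-matching node, so the number of nodes bounds the number of classes, as noted in the abstract.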

  12. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  13. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and design its structure and learning algorithm. The multilayer feedforward neural network, the diagonal recurrent neural network, and the chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks. (interdisciplinary physics and related areas of science and technology)

  14. Model for a flexible motor memory based on a self-active recurrent neural network.

    Science.gov (United States)

    Boström, Kim Joris; Wagner, Heiko; Prieske, Markus; de Lussanet, Marc

    2013-10-01

    Using a recent recurrent network architecture based on the reservoir computing approach, we propose and numerically simulate a model that focuses on a flexible motor memory for the storage of elementary movement patterns in the synaptic weights of a neural network, so that the patterns can be retrieved at any time by simple static commands. The resulting motor memory is flexible in that it is capable of continuously modulating the stored patterns. The modulation consists of an approximately linear inter- and extrapolation, generating a large space of possible movements that have not been learned before. A recurrent network of a thousand neurons is trained in a manner that corresponds to a realistic exercising scenario, with experimentally measured muscular activations and with kinetic data representing proprioceptive feedback. The network is "self-active" in that it maintains a recurrent flow of activation even in the absence of input, a feature that resembles the "resting-state activity" found in the human and animal brain. The model involves the concept of "neural outsourcing", which amounts to the permanent shifting of computational load from higher- to lower-level neural structures and might help to explain why humans are able to execute learned skills in a fluent and flexible manner without the need for attention to the details of the movement. Copyright © 2013 Elsevier B.V. All rights reserved.
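
The reservoir computing setup the model builds on can be sketched as an echo state network: a fixed random recurrent network driven by input, with only a linear readout trained (here by ridge regression). The sizes, the spectral-radius heuristic, and the sine-prediction task are illustrative:

```python
import numpy as np

# Echo state network sketch: fixed random reservoir, trained linear readout.
rng = np.random.default_rng(6)
n_res, T = 100, 500
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

u = np.sin(0.1 * np.arange(T + 1))                # input signal
x = np.zeros(n_res)
states = []
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])        # reservoir update
    states.append(x.copy())
states = np.array(states)

# Train the readout to predict the next input value (ridge regression),
# discarding an initial transient.
X, y = states[100:], u[101:T + 1]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
err = float(np.sqrt(np.mean((X @ w_out - y) ** 2)))
print(round(err, 4))                              # small prediction error
```

Because only the readout is trained, several movement patterns can share one reservoir and be selected by static command inputs, which is the storage scheme the abstract exploits.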

  15. Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.

    Science.gov (United States)

    Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng

    2016-02-01

    This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.

  16. Self-Organizing Neural Circuits for Sensory-Guided Motor Control

    National Research Council Canada - National Science Library

    Grossberg, Stephen

    1999-01-01

    The reported projects developed mathematical models to explain how self-organizing neural circuits that operate under continuous or intermittent sensory guidance achieve flexible and accurate control of human movement...

  17. Application of self-organizing competition artificial neural network to logging data explanation of sandstone-hosted uranium deposits

    International Nuclear Information System (INIS)

    Xu Jianguo; Xu Xianli; Wang Weiguo

    2008-01-01

    The article describes the model construction of the self-organizing competition artificial neural network, its principle, and the automatic recognition process of borehole lithology in detail, and then proves the efficiency of the neural network model for automatically recognizing borehole lithology with some cases. The self-organizing competition artificial neural network has the ability of self-organization and self-adjustment and a high tolerance for errors. Compared with the BP algorithm, it requires less computation and converges more rapidly. Furthermore, it can automatically determine the category without known sample information. Trial results, based on contrasting the identification results of borehole lithology with geological documentation, indicate that the self-organizing artificial neural network can be well applied to automatically categorizing borehole lithology during the logging data explanation of sandstone-hosted uranium deposits. (authors)

  18. Self: an adaptive pressure arising from self-organization, chaotic dynamics, and neural Darwinism.

    Science.gov (United States)

    Bruzzo, Angela Alessia; Vimal, Ram Lakhan Pandey

    2007-12-01

    In this article, we establish a model to delineate the emergence of "self" in the brain making recourse to the theory of chaos. Self is considered as the subjective experience of a subject. As essential ingredients of subjective experiences, our model includes wakefulness, re-entry, attention, memory, and proto-experiences. The stability as stated by chaos theory can potentially describe the non-linear function of "self" as sensitive to initial conditions and can characterize it as underlying order from apparently random signals. Self-similarity is discussed as a latent menace of a pathological confusion between "self" and "others". Our test hypothesis is that (1) consciousness might have emerged and evolved from a primordial potential or proto-experience in matter, such as the physical attractions and repulsions experienced by electrons, and (2) "self" arises from chaotic dynamics, self-organization and selective mechanisms during ontogenesis, while emerging post-ontogenically as an adaptive pressure driven by both volume and synaptic-neural transmission and influencing the functional connectivity of neural nets (structure).

  19. Adaptive complementary fuzzy self-recurrent wavelet neural network controller for the electric load simulator system

    Directory of Open Access Journals (Sweden)

    Wang Chao

    2016-03-01

    Due to the complexities existing in the electric load simulator, this article develops a high-performance nonlinear adaptive controller to improve the torque tracking performance of the electric load simulator, which mainly consists of an adaptive fuzzy self-recurrent wavelet neural network controller with variable structure (VSFSWC) and a complementary controller. The VSFSWC is clearly and easily used for real-time systems and greatly improves the convergence rate and control precision. The complementary controller is designed to eliminate the effect of the approximation error between the proposed neural network controller and the ideal feedback controller without chattering phenomena. Moreover, adaptive learning laws are derived to guarantee the system stability in the sense of the Lyapunov theory. Finally, the hardware-in-the-loop simulations are carried out to verify the feasibility and effectiveness of the proposed algorithms in different working styles.

  20. Image Fusion Based on the Self-Organizing Feature Map Neural Networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhaoli; SUN Shenghe

    2001-01-01

    This paper presents a new image data fusion scheme based on the self-organizing feature map (SOFM) neural networks. The scheme consists of three steps: (1) pre-processing of the images, where weighted median filtering removes part of the noise components corrupting the image, (2) pixel clustering for each image using two-dimensional self-organizing feature map neural networks, and (3) fusion of the images obtained in Step (2) utilizing fuzzy logic, which suppresses the residual noise components and thus further improves the image quality. It proves that such a three-step combination offers an impressive effectiveness and performance improvement, which is confirmed by simulations involving three image sensors (each of which has a different noise structure).

  21. Morphological self-organizing feature map neural network with applications to automatic target recognition

    Science.gov (United States)

    Zhang, Shijun; Jing, Zhongliang; Li, Jianxun

    2005-01-01

    The rotation-invariant feature of the target is obtained using the multi-direction feature extraction property of the steerable filter. Combining the morphological top-hat transform with the self-organizing feature map neural network, the adaptive topological region is selected. Using the erosion operation, shrinkage of the topological region is achieved. The steerable-filter-based morphological self-organizing feature map neural network is applied to automatic target recognition of binary standard patterns and real-world infrared sequence images. Compared with the Hamming network and morphological shared-weight networks, the proposed method achieves a higher correct recognition rate, robust adaptability, quick training, and better generalization.

  22. Collaborative Recurrent Neural Networks for Dynamic Recommender Systems

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:366–381, 2016 (ACML 2016). Keywords: Recurrent Neural Network, Recommender System, Neural Language Model, Collaborative Filtering.

  23. Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations

    Directory of Open Access Journals (Sweden)

    Sheng-Jun Wang

    2011-06-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. They are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps, to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are in the state of self-organized criticality. We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. It was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We find that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and self-organized criticality, which are not present in the respective random networks. The underlying mechanism is that each dense module cannot sustain activity on its own, but displays self-organized criticality in the presence of weak perturbations. The hierarchical modular networks provide the coupling among subsystems with self-organized criticality. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of the critical state and the predictability and timing of oscillations for efficient information processing.

  4. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    Science.gov (United States)

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically convergent to the noise-constrained estimate. Because the noise-constrained estimate has a robust performance against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, having a low-dimensional model feature, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm achieves a good performance with fast computation and noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
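The abstract's core pipeline, an AR-modeled signal recovered by Kalman filtering, can be illustrated with a minimal scalar sketch. This is not the paper's noise-constrained estimator: the AR(1) coefficient, noise variances, and signal length below are invented for illustration, and the AR parameters are assumed known rather than estimated by a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) "speech" model: s[t] = a*s[t-1] + w[t], observation y[t] = s[t] + v[t]
a, q, r = 0.95, 0.1, 1.0           # AR coefficient, process var, observation var
n = 2000
s = np.zeros(n)
for t in range(1, n):
    s[t] = a * s[t - 1] + rng.normal(0, np.sqrt(q))
y = s + rng.normal(0, np.sqrt(r), n)

# Scalar Kalman filter using the (here: true) model parameters
s_hat = np.zeros(n)
p = 1.0                            # posterior variance
for t in range(1, n):
    s_pred = a * s_hat[t - 1]      # predict
    p_pred = a * a * p + q
    k = p_pred / (p_pred + r)      # Kalman gain
    s_hat[t] = s_pred + k * (y[t] - s_pred)   # update
    p = (1 - k) * p_pred

mse_raw = np.mean((y - s) ** 2)
mse_kf = np.mean((s_hat - s) ** 2)
print(mse_raw, mse_kf)             # filtering should reduce the error
```

In the paper's setting the AR parameters would come from the recurrent network's noise-constrained estimate rather than being known a priori.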

  5. Character recognition from trajectory by recurrent spiking neural networks.

    Science.gov (United States)

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

    Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have proven efficient on time series data. However, how to exploit recurrence to improve the performance of spiking neural networks remains an open problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed in which different recurrent layers use input streams over varying time ranges. This improves the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from the University of Edinburgh. The results show that our method achieves a higher average recognition accuracy than existing methods.

  6. Region stability analysis and tracking control of memristive recurrent neural network.

    Science.gov (United States)

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

    The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. Firstly, simulations show that the memristive recurrent neural network has more complex dynamics than the traditional recurrent neural network. Then it is derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub neural networks which do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub neural network. Finally, two numerical examples are given to verify the validity of our result. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Hysteretic recurrent neural networks: a tool for modeling hysteretic materials and systems

    International Nuclear Information System (INIS)

    Veeramani, Arun S; Crews, John H; Buckner, Gregory D

    2009-01-01

    This paper introduces a novel recurrent neural network, the hysteretic recurrent neural network (HRNN), that is ideally suited to modeling hysteretic materials and systems. This network incorporates a hysteretic neuron consisting of conjoined sigmoid activation functions. Although similar hysteretic neurons have been explored previously, the HRNN is unique in its utilization of simple recurrence to 'self-select' relevant activation functions. Furthermore, training is facilitated by placing the network weights on the output side, allowing standard backpropagation of error training algorithms to be used. We present two- and three-phase versions of the HRNN for modeling hysteretic materials with distinct phases. These models are experimentally validated using data collected from shape memory alloys and ferromagnetic materials. The results demonstrate the HRNN's ability to accurately generalize hysteretic behavior with a relatively small number of neurons. Additional benefits lie in the network's ability to identify statistical information concerning the macroscopic material by analyzing the weights of the individual neurons

  8. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractor dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  9. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely recurrent neural network with rprop update algorithm and is applied for wave forecasting. Measured ocean waves off...

  10. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  11. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    1995-01-01

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  12. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
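A minimal numerical sketch of this style of network: a matrix-valued differential equation whose state converges to an inverse, integrated by Euler steps from a zero initial state. The dynamics below are the simple gradient flow for the ordinary inverse, not the paper's outer-inverse networks with prescribed range and null space; the test matrix, gain, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
I = np.eye(n)

# Matrix-valued gradient-flow dynamics dX/dt = -gamma * A^T (A X - I),
# Euler-discretized; X(0) = 0 and X converges to A^{-1} for nonsingular A.
X = np.zeros((n, n))
gamma, dt, steps = 1.0, 1e-3, 50000
for _ in range(steps):
    X -= dt * gamma * A.T @ (A @ X - I)

print(np.max(np.abs(A @ X - I)))   # residual should be ~0
```

Stability of the discretization requires dt * gamma below 2 divided by the largest eigenvalue of A^T A, which holds comfortably for this test matrix.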

  13. Invertebrate diversity classification using self-organizing map neural network: with some special topological functions

    Directory of Open Access Journals (Sweden)

    WenJun Zhang

    2014-06-01

    Full Text Available In the present study we used a self-organizing map (SOM) neural network to conduct unsupervised clustering of invertebrate orders in a rice field. Four topological functions, i.e., cossintopf, sincostopf, acossintopf, and expsintopf, established on the template in the Matlab toolbox, were used in SOM neural network learning. Results showed that the clusters differed when different topological functions were used, because different topological functions generate different spatial structures of neurons in the neural network. We may choose among these functions and results based on comparison with the practical situation.
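For readers unfamiliar with SOM clustering, here is a toy sketch of the underlying algorithm: best-matching-unit search plus a decaying Gaussian neighborhood update. It uses a made-up 2-D data set and a standard Gaussian topology rather than the special topological functions (cossintopf, etc.) studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated 2-D clusters standing in for order feature vectors
data = np.vstack([rng.normal(0.0, 0.05, size=(50, 2)),
                  rng.normal(1.0, 0.05, size=(50, 2))])

# 1-D SOM with 4 units on a line; Gaussian neighborhood shrinking over time
weights = rng.uniform(0, 1, size=(4, 2))
grid = np.arange(4)

for epoch in range(200):
    lr = 0.5 * (1 - epoch / 200)            # decaying learning rate
    sigma = 2.0 * (1 - epoch / 200) + 0.1   # decaying neighborhood width
    for x in rng.permutation(data):
        bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))  # best matching unit
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood
        weights += lr * h[:, None] * (x - weights)

# After training, the two clusters should map to different units
bmu_of = lambda x: np.argmin(np.sum((weights - x) ** 2, axis=1))
print(bmu_of(np.array([0.0, 0.0])), bmu_of(np.array([1.0, 1.0])))
```

Swapping the Gaussian `h` for other neighborhood (topological) functions is exactly the degree of freedom the paper explores.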

  14. Identification-based chaos control via backstepping design using self-organizing fuzzy neural networks

    International Nuclear Information System (INIS)

    Peng Yafu; Hsu, C.-F.

    2009-01-01

    This paper proposes an identification-based adaptive backstepping control (IABC) for the chaotic systems. The IABC system is comprised of a neural backstepping controller and a robust compensation controller. The neural backstepping controller containing a self-organizing fuzzy neural network (SOFNN) identifier is the principal controller, and the robust compensation controller is designed to dispel the effect of minimum approximation error introduced by the SOFNN identifier. The SOFNN identifier is used to online estimate the chaotic dynamic function with structure and parameter learning phases of fuzzy neural network. The structure learning phase consists of the growing and pruning of fuzzy rules; thus the SOFNN identifier can avoid the time-consuming trial-and-error tuning procedure for determining the neural structure of fuzzy neural network. The parameter learning phase adjusts the interconnection weights of neural network to achieve favorable approximation performance. Finally, simulation results verify that the proposed IABC can achieve favorable tracking performance.

  15. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    A hybrid of the two, called the Elman-Jordan (or multi-recurrent) neural network, is also used. In this study, we evaluated the performance of these neural networks on three established benchmark time series prediction problems. Results from the experiments showed that the Jordan neural network performed significantly ...

  16. Predicting recurrent aphthous ulceration using genetic algorithms-optimized neural networks

    Directory of Open Access Journals (Sweden)

    Najla S Dar-Odeh

    2010-05-01

    Full Text Available Najla S Dar-Odeh1, Othman M Alsmadi2, Faris Bakri3, Zaer Abu-Hammour2, Asem A Shehabi3, Mahmoud K Al-Omiri1, Shatha M K Abu-Hammad4, Hamzeh Al-Mashni4, Mohammad B Saeed4, Wael Muqbil4, Osama A Abu-Hammad1. 1Faculty of Dentistry, 2Faculty of Engineering and Technology, 3Faculty of Medicine, University of Jordan, Amman, Jordan; 4Dental Department, University of Jordan Hospital, Amman, Jordan.
    Objective: To construct and optimize a neural network capable of predicting the occurrence of recurrent aphthous ulceration (RAU) based on a set of appropriate input data.
    Participants and methods: Artificial neural network (ANN) software employing genetic algorithms to optimize the architecture of the neural networks was used. Input and output data of 86 participants (predisposing factors and status of the participants with regard to recurrent aphthous ulceration) were used to construct and train the neural networks. The optimized neural networks were then tested using untrained data from a further 10 participants.
    Results: The optimized neural network that produced the most accurate predictions for the presence or absence of recurrent aphthous ulceration was found to employ: gender, hematological (with or without ferritin) and mycological data of the participants, frequency of tooth brushing, and consumption of vegetables and fruits.
    Conclusions: Factors appearing to be related to recurrent aphthous ulceration and appropriate for use as input data to construct ANNs that predict recurrent aphthous ulceration were found to include the following: gender, hemoglobin, serum vitamin B12, serum ferritin, red cell folate, salivary candidal colony count, frequency of tooth brushing, and the number of fruits or vegetables consumed daily.
    Keywords: artificial neural networks, recurrent, aphthous ulceration, ulcer

  17. Diagonal recurrent neural network based adaptive control of nonlinear dynamical systems using Lyapunov stability criterion.

    Science.gov (United States)

    Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P

    2017-03-01

    In this paper adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of plants obtained with the DRNN are compared with those obtained when a multi-layer feedforward neural network (MLFFNN) is used as a controller. In example 4, the FCRNN is also investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. Four simulation examples, including a one-link robotic manipulator and an inverted pendulum, are considered, on which the proposed controller is applied. The results so obtained show the superiority of the DRNN over the MLFFNN as a controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
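The defining feature of a DRNN, self-recurrent hidden neurons with no cross-neuron feedback, can be sketched as a forward pass. This toy (random weights, invented sizes) shows only the network structure, not the paper's Lyapunov-based update rules or the control loop.

```python
import numpy as np

def drnn_forward(x_seq, W_in, w_d, W_out):
    """Diagonal RNN: each hidden neuron feeds back only onto itself,
    h[t] = tanh(W_in @ x[t] + w_d * h[t-1]) with a diagonal (vector) w_d."""
    h = np.zeros(len(w_d))
    outputs = []
    for x in x_seq:
        h = np.tanh(W_in @ x + w_d * h)   # elementwise self-recurrence
        outputs.append(W_out @ h)
    return np.array(outputs)

rng = np.random.default_rng(3)
W_in = rng.normal(size=(5, 2))
w_d = rng.uniform(-0.5, 0.5, size=5)   # one self-loop weight per hidden unit
W_out = rng.normal(size=(1, 5))
x_seq = rng.normal(size=(10, 2))

y = drnn_forward(x_seq, W_in, w_d, W_out)
print(y.shape)   # (10, 1)
```

Replacing the vector `w_d` with a full matrix would recover the FCRNN the paper compares against; the diagonal restriction is what keeps the parameter count and the stability analysis tractable.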

  18. Solving differential equations with unknown constitutive relations as recurrent neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.; Tartakovsky, Alexandre M.

    2017-12-08

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that the measurements (time series) of state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from this data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fed-batch bioreactor simulation problem. Use of techniques from the recent deep learning literature enables training of functions with behavior manifesting over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose, and require modified training strategies.
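The idea of folding a discretized ODE into a recurrent training problem can be shown in miniature. The sketch below fits a single unknown rate constant in dx/dt = -k x by gradient descent on the one-step Euler prediction error; the paper learns a full functional form with TensorFlow, so everything here (the linear rate law, step sizes, iteration counts) is an illustrative assumption.

```python
import numpy as np

# Generate "measurements" from dx/dt = -k_true * x, Euler-discretized
k_true, dt, n = 0.5, 0.01, 500
x = np.empty(n)
x[0] = 1.0
for t in range(n - 1):
    x[t + 1] = x[t] - dt * k_true * x[t]

# Treat the discretized ODE as a one-parameter recurrent cell
#   x_hat[t+1] = x[t] - dt * k * x[t]
# and fit k by gradient descent on the one-step prediction error.
k = 0.0
lr = 1.0
for _ in range(2000):
    pred = x[:-1] - dt * k * x[:-1]
    grad = np.sum(2 * (pred - x[1:]) * (-dt * x[:-1]))
    k -= lr * grad

print(k)   # recovers the true rate constant 0.5
```

The paper's approach replaces the scalar `k` with a neural network evaluated inside each unrolled Euler step, so the same loss is backpropagated through thousands of steps instead of one parameter.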

  19. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

    OpenAIRE

    Zhang, Zewang; Sun, Zheng; Liu, Jiaqi; Chen, Jingwen; Huo, Zhao; Zhang, Xiao

    2016-01-01

    A deep learning approach has been widely applied in sequence modeling problems. In terms of automatic speech recognition (ASR), its performance has been significantly improved by larger speech corpora and deeper neural networks. In particular, recurrent neural networks and deep convolutional neural networks have been applied in ASR successfully. Given the arising problem of training speed, we build a novel deep recurrent convolutional network for acoustic modeling and then apply deep resid...

  20. Sensitivity analysis of linear programming problem through a recurrent neural network

    Science.gov (United States)

    Das, Raja

    2017-11-01

    In this paper we study a recurrent neural network for solving linear programming problems. An algorithm is presented to achieve optimality in both accuracy and computational effort. We investigate the sensitivity analysis of the linear programming problem through the neural network. A detailed example is also presented to demonstrate the performance of the recurrent neural network.

  1. Effects of Some Neurobiological Factors in a Self-organized Critical Model Based on Neural Networks

    International Nuclear Information System (INIS)

    Zhou Liming; Zhang Yingyue; Chen Tianlun

    2005-01-01

    Based on an integrate-and-fire mechanism, we investigate the effects of changing the efficacy of the synapse, the transmission time delay, and the relative refractory period on the self-organized criticality in our neural network model.

  2. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

    Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle stage, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...

  3. Global dissipativity of continuous-time recurrent neural networks with time delay

    International Nuclear Information System (INIS)

    Liao Xiaoxin; Wang Jun

    2003-01-01

    This paper addresses the global dissipativity of a general class of continuous-time recurrent neural networks. First, the concepts of global dissipativity and global exponential dissipativity are defined and elaborated. Next, the sets of global dissipativity and global exponential dissipativity are characterized using the parameters of the recurrent neural network models. In particular, it is shown that the Hopfield network and cellular neural networks with or without time delays are dissipative systems.

  4. Recurrent Neural Network For Forecasting Time Series With Long Memory Pattern

    Science.gov (United States)

    Walid; Alamsyah

    2017-04-01

    The recurrent neural network (RNN), one of the hybrid models often used to predict and estimate issues related to electricity, can be used to describe the cause of the swelling of the electrical load experienced by PLN. This research develops RNN forecasting procedures for time series with long-memory patterns, the application being the national electrical load, which of course has a trend different from the electrical load conditions of other countries. The research produces a forecasting algorithm for time series with a long-memory pattern using E-RNN, hereafter referred to as the Fractional Integrated Recurrent Neural Network (FIRNN). The prediction results for long-memory time series using the FIRNN model showed that the model with the data differenced into the range [-1,1] and the FIRNN(24,6,1) architecture provides the smallest MSE value, which is 0.00149684.
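Fractional integration, the long-memory ingredient referenced by the FIRNN name, is commonly implemented by applying the binomial expansion of (1 - B)^d to the series before modeling. The helper below is a generic sketch of that preprocessing step under this standard convention, not the paper's E-RNN procedure.

```python
import numpy as np

def fracdiff_weights(d, n):
    """Binomial-expansion weights of the fractional difference (1 - B)^d:
    w[0] = 1, w[k] = w[k-1] * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def fracdiff(series, d):
    """Apply (1 - B)^d to a series (expanding window at the start)."""
    w = fracdiff_weights(d, len(series))
    return np.array([np.dot(w[:t + 1], series[t::-1])
                     for t in range(len(series))])

print(fracdiff_weights(1.0, 4))    # [1, -1, 0, 0]: ordinary differencing
print(fracdiff_weights(0.4, 4))    # slowly decaying long-memory weights
```

For integer d = 1 the weights reduce to ordinary differencing, while fractional d between 0 and 1 yields hyperbolically decaying weights, which is what lets the differenced series retain long memory.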

  5. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  6. A novel recurrent neural network with finite-time convergence for linear programming.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
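As a rough illustration of neural-dynamics approaches to linear programming (not the finite-time gradient network of this letter), one can Euler-integrate a projected primal-dual flow with an augmented-Lagrangian damping term; the tiny LP and all step-size constants below are invented for the sketch.

```python
import numpy as np

# LP: minimize c^T x  subject to  A x = b, x >= 0
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Continuous-time primal-dual dynamics, Euler-discretized:
#   dx/dt   = -(c + A^T lam + rho * A^T (A x - b)), projected onto x >= 0
#   dlam/dt =  A x - b
x = np.zeros(2)
lam = np.zeros(1)
dt, rho = 0.01, 1.0
for _ in range(100000):
    x = np.maximum(0.0, x - dt * (c + A.T @ lam + rho * A.T @ (A @ x - b)))
    lam = lam + dt * (A @ x - b)

print(x)   # converges to the optimal vertex [1, 0]
```

The augmented term rho * A^T (A x - b) damps the oscillation that a plain primal-dual (saddle-point) flow would exhibit on an LP; the letter's network instead achieves finite-time convergence, which this asymptotic sketch does not.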

  7. Application of a Self-recurrent Wavelet Neural Network in the Modeling and Control of an AC Servo System

    Directory of Open Access Journals (Sweden)

    Run Min HOU

    2014-05-01

    Full Text Available To control the nonlinearity, widespread load variations, and time-varying characteristics of a high-power AC servo system, modeling and control techniques are studied here. A self-recurrent wavelet neural network (SRWNN) modeling scheme is proposed, which successfully addresses the issue of the traditional wavelet neural network easily falling into a local optimum, and significantly improves the network's approximation capability and convergence rate. A control scheme for the SRWNN based on fuzzy compensation is then presented. Gradient information is provided in real time to the controller by an SRWNN identifier, so as to ensure that the learning and adjusting functions of the SRWNN controller operate well, and fuzzy compensation control is applied to improve the rapidity and accuracy of the entire system. The Lyapunov function is then utilized to judge the stability of the system. Experimental analysis and comparisons with other modeling and control methods clearly show that the proposed modeling and control schemes are effective.

  8. Self-organized neural network for the quality control of 12-lead ECG signals

    International Nuclear Information System (INIS)

    Chen, Yun; Yang, Hui

    2012-01-01

    Telemedicine is very important for the timely delivery of health care to cardiovascular patients, especially those who live in the rural areas of developing countries. However, there are a number of uncertainty factors inherent to the mobile-phone-based recording of electrocardiogram (ECG) signals such as personnel with minimal training and other extraneous noises. PhysioNet organized a challenge in 2011 to develop efficient algorithms that can assess the ECG signal quality in telemedicine settings. This paper presents our efforts in this challenge to integrate multiscale recurrence analysis with a self-organizing map for controlling the ECG signal quality. As opposed to directly evaluating the 12-lead ECG, we utilize an information-preserving transform, i.e. Dower transform, to derive the 3-lead vectorcardiogram (VCG) from the 12-lead ECG in the first place. Secondly, we delineate the nonlinear and nonstationary characteristics underlying the 3-lead VCG signals into multiple time-frequency scales. Furthermore, a self-organizing map is trained, in both supervised and unsupervised ways, to identify the correlations between signal quality and multiscale recurrence features. The efficacy and robustness of this approach are validated using real-world ECG recordings available from PhysioNet. The average performance was demonstrated to be 95.25% for the training dataset and 90.0% for the independent test dataset with unknown labels. (paper)

  9. Noise-enhanced categorization in a recurrently reconnected neural network

    International Nuclear Information System (INIS)

    Monterola, Christopher; Zapotocky, Martin

    2005-01-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence permits a significant extension and homogenization of the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with the optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails.

  10. Noise-enhanced categorization in a recurrently reconnected neural network

    Science.gov (United States)

    Monterola, Christopher; Zapotocky, Martin

    2005-03-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence permits a significant extension and homogenization of the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with the optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails.
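The stochastic-resonance signature described here, a subthreshold stimulus becoming detectable only at intermediate noise levels, is easy to reproduce with a single threshold unit. The sketch below is a generic demonstration with invented signal and threshold values, not the paper's recurrent network.

```python
import numpy as np

rng = np.random.default_rng(4)

def detection_gain(sigma, n=200000, signal=0.5, threshold=1.0):
    """Hit rate minus false-alarm rate for a simple threshold unit:
    the unit 'fires' when its input plus Gaussian noise exceeds threshold."""
    noise = rng.normal(0.0, sigma, n)
    hits = np.mean(signal + noise > threshold)     # signal present
    false_alarms = np.mean(noise > threshold)      # signal absent
    return hits - false_alarms

# The subthreshold signal (0.5 < 1.0) is invisible without noise,
# detectable at moderate noise, and washed out again at high noise.
g_low, g_mid, g_high = detection_gain(0.01), detection_gain(0.3), detection_gain(5.0)
print(g_low, g_mid, g_high)
```

The non-monotone dependence on the noise magnitude, with an optimum well above the stimulus strength, mirrors the behavior the abstract reports for the reconnected recurrent network.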

  11. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models, also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.

  12. Development of objective flow regime identification method using self-organizing neural network

    International Nuclear Information System (INIS)

    Lee, Jae Young; Kim, Nam Seok; Kwak, Nam Yee

    2004-01-01

    Two-phase flow shows various flow patterns according to the amount of void and its velocity relative to the liquid flow. This variation directly affects the interfacial transfer, which is the key factor in the design or analysis of phase-change systems. In particular, the safety analysis of nuclear power plants has been performed with numerical codes furnished with constitutive relations that depend strongly on the flow regimes. Heavy efforts have been focused on identifying the flow regime, and at this moment we stand on a relatively stable engineering background compared to other research fields. However, the issues related to objectivity and transient flow regimes are still open to study. Lee et al. and Ishii developed a method for objective and instantaneous flow regime identification based on a neural network and a new index, the probability distribution of the flow regime, which allows identification from just one second of observation. In the present paper, we developed a self-organizing neural network as a more objective approach to this problem. Kohonen's Self-Organizing Map (SOM) has been used for clustering, visualization, and abstraction. The SOM is trained through unsupervised competitive learning using a 'winner takes all' policy; its unsupervised training character therefore removes possible interference by the regime developer in the neural network training. After developing the computer code, we evaluated its performance on vertically upward two-phase flow in pipes of 25.4 and 50.4 mm I.D. The sensitivity of the flow regime identification to the number of clusters was also examined.

  13. Self-organizing neural networks for automatic detection and classification of contrast-enhancing lesions in dynamic MR-mammography

    International Nuclear Information System (INIS)

    Vomweg, T.W.; Teifke, A.; Kauczor, H.U.; Achenbach, T.; Rieker, O.; Schreiber, W.G.; Heitmann, K.R.; Beier, T.; Thelen, M.

    2005-01-01

    Purpose: Investigation and statistical evaluation of 'Self-Organizing Maps', a special type of neural network in the field of artificial intelligence, for classifying contrast-enhancing lesions in dynamic MR-mammography. Material and Methods: 176 investigations with proven histology after core biopsy or operation were randomly divided into two groups. Several Self-Organizing Maps were trained on investigations from the first group to detect and classify contrast-enhancing lesions in dynamic MR-mammography. Each single pixel's signal/time curve of all patients within the second group was analyzed by the Self-Organizing Maps. The likelihood of malignancy was visualized by color overlays on the MR images. Finally, the assessment of contrast-enhancing lesions by each network was rated visually and evaluated statistically. Results: A well-balanced neural network achieved a sensitivity of 90.5% and a specificity of 72.2% in predicting malignancy of 88 enhancing lesions. Detailed analysis of false-positive results revealed that every second fibroadenoma showed a 'typical malignant' signal/time curve, with no chance to differentiate between fibroadenomas and malignant tissue on the basis of contrast enhancement alone; but this special group of lesions was represented by a well-defined area of the Self-Organizing Map. Discussion: Self-Organizing Maps are capable of classifying a dynamic signal/time curve as 'typical benign' or 'typical malignant' and can therefore be used as a second opinion. In view of the now known localization on the Self-Organizing Map of fibroadenomas that enhance like malignant tumors, these lesions could be passed to further analysis by additional post-processing elements (e.g., based on T2-weighted series or morphology analysis) in the future. (orig.)

  14. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The RNN is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN, as well as its convergence toward the Drazin inverse, are considered. In addition, illustrative examples and applications to practical engineering problems are discussed to show the efficacy of the proposed neural network.
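    The paper's Drazin-inverse network has a specific architecture of n subnetworks; as a simpler illustration of the same recurrent-dynamics idea, the following hypothetical sketch Euler-integrates a gradient neural network that converges to the ordinary inverse of a nonsingular matrix (all names and parameters are our own):

```python
import numpy as np

def gnn_inverse(A, gamma=50.0, dt=1e-3, steps=2000):
    """Euler integration of the gradient neural network dynamics
    dX/dt = -gamma * A.T @ (A @ X - I); X converges to inv(A) for
    nonsingular A (step size must satisfy dt * gamma * ||A.T A|| < 2)."""
    n = A.shape[0]
    X = np.zeros((n, n))
    I = np.eye(n)
    for _ in range(steps):
        X -= dt * gamma * (A.T @ (A @ X - I))
    return X
```

    Each state variable holds one entry of X, matching the abstract's remark that the number of neurons equals the number of elements in the output matrix.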

  15. Sustained Activity in Hierarchical Modular Neural Networks: Self-Organized Criticality and Oscillations

    Science.gov (United States)

    Wang, Sheng-Jun; Hilgetag, Claus C.; Zhou, Changsong

    2010-01-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. In particular, they are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are in a state of self-organized criticality (SOC). We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. Previously, it was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We found that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and SOC, which are not present in the respective random networks. The mechanism underlying the sustained activity is that each dense module cannot sustain activity on its own, but displays SOC in the presence of weak perturbations. Therefore, the hierarchical modular networks provide the coupling among subsystems with SOC. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient information processing.

  16. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis.
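    The static RBF network that the recurrent model builds on maps inputs through Gaussian kernels and a trained linear readout. A minimal hypothetical sketch follows (exact interpolation with a small ridge term for numerical stability; the paper instead determines the RBF parameters through a training process, and its names differ):

```python
import numpy as np

def rbf_fit(X, y, width=0.75, ridge=1e-9):
    """Solve for readout weights w in Phi @ w = y, with Gaussian
    kernels centered at the training inputs X."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    return np.linalg.solve(Phi + ridge * np.eye(len(X)), y)

def rbf_predict(Xq, X, w, width=0.75):
    """Evaluate the fitted RBF network at query points Xq."""
    d2 = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * width ** 2)) @ w
```

    Feeding delayed outputs back in as inputs is what turns such a static map into the recurrent ROM the abstract describes.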

  17. Deep Gate Recurrent Neural Network

    Science.gov (United States)

    2016-11-22

    Introduces the Deep Simple Gated Unit (DSGU) and Simple Gated Unit (SGU), which are structures for learning long-term dependencies. Compared to traditional Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), both structures require fewer parameters and less computation time in sequence classification tasks. Unlike GRU and LSTM...

  18. Parameter estimation in space systems using recurrent neural networks

    Science.gov (United States)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed, using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  19. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
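    The train-then-recast procedure can be illustrated end to end. In this hypothetical miniature, plain least squares stands in for backpropagation (the target vector field is linear, so the fit is exact), and the learned field is then stepped forward in time as a recurrent map; all names here are our own:

```python
import numpy as np

# True dynamical system: harmonic oscillator dx/dt = A_true @ x
A_true = np.array([[0.0, 1.0], [-1.0, 0.0]])

# 1) "Train" on vector-field samples (x, dx/dt) by least squares
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = X @ A_true.T                      # sampled derivatives (noiseless)
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# 2) Recast the learned field as a recurrent map: x <- x + h * f_hat(x)
def simulate(x0, h=0.01, steps=628):
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + h * (A_hat @ x)       # Euler step through the learned field
        traj.append(x.copy())
    return np.array(traj)

traj = simulate([1.0, 0.0])           # roughly one period of the oscillator
```

    In the paper both the original system and the recurrent network run in continuous time; the Euler step here is only the simplest discretization of that idea.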

  20. Recursive Bayesian recurrent neural networks for time-series modeling.

    Science.gov (United States)

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.

  1. Analysis of surface ozone using a recurrent neural network.

    Science.gov (United States)

    Biancofiore, Fabio; Verdecchia, Marco; Di Carlo, Piero; Tomassetti, Barbara; Aruffo, Eleonora; Busilacchio, Marcella; Bianco, Sebastiano; Di Tommaso, Sinibaldo; Colangeli, Carlo

    2015-05-01

    Hourly concentrations of ozone (O₃) and nitrogen dioxide (NO₂) were measured for 16 years, from 1998 to 2013, in a seaside town in central Italy. The seasonal trends of O₃ and NO₂ recorded in this period have been studied. Furthermore, we used the data collected during one year (2005) to define the characteristics of a multiple linear regression model and a neural network model. Both models are used to model the hourly O₃ concentration under two scenarios: 1) in the first, only meteorological parameters are used as inputs, and 2) in the second, photochemical parameters are added to those of the first scenario. In order to evaluate model performance, four statistical criteria are used: correlation coefficient, fractional bias, normalized mean squared error and the factor of two. All the criteria show that the neural network gives better results than the regression model in all scenarios. Predictions of O₃ have been carried out by many authors using a feedforward neural architecture. In this paper we show that a recurrent architecture significantly improves the performance of neural predictors. Using only the meteorological parameters as input, the recurrent architecture performs better than the multiple linear regression model that uses both meteorological and photochemical data as input, making the recurrent neural network model a more useful tool in areas where only weather measurements are available. Finally, we used the neural network model to forecast the O₃ hourly concentrations 1, 3, 6, 12, 24 and 48 h ahead. The performance of the model in predicting O₃ levels is discussed. Emphasis is given to the possibility of using the neural network model operationally in areas where only meteorological data are available, in order to predict O₃ also at sites where it has not yet been measured. Copyright © 2015 Elsevier B.V. All rights reserved.
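    The four evaluation criteria named above are standard air-quality statistics and are straightforward to compute. This helper (the function name and dictionary keys are our own) assumes strictly positive concentrations:

```python
import numpy as np

def evaluate(obs, pred):
    """Correlation coefficient (r), fractional bias (FB), normalized
    mean squared error (NMSE), and fraction of predictions within a
    factor of two of the observations (FAC2)."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    r = np.corrcoef(obs, pred)[0, 1]
    fb = 2.0 * (pred.mean() - obs.mean()) / (pred.mean() + obs.mean())
    nmse = np.mean((pred - obs) ** 2) / (pred.mean() * obs.mean())
    ratio = pred / obs                      # assumes obs > 0
    fac2 = float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))
    return {"r": r, "FB": fb, "NMSE": nmse, "FAC2": fac2}
```

    A perfect predictor gives r = 1, FB = 0, NMSE = 0 and FAC2 = 1, so each criterion has an obvious target value against which the two models can be ranked.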

  2. Self-Organizing Maps Neural Networks Applied to the Classification of Ethanol Samples According to the Region of Commercialization

    Directory of Open Access Journals (Sweden)

    Aline Regina Walkoff

    2017-10-01

    Full Text Available Physical-chemical analysis data were collected from 998 samples of automotive ethanol commercialized in the northern, midwestern and eastern regions of the state of Paraná. The data were presented to self-organizing map (SOM) neural networks, which classified them according to those regions. The best SOM configuration had a 45 x 45 topology and 5000 training epochs, with a final learning rate of 6.7x10^-4, a final neighborhood relationship of 3x10^-2 and a mean quantization error of 2x10^-2. This neural network provided a topological map depicting three separate groups, each corresponding to samples from the same region of commercialization. Four maps of weights, one for each parameter, were presented. The network established that pH was the most important variable for classification and electrical conductivity the least important. The application of self-organizing maps allowed the segmentation of ethanol samples, thereby identifying them according to the region of commercialization. DOI: http://dx.doi.org/10.17807/orbital.v9i4.982

  3. Multistability and instability analysis of recurrent neural networks with time-varying delays.

    Science.gov (United States)

    Zhang, Fanghai; Zeng, Zhigang

    2018-01-01

    This paper provides new theoretical results on the multistability and instability analysis of recurrent neural networks with time-varying delays. It is shown that such n-neuronal recurrent neural networks have exactly [Formula: see text] equilibria, [Formula: see text] of which are locally exponentially stable and the others unstable, where k_0 is a nonnegative integer such that k_0 ≤ n. By using the combination method of two different divisions, recurrent neural networks can possess more dynamic properties. This method improves and extends the existing results in the literature. Finally, one numerical example is provided to show the superiority and effectiveness of the presented results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary.    The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  5. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations.

    Science.gov (United States)

    Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke

    2018-02-01

    In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to converge in finite time. Besides, by solving the differential equations, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., the upper bound is lower), and thus the accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations have been considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
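    The flavor of such finite-time designs can be conveyed by a hypothetical gradient-type recurrent dynamics with a sign-bi-power-style activation. This is not the authors' exact model, and for simplicity it is shown on a time-invariant equation AX = B; all names and parameters are illustrative:

```python
import numpy as np

def phi(E, r=0.5):
    """Sign-bi-power-style activation: the |E|^r term keeps the drive
    large near zero error, which is what accelerates convergence."""
    return np.sign(E) * np.abs(E) ** r + E

def nonlinear_rnn_solve(A, B, gamma=10.0, dt=1e-4, steps=20000):
    """Euler-discretized dynamics dX/dt = -gamma * A.T @ phi(A @ X - B);
    with phi(E) = E this reduces to the plain gradient network."""
    X = np.zeros_like(B)
    for _ in range(steps):
        X -= dt * gamma * (A.T @ phi(A @ X - B))
    return X
```

    Replacing `phi` with the identity recovers the asymptotically (rather than finite-time) convergent baseline, which is the comparison the abstract draws.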

  6. Convolutional over Recurrent Encoder for Neural Machine Translation

    Directory of Open Access Journals (Sweden)

    Dakwale Praveen

    2017-06-01

    Full Text Available Neural machine translation is a recently proposed approach which has shown results competitive with traditional MT approaches. Standard neural MT is an end-to-end neural network where the source sentence is encoded by a recurrent neural network (RNN) called the encoder and the target words are predicted using another RNN known as the decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English-to-German translation demonstrate that our approach achieves significant improvements over a standard RNN-based baseline.

  7. Transformation-invariant visual representations in self-organizing spiking neural networks.

    Science.gov (United States)

    Evans, Benjamin D; Stringer, Simon M

    2012-01-01

    The ventral visual pathway achieves object and face recognition by building transformation-invariant representations from elementary visual features. In previous computer simulation studies with rate-coded neural networks, the development of transformation-invariant representations has been demonstrated using either of two biologically plausible learning mechanisms, Trace learning and Continuous Transformation (CT) learning. However, it has not previously been investigated how transformation-invariant representations may be learned in a more biologically accurate spiking neural network. A key issue is how the synaptic connection strengths in such a spiking network might self-organize through Spike-Time Dependent Plasticity (STDP) where the change in synaptic strength is dependent on the relative times of the spikes emitted by the presynaptic and postsynaptic neurons rather than simply correlated activity driving changes in synaptic efficacy. Here we present simulations with conductance-based integrate-and-fire (IF) neurons using a STDP learning rule to address these gaps in our understanding. It is demonstrated that with the appropriate selection of model parameters and training regime, the spiking network model can utilize either Trace-like or CT-like learning mechanisms to achieve transform-invariant representations.
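    The STDP rule described here is, in its simplest pair-based form, an exponential window over the pre/post spike-time difference. The following generic textbook sketch illustrates it; the amplitudes and time constant are illustrative, not those of the paper's conductance-based model:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses,
    and both effects decay exponentially with |dt|."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))
```

    Because the update depends on relative spike times rather than mere rate correlation, this is the distinction the abstract draws between STDP and simpler correlation-driven plasticity.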

  8. Transform-invariant visual representations in self-organizing spiking neural networks

    Directory of Open Access Journals (Sweden)

    Benjamin eEvans

    2012-07-01

    Full Text Available The ventral visual pathway achieves object and face recognition by building transform-invariant representations from elementary visual features. In previous computer simulation studies with rate-coded neural networks, the development of transform invariant representations has been demonstrated using either of two biologically plausible learning mechanisms, Trace learning and Continuous Transformation (CT learning. However, it has not previously been investigated how transform invariant representations may be learned in a more biologically accurate spiking neural network. A key issue is how the synaptic connection strengths in such a spiking network might self-organize through Spike-Time Dependent Plasticity (STDP where the change in synaptic strength is dependent on the relative times of the spikes emitted by the pre- and postsynaptic neurons rather than simply correlated activity driving changes in synaptic efficacy. Here we present simulations with conductance-based integrate-and-fire (IF neurons using a STDP learning rule to address these gaps in our understanding. It is demonstrated that with the appropriate selection of model parameters and training regime, the spiking network model can utilize either Trace-like or CT-like learning mechanisms to achieve transform-invariant representations.

  9. Iterative free-energy optimization for recurrent neural networks (INFERNO)

    Science.gov (United States)

    2017-01-01

    The intra-parietal lobe coupled with the Basal Ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spike synchrony as an optimization problem on the neurons' sub-threshold activity for the generation of long neuronal chains. Using a stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network to a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieving demonstrate the capabilities of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory to initiate flexible goal-directed neuronal chains of causation and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle. PMID:28282439

  10. Representation of linguistic form and function in recurrent neural networks

    NARCIS (Netherlands)

    Kadar, Akos; Chrupala, Grzegorz; Alishahi, Afra

    2017-01-01

    We present novel methods for analyzing the activation patterns of recurrent neural networks from a linguistic point of view and explore the types of linguistic structure they learn. As a case study, we use a standard standalone language model, and a multi-task gated recurrent network architecture

  11. From Imitation to Prediction, Data Compression vs Recurrent Neural Networks for Natural Language Processing

    Directory of Open Access Journals (Sweden)

    Juan Andres Laura

    2018-03-01

    Full Text Available In recent studies, recurrent neural networks were used for generative processes, and their surprising performance can be explained by their ability to make good predictions. Data compression is likewise based on prediction. The problem thus comes down to whether a data compressor could perform as well as recurrent neural networks on the natural language processing tasks of sentiment analysis and automatic text generation, and, if so, whether a compression algorithm is even more capable than a neural network at such tasks. In our journey, a fundamental difference between data compression algorithms and recurrent neural networks has been discovered.

  12. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks

    Science.gov (United States)

    Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.

    2015-01-01

    The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies

  13. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.

    Science.gov (United States)

    Goudar, Vishwa; Buonomano, Dean V

    2018-03-14

    Much of the information the brain processes and stores is temporal in nature-a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds-we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al.

  14. Ads' click-through rates predicting based on gated recurrent unit neural networks

    Science.gov (United States)

    Chen, Qiaohong; Guo, Zixuan; Dong, Wen; Jin, Lingzi

    2018-05-01

    In order to improve the effectiveness of online advertising and increase advertising revenue, a gated recurrent unit (GRU) neural network model is used to predict ads' click-through rates (CTR). Combining the characteristics of the gated unit structure with the time-sequence nature of the data, the model is trained with the BPTT algorithm. Furthermore, by optimizing the step-length algorithm of the gated recurrent unit network, the model reaches the optimal point better and faster in fewer iterative rounds. The experimental results show that the model based on gated recurrent unit neural networks, with its optimized step-length algorithm, performs better at predicting ads' CTR, helping advertisers, media and audience achieve a win-win and mutually beneficial situation in the three-side game.
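    A single forward step of the gated recurrent unit at the heart of such a model follows the standard GRU equations (update gate z, reset gate r, candidate state); the parameter layout below is our own illustrative choice, not the paper's:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: h_new = (1 - z) * h + z * h_tilde."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)          # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)          # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
    return (1.0 - z) * h + z * h_tilde

def init_params(nx, nh, seed=0):
    """Random small-weight initialization for a GRU with input size nx
    and hidden size nh."""
    rng = np.random.default_rng(seed)
    def m(a, b):
        return rng.normal(0.0, 0.1, (a, b))
    return (m(nh, nx), m(nh, nh), np.zeros(nh),
            m(nh, nx), m(nh, nh), np.zeros(nh),
            m(nh, nx), m(nh, nh), np.zeros(nh))
```

    Training with BPTT simply unrolls this step over the clicked/not-clicked sequences and backpropagates through the unrolled graph.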

  15. A Self-Organizing Incremental Neural Network based on local distribution learning.

    Science.gov (United States)

    Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi

    2016-12-01

    In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically and the number of nodes will not grow unlimitedly. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Protein secondary structure prediction using modular reciprocal bidirectional recurrent neural networks.

    Science.gov (United States)

    Babaei, Sepideh; Geranmayeh, Amir; Seyyedsalehi, Seyyed Ali

    2010-12-01

    The supervised learning of recurrent neural networks well-suited for prediction of protein secondary structures from the underlying amino acids sequence is studied. Modular reciprocal recurrent neural networks (MRR-NN) are proposed to model the strong correlations between adjacent secondary structure elements. Besides, a multilayer bidirectional recurrent neural network (MBR-NN) is introduced to capture the long-range intramolecular interactions between amino acids in formation of the secondary structure. The final modular prediction system is devised based on the interactive integration of the MRR-NN and the MBR-NN structures to arbitrarily engage the neighboring effects of the secondary structure types concurrent with memorizing the sequential dependencies of amino acids along the protein chain. The advanced combined network augments the percentage accuracy (Q₃) to 79.36% and boosts the segment overlap (SOV) up to 70.09% when tested on the PSIPRED dataset in three-fold cross-validation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  17. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for multi-step prediction of chaotic time series. It estimates the proper parameters of phase-space reconstruction and optimizes the structure of the recurrent neural network by a co-evolutionary strategy: the search space is separated into two subspaces and the individuals are trained in a parallel computational procedure. This dynamically combines the embedding method with the capability of the recurrent neural network to incorporate past experience through internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series.
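
    The phase-space reconstruction that CERNN's co-evolution parameterizes can be made concrete with a plain delay-embedding sketch (`dim` and `tau` stand for the embedding dimension and delay that the paper searches over):

```python
def delay_embed(series, dim, tau):
    """Phase-space reconstruction by the method of delays: each state
    vector is (x[t], x[t - tau], ..., x[t - (dim - 1) * tau])."""
    start = (dim - 1) * tau
    return [
        [series[t - k * tau] for k in range(dim)]
        for t in range(start, len(series))
    ]
```

    For a length-10 series with `dim=3`, `tau=2`, this yields six state vectors, the first being `(x[4], x[2], x[0])`.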

  18. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.
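
    The kind of stability assessment the theory formalizes can be illustrated numerically on a toy two-unit network (the weights below are arbitrary illustrative values, not from the paper): iterate the map to a fixed point, linearize there, and check that the Jacobian's spectral radius is below one.

```python
import math

# toy 2-unit recurrent map x <- tanh(W x + b); weights chosen arbitrarily
W = [[0.3, 0.2], [0.1, 0.4]]
b = [0.5, -0.3]

def step(x):
    return [math.tanh(sum(W[i][j] * x[j] for j in range(2)) + b[i])
            for i in range(2)]

# iterate to a fixed point attractor
x = [0.0, 0.0]
for _ in range(200):
    x = step(x)

# Jacobian of the map at the fixed point: diag(1 - tanh(pre)^2) @ W
pre = [sum(W[i][j] * x[j] for j in range(2)) + b[i] for i in range(2)]
J = [[(1 - math.tanh(pre[i]) ** 2) * W[i][j] for j in range(2)]
     for i in range(2)]

# spectral radius of a 2x2 matrix, computed exactly
tr = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
disc = tr * tr - 4 * det
if disc >= 0:
    rho = max(abs(tr + math.sqrt(disc)), abs(tr - math.sqrt(disc))) / 2
else:
    rho = math.sqrt(det)  # complex pair: modulus equals sqrt(det)

stable = rho < 1  # local stability of the fixed point
```

    With these small weights the map is a contraction, so the iteration converges and the spectral radius lands well inside the unit circle.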

  20. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    Directory of Open Access Journals (Sweden)

    Mehrshad Salmasi

    2012-07-01

    Full Text Available Active noise control is based on destructive interference between the primary noise and the noise generated by the secondary source: an antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, the performance of neural networks in active cancellation of sound noise is evaluated. To this end, feedforward and recurrent neural networks are designed and trained, and after training their noise-attenuation performance is compared. We use an Elman network as the recurrent neural network. For the simulations, noise signals from the SPIB database are used. To compare the networks fairly, an equal number of layers and neurons is used in each, and the training and test samples are the same. Simulation results show that both feedforward and recurrent neural networks perform well in noise cancellation, with the recurrent network attenuating noise better than the feedforward network.
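
    In its simplest linear form, the destructive-interference idea reduces to an adaptive element that learns the gain between a reference signal and the primary noise. The sketch below uses a single LMS-adapted weight, far simpler than the paper's feedforward and Elman networks; the gain of 0.8 and the step size are illustrative:

```python
import math

# reference noise and primary noise (here the primary is the reference
# scaled by an unknown gain of 0.8 that the canceller must discover)
N = 2000
ref = [math.sin(0.1 * n) for n in range(N)]
primary = [0.8 * r for r in ref]

w, mu = 0.0, 0.05   # single adaptive weight and LMS step size
errors = []
for n in range(N):
    y = w * ref[n]          # generated antinoise estimate
    e = primary[n] - y      # residual at the error microphone
    w += mu * e * ref[n]    # LMS update drives the residual to zero
    errors.append(e)
```

    The weight converges to the unknown gain and the late residuals shrink toward zero, which is the attenuation the abstract measures.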

  1. Statistical downscaling of precipitation using long short-term memory recurrent neural networks

    Science.gov (United States)

    Misra, Saptarshi; Sarkar, Sudeshna; Mitra, Pabitra

    2017-11-01

    Hydrological impacts of global climate change on a regional scale are generally assessed by downscaling large-scale climatic variables, simulated by General Circulation Models (GCMs), to regional, small-scale hydrometeorological variables like precipitation and temperature. In this study, we propose a new statistical downscaling model based on a Recurrent Neural Network with Long Short-Term Memory which captures the spatio-temporal dependencies in local rainfall. Previous studies have used several other methods such as linear regression, quantile regression, kernel regression, beta regression, and artificial neural networks. Deep neural networks and recurrent neural networks have been shown to be highly promising in modeling complex and highly non-linear relationships between input and output variables in different domains, and hence we investigated their performance in the task of statistical downscaling. We have tested this model on two datasets: one on precipitation in the Mahanadi basin in India and the second on precipitation in the Campbell River basin in Canada. Our autoencoder-coupled long short-term memory recurrent neural network model performs best compared to other existing methods on both datasets with respect to temporal cross-correlation, mean squared error, and capturing the extremes.
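
    Two of the evaluation criteria named above, mean squared error and (lag-zero) temporal cross-correlation, are easy to state precisely; a minimal sketch:

```python
import math

def mse(obs, sim):
    """Mean squared error between an observed and a simulated series."""
    return sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)

def pearson(obs, sim):
    """Pearson correlation, i.e. normalized cross-correlation at lag 0."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = math.sqrt(sum((o - mo) ** 2 for o in obs))
    vs = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (vo * vs)
```

    A downscaled series shifted by a constant bias keeps correlation 1 while the MSE records the bias, which is why both metrics are reported together.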

  2. Self-organizing sensing and actuation for automatic control

    Science.gov (United States)

    Cheng, George Shu-Xing

    2017-07-04

    A Self-Organizing Process Control Architecture is introduced with a Sensing Layer, Control Layer, Actuation Layer, Process Layer, as well as Self-Organizing Sensors (SOS) and Self-Organizing Actuators (SOA). A Self-Organizing Sensor for a process variable with one or multiple input variables is disclosed. An artificial neural network (ANN) based dynamic modeling mechanism as part of the Self-Organizing Sensor is described. As a case example, a Self-Organizing Soft-Sensor for CFB Boiler Bed Height is presented. Also provided is a method to develop a Self-Organizing Sensor.

  3. Recurrent Neural Network Approach Based on the Integral Representation of the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Živković, Ivan S; Wei, Yimin

    2015-10-01

    In this letter, we present the dynamical equation and corresponding artificial recurrent neural network for computing the Drazin inverse of an arbitrary square real matrix, without any restriction on its eigenvalues. Conditions that ensure the stability of the defined recurrent neural network as well as its convergence toward the Drazin inverse are considered. Several illustrative examples present the results of computer simulations.
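
    The letter's continuous-time dynamics are not reproduced here, but the flavor of computing an inverse by recurrent iteration can be conveyed with the discrete Newton-Schulz scheme below. One hedge is essential: for a nonsingular matrix the Drazin inverse coincides with the ordinary inverse, so this sketch covers only that special case.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def newton_schulz(A, steps=50):
    """X_{k+1} = X_k (2I - A X_k): discrete-time recurrent dynamics
    converging to A^{-1} from X_0 = A^T / ||A||_F^2 (this start always
    satisfies the convergence condition for nonsingular A)."""
    n = len(A)
    nrm = sum(A[i][j] ** 2 for i in range(n) for j in range(n))
    X = [[A[j][i] / nrm for j in range(n)] for i in range(n)]
    for _ in range(steps):
        AX = matmul(A, X)
        M = [[(2.0 if i == j else 0.0) - AX[i][j] for j in range(n)]
             for i in range(n)]
        X = matmul(X, M)
    return X
```

    The iteration converges quadratically, so a few dozen steps reach machine precision on small well-conditioned matrices.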

  4. Multistability of delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions

    International Nuclear Information System (INIS)

    Huang Yu-Jiao; Hu Hai-Gen

    2015-01-01

    In this paper, the multistability issue is discussed for delayed complex-valued recurrent neural networks with discontinuous real-imaginary-type activation functions. Based on a fixed-point theorem and a stability definition, sufficient criteria are established for the existence and stability of multiple equilibria of complex-valued recurrent neural networks. The number of stable equilibria is larger than that of real-valued recurrent neural networks, which can be used to achieve high-capacity associative memories. One numerical example is provided to show the effectiveness and superiority of the presented results. (paper)

  5. A novel nonlinear adaptive filter using a pipelined second-order Volterra recurrent neural network.

    Science.gov (United States)

    Zhao, Haiquan; Zhang, Jiashu

    2009-12-01

    To enhance the performance and overcome the heavy computational complexity of recurrent neural networks (RNN), a novel nonlinear adaptive filter based on a pipelined second-order Volterra recurrent neural network (PSOVRNN) is proposed in this paper. A modified real-time recurrent learning (RTRL) algorithm for the proposed filter is derived in detail. The PSOVRNN comprises a number of simple, small-scale second-order Volterra recurrent neural network (SOVRNN) modules. In contrast to the standard RNN, the modules of a PSOVRNN can run simultaneously in pipelined parallel fashion, which leads to a significant improvement in total computational efficiency. Moreover, since each module of the PSOVRNN is an SOVRNN in which nonlinearity is introduced by the recursive second-order Volterra (RSOV) expansion, its performance can be further improved. Computer simulations have demonstrated that the PSOVRNN outperforms the pipelined recurrent neural network (PRNN) and the RNN for nonlinear colored signal prediction and nonlinear channel equalization. However, the superiority of the PSOVRNN over the PRNN comes at the cost of increased computational complexity due to the nonlinear expansion of each module.
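
    The second-order Volterra expansion that supplies each module's nonlinearity maps an input vector to its linear terms plus all distinct quadratic cross-products; stripped of the recursion and pipelining, the expansion itself is just:

```python
def volterra2(x):
    """Second-order Volterra expansion of an input vector: linear terms
    followed by all distinct quadratic cross-products x[i] * x[j]."""
    feats = list(x)
    n = len(x)
    for i in range(n):
        for j in range(i, n):
            feats.append(x[i] * x[j])
    return feats
```

    An n-dimensional input expands to n + n(n+1)/2 features, which is the source of both the extra modeling power and the extra cost the abstract mentions.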

  6. A novel joint-processing adaptive nonlinear equalizer using a modular recurrent neural network for chaotic communication systems.

    Science.gov (United States)

    Zhao, Haiquan; Zeng, Xiangping; Zhang, Jiashu; Liu, Yangguang; Wang, Xiaomin; Li, Tianrui

    2011-01-01

    To eliminate nonlinear channel distortion in chaotic communication systems, a novel joint-processing adaptive nonlinear equalizer based on a pipelined recurrent neural network (JPRNN) is proposed, using a modified real-time recurrent learning (RTRL) algorithm. Furthermore, an adaptive amplitude RTRL algorithm is adopted to overcome the deteriorating effect introduced by the nesting process. Computer simulations illustrate that the proposed equalizer outperforms the pipelined recurrent neural network (PRNN) and recurrent neural network (RNN) equalizers. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Global exponential stability and periodicity of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions

    International Nuclear Information System (INIS)

    Lu Junguo

    2008-01-01

    In this paper, the global exponential stability and periodicity of a class of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions are addressed by constructing suitable Lyapunov functionals and utilizing some inequality techniques. We first prove global exponential convergence to 0 of the difference between any two solutions of such networks; the existence and uniqueness of the equilibrium is a direct result of this procedure. This approach differs from the usual one, in which the existence and uniqueness of the equilibrium and its stability are proved in two separate steps. Furthermore, we prove periodicity of these networks. Sufficient conditions ensuring the global exponential stability and the existence of periodic oscillatory solutions are given. These conditions are easy to check and have important leading significance in the design and application of reaction-diffusion recurrent neural networks with delays. Finally, two numerical examples are given to show the effectiveness of the obtained results.

  8. Global robust stability of delayed recurrent neural networks

    International Nuclear Information System (INIS)

    Cao Jinde; Huang Deshuang; Qu Yuzhong

    2005-01-01

    This paper is concerned with the global robust stability of a class of delayed interval recurrent neural networks that contain time-invariant uncertain parameters whose values are unknown but bounded in given compact sets. A new sufficient condition is presented for the existence, uniqueness, and global robust stability of equilibria for interval neural networks with time delays by constructing a Lyapunov functional and using a matrix-norm inequality. An error in an earlier publication is corrected, and an example is given to show the effectiveness of the obtained results.

  9. Covalent growth factor tethering to direct neural stem cell differentiation and self-organization.

    Science.gov (United States)

    Ham, Trevor R; Farrag, Mahmoud; Leipzig, Nic D

    2017-04-15

    Tethered growth factors offer exciting new possibilities for guiding stem cell behavior. However, many of the current methods present substantial drawbacks which can limit their application and confound results. In this work, we developed a new method for the site-specific covalent immobilization of azide-tagged growth factors and investigated its utility in a model system for guiding neural stem cell (NSC) behavior. An engineered interferon-γ (IFN-γ) fusion protein was tagged with an N-terminal azide group, and immobilized to two different dibenzocyclooctyne-functionalized biomimetic polysaccharides (chitosan and hyaluronan). We successfully immobilized azide-tagged IFN-γ under a wide variety of reaction conditions, both in solution and to bulk hydrogels. To understand the interplay between surface chemistry and protein immobilization, we cultured primary rat NSCs on both materials and showed pronounced biological effects. Expectedly, immobilized IFN-γ increased neuronal differentiation on both materials. Expression of other lineage markers varied depending on the material, suggesting that the interplay of surface chemistry and protein immobilization plays a large role in nuanced cell behavior. We also investigated the bioactivity of immobilized IFN-γ in a 3D environment in vivo and found that it sparked the robust formation of neural tube-like structures from encapsulated NSCs. These findings support a wide range of potential uses for this approach and provide further evidence that adult NSCs are capable of self-organization when exposed to the proper microenvironment. For stem cells to be used effectively in regenerative medicine applications, they must be provided with the appropriate cues and microenvironment so that they integrate with existing tissue. This study explores a new method for guiding stem cell behavior: covalent growth factor tethering. We found that adding an N-terminal azide-tag to interferon-γ enabled stable and robust Cu-free 'click

  10. A novel prosodic-information synthesizer based on recurrent fuzzy neural network for the Chinese TTS system.

    Science.gov (United States)

    Lin, Chin-Teng; Wu, Rui-Cheng; Chang, Jyh-Yeong; Liang, Sheng-Fu

    2004-02-01

    In this paper, a new technique for the Chinese text-to-speech (TTS) system is proposed. Our major effort focuses on prosodic information generation. New methodologies for constructing fuzzy rules in a prosodic model simulating human pronunciation rules are developed. The proposed Recurrent Fuzzy Neural Network (RFNN) is a multilayer recurrent neural network (RNN) which integrates a Self-cOnstructing Neural Fuzzy Inference Network (SONFIN) into a recurrent connectionist structure. The RFNN can be functionally divided into two parts. The first part adopts the SONFIN as a prosodic model to explore the relationship between high-level linguistic features and prosodic information based on fuzzy inference rules. Compared to conventional neural networks, the SONFIN can always construct itself with an economical network size at high learning speed. The second part employs a five-layer network to generate all prosodic parameters by directly using the prosodic fuzzy rules inferred from the first part as well as other important features of syllables. A TTS system combined with the proposed method can realize not only sandhi rules but also the other prosodic phenomena present in traditional TTS systems. Moreover, the proposed scheme can even find new rules about prosodic phrase structure. The performance of the proposed RFNN-based prosodic model is verified by embedding it into a Chinese TTS system with a Chinese monosyllable database based on the time-domain pitch synchronous overlap add (TD-PSOLA) method. Our experimental results show that the proposed RFNN can generate proper prosodic parameters including pitch means, pitch shapes, maximum energy levels, syllable durations, and pause durations. Some synthetic sounds are available online for demonstration.

  11. Implicit self-esteem in recurrently depressed patients

    NARCIS (Netherlands)

    Risch, A.K.; Bubal, A.; Birk, U.; Morina, N.; Steffens, M.C.; Stangier, U.

    2010-01-01

    Negative self-esteem is suggested to play an important role in the recurrence of depressive episodes. This study investigated whether repeated experiences of a negative view of the self within a recurrent course of depression might cause implicit self-esteem to be impaired and negative

  12. Global exponential stability of reaction-diffusion recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde

    2003-01-01

    Employing the general Halanay inequality, we analyze the global exponential stability of a class of reaction-diffusion recurrent neural networks with time-varying delays. Several new sufficient conditions are obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point of delayed reaction-diffusion recurrent neural networks. The results extend and improve earlier publications. In addition, an example is given to show the effectiveness of the obtained results.

  13. Land Cover Classification via Multitemporal Spatial Data by Deep Recurrent Neural Networks

    Science.gov (United States)

    Ienco, Dino; Gaetano, Raffaele; Dupaquier, Claire; Maurel, Pierre

    2017-10-01

    Nowadays, modern earth observation programs produce huge volumes of satellite image time series (SITS) that can be useful for monitoring geographical areas through time. How to efficiently analyze such information is still an open question in the remote sensing field. Recently, deep learning methods have proved suitable for remote sensing data, mainly for scene classification (i.e., Convolutional Neural Networks - CNNs - on single images), while only very few studies involve temporal deep learning approaches (i.e., Recurrent Neural Networks - RNNs) for remote sensing time series. In this letter we evaluate the ability of Recurrent Neural Networks, in particular the Long Short-Term Memory (LSTM) model, to perform land cover classification considering multi-temporal spatial data derived from a time series of satellite images. We carried out experiments on two different datasets considering both pixel-based and object-based classification. The obtained results show that Recurrent Neural Networks are competitive compared to state-of-the-art classifiers, and may outperform classical approaches in the presence of low-represented and/or highly mixed classes. We also show that the alternative feature representation generated by the LSTM can improve the performance of standard classifiers.

  14. Optimization of recurrent neural networks for time series modeling

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, with one layer of nonlinear hidden units and a linear output unit, applied to prediction of discrete time... series. The overall objectives are to improve training by application of second-order methods and to improve generalization ability by architecture optimization accomplished by pruning. The major topics covered in the thesis are: 1. The problem of training recurrent networks is analyzed from a numerical... of solution obtained as well as computation time required. 3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability. 4. The viability of pruning recurrent networks by the Optimal...

  15. Acoustic Event Detection in Multichannel Audio Using Gated Recurrent Neural Networks with High‐Resolution Spectral Features

    Directory of Open Access Journals (Sweden)

    Hyoung‐Gook Kim

    2017-12-01

    Full Text Available Recently, deep recurrent neural networks have achieved great success in various machine learning tasks, and have also been applied for sound event detection. The detection of temporally overlapping sound events in realistic environments is much more challenging than in monophonic detection problems. In this paper, we present an approach to improve the accuracy of polyphonic sound event detection in multichannel audio based on gated recurrent neural networks in combination with auditory spectral features. In the proposed method, human hearing perception‐based spatial and spectral‐domain noise‐reduced harmonic features are extracted from multichannel audio and used as high‐resolution spectral inputs to train gated recurrent neural networks. This provides a fast and stable convergence rate compared to long short‐term memory recurrent neural networks. Our evaluation reveals that the proposed method outperforms the conventional approaches.

  16. A recurrent neural network with ever changing synapses

    NARCIS (Netherlands)

    Heerema, M.; van Leeuwen, W.A.

    2000-01-01

    A recurrent neural network with noisy input is studied analytically, on the basis of a Discrete Time Master Equation. The latter is derived from a biologically realizable learning rule for the weights of the connections. In a numerical study it is found that the fixed points of the dynamics of the

  17. Ideomotor feedback control in a recurrent neural network.

    Science.gov (United States)

    Galtier, Mathieu

    2015-06-01

    The architecture of a neural network controlling an unknown environment is presented. It is based on a randomly connected recurrent neural network from which both perception and action are simultaneously read and fed back. There are two concurrent learning rules implementing a sort of ideomotor control: (i) perception is learned along the principle that the network should predict reliably its incoming stimuli; (ii) action is learned along the principle that the prediction of the network should match a target time series. The coherent behavior of the neural network in its environment is a consequence of the interaction between the two principles. Numerical simulations show a promising performance of the approach, which can be turned into a local and better "biologically plausible" algorithm.

  18. Delay-Dependent Stability Criteria of Uncertain Periodic Switched Recurrent Neural Networks with Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Xing Yin

    2011-01-01

    This paper studies the delay-dependent stability of uncertain periodic switched recurrent neural networks with time-varying delays. When an uncertain discrete-time recurrent neural network is a periodic system, it is expressed as a switched neural network over a finite set of switching states. Based on the switched quadratic Lyapunov functional approach (SQLF) and the free-weighting matrix approach (FWM), some linear matrix inequality criteria are found to guarantee the delay-dependent asymptotic stability of these systems. Two examples illustrate the effectiveness of the proposed criteria.

  19. Implementation of self-organizing neural networks for visuo-motor control of an industrial robot.

    Science.gov (United States)

    Walter, J A; Schulten, K I

    1993-01-01

    The implementation of two neural network algorithms for visuo-motor control of an industrial robot (Puma 562) is reported. The first algorithm uses a vector quantization technique, the 'neural-gas' network, together with an error correction scheme based on a Widrow-Hoff-type learning rule. The second algorithm employs an extended self-organizing feature map algorithm. Based on visual information provided by two cameras, the robot learns to position its end effector without an external teacher. Within only 3000 training steps, the robot-camera system is capable of reducing the positioning error of the robot's end effector to approximately 0.1% of the linear dimension of the work space. By employing adaptive feedback the robot succeeds in compensating not only slow calibration drifts, but also sudden changes in its geometry. Hardware aspects of the robot-camera system are discussed.
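
    The 'neural-gas' vector quantization stage can be sketched as follows: every codebook vector moves toward each sample, weighted by a rank-based factor, with an annealed learning rate. The error-correction layer, cameras, and robot kinematics are out of scope here, and all constants are illustrative:

```python
import math

def neural_gas(data, units, epochs=60, eps0=0.3, eps_final=0.01, lam=0.5):
    """Rank-based 'neural-gas' vector quantization: every codebook
    vector moves toward each sample, weighted by exp(-rank / lam);
    the learning rate anneals geometrically from eps0 to eps_final."""
    units = [list(u) for u in units]
    for t in range(epochs):
        eps = eps0 * (eps_final / eps0) ** (t / max(1, epochs - 1))
        for x in data:
            order = sorted(range(len(units)),
                           key=lambda i: math.dist(x, units[i]))
            for rank, i in enumerate(order):
                h = math.exp(-rank / lam)
                for k in range(len(x)):
                    units[i][k] += eps * h * (x[k] - units[i][k])
    return units
```

    On two well-separated 1-D clusters the two codebook vectors settle near the cluster centers, which is the quantization behavior the robot controller builds on.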

  20. Neural constructivism or self-organization?

    NARCIS (Netherlands)

    van der Maas, H.L.J.; Molenaar, P.C.M.

    2000-01-01

    Comments on the article by S. R. Quartz et al (see record 1998-00749-001) which discussed the constructivist perspective of interaction between cognition and neural processes during development and consequences for theories of learning. Three arguments are given to show that neural constructivism

  1. Integrated built-in-test false and missed alarms reduction based on forward infinite impulse response & recurrent finite impulse response dynamic neural networks

    Science.gov (United States)

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2017-11-01

    Built-in tests (BITs) are widely used in mechanical systems to perform state identification, but BIT false and missed alarms hinder operators from making correct judgments. Artificial neural networks (ANN), with features such as self-organization and self-learning, have previously been used for false and missed alarm identification. However, these ANN models generally do not incorporate the temporal effect of the bottom-level threshold-comparison outputs, and historical temporal features are not fully considered. To improve the situation, this paper proposes a new integrated BIT design methodology incorporating a novel type of dynamic neural network (DNN) model. The new DNN model is termed Forward IIR & Recurrent FIR DNN (FIRF-DNN); its component neurons, network structure, and input/output relationships are discussed. The condition monitoring false and missed alarms reduction scheme based on the FIRF-DNN model is also illustrated, composed of three stages: model training, false and missed alarm detection, and false and missed alarm suppression. Finally, the proposed methodology is demonstrated in an application study and the experimental results are analyzed.

  2. A one-layer recurrent neural network for constrained nonsmooth optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-10-01

    This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.
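
    The one-layer network is essentially a projected (sub)gradient dynamics for the nonsmooth objective. A discrete Euler caricature on a one-dimensional nonsmooth example conveys the idea (the objective, box constraint, and step schedule below are illustrative, not the paper's differential inclusion):

```python
def subgrad(x):
    # a subgradient of the nonsmooth objective f(x) = |x - 1| + 0.5*|x + 2|
    g = 1.0 if x > 1 else (-1.0 if x < 1 else 0.0)
    g += 0.5 * (1.0 if x > -2 else (-1.0 if x < -2 else 0.0))
    return g

def project(x, lo=-5.0, hi=5.0):
    # projection onto the box constraint [lo, hi]
    return max(lo, min(hi, x))

x = 4.0  # initial state of the "network"
for k in range(2000):
    step = 0.5 / (k + 1) ** 0.5   # diminishing step size
    x = project(x - step * subgrad(x))
```

    The state cannot leave the feasible box and drifts to the kink at x = 1, the minimizer of this objective, mirroring the convergence claim of the paper in miniature.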

  3. A Recurrent Neural Network for Nonlinear Fractional Programming

    Directory of Open Access Journals (Sweden)

    Quan-Ju Zhang

    2012-01-01

    Full Text Available This paper presents a novel recurrent time-continuous neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results further demonstrate the global convergence and good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.

  4. Stochastic Oscillation in Self-Organized Critical States of Small Systems: Sensitive Resting State in Neural Systems.

    Science.gov (United States)

    Wang, Sheng-Jun; Ouyang, Guang; Guang, Jing; Zhang, Mingsha; Wong, K Y Michael; Zhou, Changsong

    2016-01-08

    Self-organized critical states (SOCs) and stochastic oscillations (SOs) are simultaneously observed in neural systems, which appears to be theoretically contradictory since SOCs are characterized by scale-free avalanche sizes but oscillations indicate typical scales. Here, we show that SOs can emerge in SOCs of small size systems due to temporal correlation between large avalanches at the finite-size cutoff, resulting from the accumulation-release process in SOCs. In contrast, the critical branching process without accumulation-release dynamics cannot exhibit oscillations. The reconciliation of SOCs and SOs is demonstrated both in the sandpile model and robustly in biologically plausible neuronal networks. The oscillations can be suppressed if external inputs eliminate the prominent slow accumulation process, providing a potential explanation of the widely studied Berger effect or event-related desynchronization in neural response. The features of neural oscillations and suppression are confirmed during task processing in monkey eye-movement experiments. Our results suggest that finite-size, columnar neural circuits may play an important role in generating neural oscillations around the critical states, potentially enabling functional advantages of both SOCs and oscillations for sensitive response to transient stimuli.
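
    The accumulation-release process behind the argument can be demonstrated with the simplest sandpile caricature (a 1-D Abelian model with toppling threshold 2 and open boundaries; the paper works with richer sandpile and neuronal-network models):

```python
import random

def avalanche_sizes(n=20, grains=3000, seed=1):
    """1-D Abelian sandpile: drop grains at random sites; any site with
    height >= 2 topples, sending one grain to each neighbour (grains
    fall off at the open boundaries). Returns every avalanche size."""
    rng = random.Random(seed)
    z = [0] * n
    sizes = []
    for _ in range(grains):
        start = rng.randrange(n)
        z[start] += 1
        size = 0
        unstable = [start] if z[start] >= 2 else []
        while unstable:
            i = unstable.pop()
            while z[i] >= 2:        # topple until this site is stable
                z[i] -= 2
                size += 1
                for j in (i - 1, i + 1):
                    if 0 <= j < n:
                        z[j] += 1
                        if z[j] >= 2:
                            unstable.append(j)
        sizes.append(size)
    return sizes
```

    Early drops cause no topplings (slow accumulation), while in the driven steady state single grains trigger system-spanning avalanches (release), the alternation from which the oscillations discussed above can arise in small systems.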

  5. Self-organizing networks for extracting jet features

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Pi, H.; Roegnvaldsson, T.

    1991-01-01

    Self-organizing neural networks are briefly reviewed and compared with supervised learning algorithms like back-propagation. The power of self-organizing networks lies in their capability of displaying typical features in a transparent manner. This is successfully demonstrated with two applications from hadronic jet physics: hadronization model discrimination and separation of b, c and light quarks. (orig.)

  6. Artificial neural network with self-organizing mapping for reactor stability monitoring

    International Nuclear Information System (INIS)

    Okumura, Motofumi; Tsuji, Masashi; Shimazu, Yoichiro; Narabayashi, Tadashi

    2008-01-01

In BWR stability monitoring, the damping ratio has been used as a stability index. A method for estimating the damping ratio by applying Principal Component Analysis (PCA) to neutron detector signals measured with local power range monitors (LPRMs) has been developed. In this method, the measured fluctuating signal is decomposed into independent components, and the component directly related to stability is extracted from among them to determine the damping ratio. For online monitoring, it is necessary to select the stability-related signal component efficiently. The self-organizing map (SOM) is an artificial neural network that can be trained online, without supervision, within a relatively short time. In the present study, the SOM was applied to extract the relevant signal component more quickly and more accurately, and its applicability was confirmed through a feasibility study. (author)
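The SOM learns without supervision by repeatedly moving a winning unit and its map neighbours toward each input. A minimal one-dimensional Kohonen map sketch follows; the map size, learning-rate schedule, and neighbourhood schedule are illustrative choices, not those of the monitoring system described above:

```python
import numpy as np

def train_som(data, n_units=10, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 1-D Kohonen self-organizing map.

    Each step finds the best-matching unit (BMU) for a sample and pulls the
    BMU and its map neighbours toward the sample; the learning rate and the
    neighbourhood width both shrink over time. No labels are used anywhere.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, data.shape[1]))  # prototype vectors
    idx = np.arange(n_units)                       # positions on the 1-D map
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)
    return w
```

After training on clustered data, distinct map regions specialize to distinct clusters, which is the property exploited when separating signal components.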

  7. Railway track circuit fault diagnosis using recurrent neural networks

    NARCIS (Netherlands)

    de Bruin, T.D.; Verbert, K.A.J.; Babuska, R.

    2017-01-01

Timely detection and identification of faults in railway track circuits are crucial for the safety and availability of railway networks. In this paper, the use of the long short-term memory (LSTM) recurrent neural network is proposed to accomplish these tasks based on the commonly available

  8. Medical Concept Normalization in Social Media Posts with Recurrent Neural Networks.

    Science.gov (United States)

    Tutubalina, Elena; Miftahutdinov, Zulfat; Nikolenko, Sergey; Malykh, Valentin

    2018-06-12

    Text mining of scientific libraries and social media has already proven itself as a reliable tool for drug repurposing and hypothesis generation. The task of mapping a disease mention to a concept in a controlled vocabulary, typically to the standard thesaurus in the Unified Medical Language System (UMLS), is known as medical concept normalization. This task is challenging due to the differences in the use of medical terminology between health care professionals and social media texts coming from the lay public. To bridge this gap, we use sequence learning with recurrent neural networks and semantic representation of one- or multi-word expressions: we develop end-to-end architectures directly tailored to the task, including bidirectional Long Short-Term Memory, Gated Recurrent Units with an attention mechanism, and additional semantic similarity features based on UMLS. Our evaluation against a standard benchmark shows that recurrent neural networks improve results over an effective baseline for classification based on convolutional neural networks. A qualitative examination of mentions discovered in a dataset of user reviews collected from popular online health information platforms as well as a quantitative evaluation both show improvements in the semantic representation of health-related expressions in social media. Copyright © 2018. Published by Elsevier Inc.

  9. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles that distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degree of neuronal excitability is regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that an LSM with STDP+IP performs better than an LSM with a random SNN or with an SNN obtained by STDP alone. The noticeable improvement with the proposed method arises because the developed SNN model better reflects the competition among different neurons and more effectively encodes and processes relevant dynamic information through its learning and self-organizing mechanism. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
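The two plasticity rules can be sketched in their common textbook forms; the abstract does not give the exact equations, so the exponential STDP window and the threshold-homeostasis form of IP below are assumptions chosen for illustration:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP on one synapse. dt = t_post - t_pre (ms).

    Potentiate when the presynaptic spike precedes the postsynaptic one,
    depress otherwise; the change decays exponentially with the spike-time
    difference. The weight is kept in [0, 1].
    """
    if dt >= 0:
        w += a_plus * np.exp(-dt / tau)
    else:
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, 1.0))

def ip_update(threshold, rate, target_rate=5.0, eta=0.01):
    """Intrinsic plasticity as threshold homeostasis: raise the firing
    threshold when a neuron fires above its target rate, lower it when it
    fires below, keeping average activity at a moderate level."""
    return threshold + eta * (rate - target_rate)
```

In an LSM built this way, `stdp_update` shapes the excitatory recurrent connectivity while `ip_update` keeps each neuron's excitability in a useful range; the abstract's claim is that the combination outperforms either a random reservoir or STDP alone.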

  10. Fine-tuning and the stability of recurrent neural networks.

    Directory of Open Access Journals (Sweden)

    David MacNeil

    Full Text Available A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely-tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility. Hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems.

  11. A recurrent neural network based on projection operator for extended general variational inequalities.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde

    2010-06-01

    Based on the projection operator, a recurrent neural network is proposed for solving extended general variational inequalities (EGVIs). Sufficient conditions are provided to ensure the global convergence of the proposed neural network based on Lyapunov methods. Compared with the existing neural networks for variational inequalities, the proposed neural network is a modified version of the general projection neural network existing in the literature and capable of solving the EGVI problems. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed neural network.
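The general projection dynamics underlying such networks can be sketched as a simple ODE simulation: dx/dt = -x + P(x - alpha*F(x)), whose equilibria solve the variational inequality. The sketch below uses this classical form with a toy F and box constraint for illustration; the paper's extended network handles the more general EGVI setting:

```python
import numpy as np

def solve_vi(F, project, x0, alpha=0.5, step=0.1, iters=2000):
    """Euler simulation of the classical projection neural network
    dx/dt = -x + P(x - alpha*F(x)).

    An equilibrium x* satisfies x* = P(x* - alpha*F(x*)), i.e. it solves
    the variational inequality F(x*)^T (x - x*) >= 0 for all feasible x.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + step * (project(x - alpha * F(x)) - x)
    return x

# Toy example: F(x) = x - b over the box [0, 1]^2, whose VI solution
# is simply b clipped to the box.
b = np.array([0.3, 1.7])
box = lambda z: np.clip(z, 0.0, 1.0)
x_star = solve_vi(lambda x: x - b, box, x0=np.zeros(2))
```

Here the trajectory converges to (0.3, 1.0): the unconstrained solution b, projected back onto the feasible box in its second coordinate.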

  12. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, traditional optimization techniques are no longer applicable for solving it. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region, which leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  13. Recurrent Neural Network Based Boolean Factor Analysis and its Application to Word Clustering

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2009-01-01

    Roč. 20, č. 7 (2009), s. 1073-1086 ISSN 1045-9227 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.889, year: 2009

  14. Bayesian model ensembling using meta-trained recurrent neural networks

    NARCIS (Netherlands)

    Ambrogioni, L.; Berezutskaya, Y.; Gü ç lü , U.; Borne, E.W.P. van den; Gü ç lü tü rk, Y.; Gerven, M.A.J. van; Maris, E.G.G.

    2017-01-01

    In this paper we demonstrate that a recurrent neural network meta-trained on an ensemble of arbitrary classification tasks can be used as an approximation of the Bayes optimal classifier. This result is obtained by relying on the framework of e-free approximate Bayesian inference, where the Bayesian

  15. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    Science.gov (United States)

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but they are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.

  16. A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization.

    Science.gov (United States)

    Qin, Sitian; Feng, Jiqiang; Song, Jiahui; Wen, Xingnan; Xu, Chen

    2018-03-01

In this paper, based on calculus and the penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization problems. It is proved that for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and finally converges to an optimal solution of the constrained complex-variable convex optimization problem. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has lower model complexity and better convergence. Some numerical examples and an application are presented to substantiate the effectiveness of the proposed neural network.

  17. Bi-directional LSTM Recurrent Neural Network for Chinese Word Segmentation

    OpenAIRE

    Yao, Yushi; Huang, Zheng

    2016-01-01

Recurrent neural networks (RNNs) have been broadly applied to natural language processing (NLP) problems. This kind of neural network is designed for modeling sequential data and has been shown to be quite efficient in sequential tagging tasks. In this paper, we propose to use a bi-directional RNN with long short-term memory (LSTM) units for Chinese word segmentation, which is a crucial preprocessing task for modeling Chinese sentences and articles. Classical methods focus on designing and combining...

  18. A one-layer recurrent neural network for constrained nonconvex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2015-01-01

In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is convergent to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. The lower bounds of the penalty parameter and convergence time are also estimated. In addition, any state of the proposed neural network is convergent to its equilibrium point set, which satisfies the Karush-Kuhn-Tucker conditions of the optimization problem. Moreover, the equilibrium point set is equivalent to the optimal solution set of the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performance of the proposed neural network.
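The exact-penalty idea can be sketched as a gradient flow on f(x) + sigma * sum_i max(0, g_i(x)): when a constraint is violated, its gradient is added to the descent direction with weight sigma. The paper's network is a differential inclusion; the Euler-discretized variant and the toy problem below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def penalty_flow(grad_f, gs, grad_gs, x0, sigma=10.0, step=0.005, iters=4000):
    """Euler simulation of an exact-penalty (sub)gradient flow
    dx/dt = -( grad f(x) + sigma * sum_i 1[g_i(x) > 0] * grad g_i(x) ).

    With sigma large enough, trajectories are driven into the feasible
    region and settle near a KKT point of the constrained problem.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = grad_f(x)
        for g, gg in zip(gs, grad_gs):
            if g(x) > 0:          # constraint violated: add penalty gradient
                d = d + sigma * gg(x)
        x = x - step * d
    return x

# Toy problem: minimize (x1-2)^2 + (x2-2)^2 subject to x1 + x2 <= 2,
# whose optimum is (1, 1) on the constraint boundary.
x_star = penalty_flow(
    grad_f=lambda x: 2 * (x - 2),
    gs=[lambda x: x[0] + x[1] - 2],
    grad_gs=[lambda x: np.array([1.0, 1.0])],
    x0=np.zeros(2),
)
```

Because the penalty is non-smooth, the discrete trajectory chatters in a small band around the boundary; the continuous-time dynamics the paper analyzes do not have this discretization artifact.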

  19. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic...... part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...... error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  20. A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Problems With Equality and Inequality Constraints.

    Science.gov (United States)

    Qin, Sitian; Yang, Xiudong; Xue, Xiaoping; Song, Jiahui

    2017-10-01

The pseudoconvex optimization problem, as an important class of nonconvex optimization problems, plays an important role in scientific and engineering applications. In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems with equality and inequality constraints. It is proved that from any initial state, the state of the proposed neural network reaches the feasible region in finite time and stays there thereafter. It is also proved that the state of the proposed neural network converges to an optimal solution of the related problem. Compared with related existing recurrent neural networks for pseudoconvex optimization problems, the proposed neural network does not need penalty parameters and has better convergence. Meanwhile, the proposed neural network is used to solve three nonsmooth optimization problems, and we make some detailed comparisons with known related results. In the end, some numerical examples are provided to illustrate the effectiveness of the proposed neural network.

  1. Analysis of Recurrent Analog Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    1998-06-01

Full Text Available In this paper, an original rigorous analysis of recurrent analog neural networks, which are built from op-amp neurons, is presented. The analysis, which is based on an approximate model of the operational amplifier, reveals the causes of possible non-stable states and makes it possible to determine the convergence properties of the network. The results of the analysis are discussed with a view to enabling the development of robust and fast analog networks. Special attention is paid to examining the influence of real circuit elements and of the statistical parameters of the processed signals on the parameters of the network.

  2. Growing hierarchical probabilistic self-organizing graphs.

    Science.gov (United States)

    López-Rubio, Ezequiel; Palomo, Esteban José

    2011-07-01

    Since the introduction of the growing hierarchical self-organizing map, much work has been done on self-organizing neural models with a dynamic structure. These models allow adjusting the layers of the model to the features of the input dataset. Here we propose a new self-organizing model which is based on a probabilistic mixture of multivariate Gaussian components. The learning rule is derived from the stochastic approximation framework, and a probabilistic criterion is used to control the growth of the model. Moreover, the model is able to adapt to the topology of each layer, so that a hierarchy of dynamic graphs is built. This overcomes the limitations of the self-organizing maps with a fixed topology, and gives rise to a faithful visualization method for high-dimensional data.

  3. Bifurcation analysis on a generalized recurrent neural network with two interconnected three-neuron components

    International Nuclear Information System (INIS)

    Hajihosseini, Amirhossein; Maleki, Farzaneh; Rokni Lamooki, Gholam Reza

    2011-01-01

    Highlights: → We construct a recurrent neural network by generalizing a specific n-neuron network. → Several codimension 1 and 2 bifurcations take place in the newly constructed network. → The newly constructed network has higher capabilities to learn periodic signals. → The normal form theorem is applied to investigate dynamics of the network. → A series of bifurcation diagrams is given to support theoretical results. - Abstract: A class of recurrent neural networks is constructed by generalizing a specific class of n-neuron networks. It is shown that the newly constructed network experiences generic pitchfork and Hopf codimension one bifurcations. It is also proved that the emergence of generic Bogdanov-Takens, pitchfork-Hopf and Hopf-Hopf codimension two, and the degenerate Bogdanov-Takens bifurcation points in the parameter space is possible due to the intersections of codimension one bifurcation curves. The occurrence of bifurcations of higher codimensions significantly increases the capability of the newly constructed recurrent neural network to learn broader families of periodic signals.

  4. Modeling of biologically motivated self-learning equivalent-convolutional recurrent-multilayer neural structures (BLM_SL_EC_RMNS) for image fragments clustering and recognition

    Science.gov (United States)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2018-03-01

The biologically motivated self-learning equivalence-convolutional recurrent-multilayer neural structures (BLM_SL_EC_RMNS) for image fragment clustering and recognition are discussed. We consider these neural structures and their spatial-invariant equivalent models (SIEMs), based on proposed equivalent two-dimensional functions of image similarity and the corresponding matrix-matrix (or tensor) procedures, using continuous-logic and nonlinear processing as basic operations. These SIEMs can simply describe the signal processing during all training and recognition stages, and they are suitable for unipolar-coded multilevel signals. The clustering efficiency in such models and their implementation depend on the discriminant properties of the neural elements of the hidden layers. Therefore, the main model and architecture parameters and characteristics depend on the applied types of nonlinear processing and on the function used for image comparison or for adaptive-equivalent weighting of input patterns. We show that these SL_EC_RMNSs have several advantages, such as self-learning and self-identification of features and signs of fragment similarity, and the ability to cluster and recognize image fragments with high efficiency and strong mutual correlation. The proposed combined learning-recognition clustering method for fragments, with regard to their structural features, is suitable not only for binary but also for color images, and combines self-learning with the formation of weighted clustered matrix-patterns. Its model is constructed and designed on the basis of recursive continuous-logic and nonlinear processing algorithms and on the k-means method or the winner-takes-all (WTA) method. The experimental results confirmed that fragments with a large number of elements may be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. The experiment for images of different dimensions (a reference

  5. Probing the basins of attraction of a recurrent neural network

    NARCIS (Netherlands)

    Heerema, M.; van Leeuwen, W.A.

    2000-01-01

    Analytical expressions for the weights $w_{ij}(b)$ of the connections of a recurrent neural network are found by taking explicitly into account basins of attraction, the size of which is characterized by a basin parameter $b$. It is shown that a network with $b \

  6. Active Control of Sound based on Diagonal Recurrent Neural Network

    NARCIS (Netherlands)

    Jayawardhana, Bayu; Xie, Lihua; Yuan, Shuqing

    2002-01-01

    Recurrent neural network has been known for its dynamic mapping and better suited for nonlinear dynamical system. Nonlinear controller may be needed in cases where the actuators exhibit the nonlinear characteristics, or in cases when the structure to be controlled exhibits nonlinear behavior. The

  7. Usage of self-organizing neural networks in evaluation of consumer behaviour

    Directory of Open Access Journals (Sweden)

    Jana Weinlichová

    2010-01-01

Full Text Available This article deals with the evaluation of consumer data by artificial intelligence methods. The methodological part describes learning algorithms for Kohonen maps based on the principles of supervised, unsupervised and semi-supervised learning. The principles of supervised and unsupervised learning are compared, and from the constraints of these principles an advantage of semi-supervised learning is pointed out. Three algorithms are described for semi-supervised learning: label propagation, self-training and co-training. In particular, the use of co-training in Kohonen map learning appears to be a promising direction for further research. For the concrete application of a Kohonen neural network to consumer expenses, the unsupervised learning method, self-organization, has been chosen, so the features of the data are evaluated by the clustering method known as Kohonen maps. The input data represent the consumer expenses of households in the countries of the European Union and are characterized by a 12-dimensional vector according to the commodity classification. The data are evaluated over several years, so we can see their distribution, similarity or dissimilarity, and also their evolution. In the article we discuss other uses of this method for this type of data and also compare our results with those reached by hierarchical cluster analysis.

  8. Predicting local field potentials with recurrent neural networks.

    Science.gov (United States)

    Kim, Louis; Harer, Jacob; Rangamani, Akshay; Moran, James; Parks, Philip D; Widge, Alik; Eskandar, Emad; Dougherty, Darin; Chin, Sang Peter

    2016-08-01

We present a recurrent neural network using LSTM (Long Short Term Memory) units that is capable of modeling and predicting local field potentials. We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs 1, 10, and 100 milliseconds forward in time. Our results show that prediction using the LSTM outperforms regression when predicting 10 and 100 milliseconds forward in time.

  9. Adaptive Filtering Using Recurrent Neural Networks

    Science.gov (United States)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum- variance filters. In that they do not require statistical models of noise, the neural- network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
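The standard linear Kalman filter whose principles the method builds on can be sketched in a few lines; this is the textbook baseline whose linearity and Gaussian-noise assumptions the neural-network filter relaxes, not the neural filter itself:

```python
import numpy as np

def kalman_filter(zs, A, H, Q, R, x0, P0):
    """Standard linear Kalman filter.

    State model x_{k+1} = A x_k + w (process noise covariance Q);
    measurement z_k = H x_k + v (measurement noise covariance R).
    Returns the filtered state estimates for the measurement sequence zs.
    """
    x, P = np.asarray(x0, dtype=float), np.asarray(P0, dtype=float)
    estimates = []
    for z in zs:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

Run on noisy measurements of a constant state, the estimate converges toward the true value with variance well below that of any single measurement; the recurrent-network filter described above plays an analogous role without requiring the linear model A, H or the Gaussian noise statistics Q, R.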

  10. ReSeg: A Recurrent Neural Network-Based Model for Semantic Segmentation

    OpenAIRE

    Visin, Francesco; Ciccone, Marco; Romero, Adriana; Kastner, Kyle; Cho, Kyunghyun; Bengio, Yoshua; Matteucci, Matteo; Courville, Aaron

    2015-01-01

    We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNN that sweep the image horizontally ...

  11. Individual Identification Using Functional Brain Fingerprint Detected by Recurrent Neural Network.

    Science.gov (United States)

    Chen, Shiyang; Hu, Xiaoping P

    2018-03-20

Individual identification based on brain function has gained traction in the literature. Investigating individual differences in brain function can provide additional insights into the brain. In this work, we introduce a recurrent neural network based model for identifying individuals based on only a short segment of resting state functional MRI data. In addition, we demonstrate how the global signal and differences in atlases affect individual identifiability. Furthermore, we investigate neural network features that exhibit the uniqueness of each individual. The results indicate that our model is able to identify individuals based on neural features and provides additional information regarding brain dynamics.

  12. Boundedness and stability for recurrent neural networks with variable coefficients and time-varying delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde

    2003-01-01

    In this Letter, the problems of boundedness and stability for a general class of non-autonomous recurrent neural networks with variable coefficients and time-varying delays are analyzed via employing Young inequality technique and Lyapunov method. Some simple sufficient conditions are given for boundedness and stability of the solutions for the recurrent neural networks. These results generalize and improve the previous works, and they are easy to check and apply in practice. Two illustrative examples and their numerical simulations are also given to demonstrate the effectiveness of the proposed results

  13. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    Directory of Open Access Journals (Sweden)

    Erik Marchi

    2017-01-01

Full Text Available In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No studies have focused on comparing previous efforts to automatically recognize novel events from audio signals and on giving a broad and in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and to provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% average F-measure over the three databases.

  14. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection.

    Science.gov (United States)

    Marchi, Erik; Vesperini, Fabio; Squartini, Stefano; Schuller, Björn

    2017-01-01

In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No studies have focused on comparing previous efforts to automatically recognize novel events from audio signals and on giving a broad and in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and to provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% average F-measure over the three databases.
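The novelty-detection recipe above (train an autoencoder on normal data, flag inputs whose reconstruction error is high) can be sketched with a linear autoencoder standing in for the LSTM denoising autoencoders; this illustrates only the thresholding logic, under that simplifying substitution:

```python
import numpy as np

def fit_linear_autoencoder(X, k=2):
    """Fit a linear autoencoder (tied weights) on normal data via PCA:
    the top-k principal directions act as encoder and decoder."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(X, mu, W):
    """Per-sample reconstruction error: encode to k dimensions, decode
    back, and measure the residual norm."""
    Z = (X - mu) @ W.T          # encode
    Xr = Z @ W + mu             # decode
    return np.linalg.norm(X - Xr, axis=1)
```

Samples resembling the training data reconstruct almost perfectly, while samples off the learned subspace produce a large residual; thresholding that residual (e.g. at a high quantile of the training errors) yields the novelty detector.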

  15. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    Science.gov (United States)

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.

  16. Bach in 2014: Music Composition with Recurrent Neural Network

    OpenAIRE

    Liu, I-Ting; Ramakrishnan, Bhiksha

    2014-01-01

    We propose a framework for computer music composition that uses resilient propagation (RProp) and a long short-term memory (LSTM) recurrent neural network. In this paper, we show that an LSTM network learns the structure and characteristics of music pieces properly by demonstrating its ability to recreate music. We also show that predicting existing music using RProp outperforms backpropagation through time (BPTT).

  17. Convolutional neural networks for prostate cancer recurrence prediction

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Arora, Ashish; Kumar, Abhay; Gupta, Sanchit; Sethi, Amit; Gann, Peter H.

    2017-03-01

    Accurate prediction of the treatment outcome is important for cancer treatment planning. We present an approach to predict prostate cancer (PCa) recurrence after radical prostatectomy using tissue images. We used a cohort whose case vs. control (recurrent vs. non-recurrent) status had been determined using post-treatment follow up. Further, to aid the development of novel biomarkers of PCa recurrence, cases and controls were paired based on matching of other predictive clinical variables such as Gleason grade, stage, age, and race. For this cohort, tissue resection microarray with up to four cores per patient was available. The proposed approach is based on deep learning, and its novelty lies in the use of two separate convolutional neural networks (CNNs) - one to detect individual nuclei even in the crowded areas, and the other to classify them. To detect nuclear centers in an image, the first CNN predicts distance transform of the underlying (but unknown) multi-nuclear map from the input HE image. The second CNN classifies the patches centered at nuclear centers into those belonging to cases or controls. Voting across patches extracted from image(s) of a patient yields the probability of recurrence for the patient. The proposed approach gave 0.81 AUC for a sample of 30 recurrent cases and 30 non-recurrent controls, after being trained on an independent set of 80 case-controls pairs. If validated further, such an approach might help in choosing between a combination of treatment options such as active surveillance, radical prostatectomy, radiation, and hormone therapy. It can also generalize to the prediction of treatment outcomes in other cancers.
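The final voting step, aggregating per-patch classifications into a patient-level recurrence probability, reduces to a simple average. The patch scores below are hypothetical; in the paper they come from the second CNN.

```python
# Sketch of the per-patient voting step (assumption: each patch has already
# been classified by the second CNN; the scores here are made-up
# probabilities of the patch belonging to a recurrent case).

def patient_recurrence_probability(patch_scores):
    """Average the per-patch 'recurrent' probabilities into a patient-level score."""
    return sum(patch_scores) / len(patch_scores)

# Hypothetical per-patch outputs for one patient.
patches = [0.9, 0.7, 0.8, 0.6]
print(patient_recurrence_probability(patches))  # → 0.75
```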

  18. Neural Responses to Heartbeats in the Default Network Encode the Self in Spontaneous Thoughts

    Science.gov (United States)

    Babo-Rebelo, Mariana; Richter, Craig G.

    2016-01-01

    The default network (DN) has been consistently associated with self-related cognition, but also with bodily state monitoring and autonomic regulation. We hypothesized that these two seemingly disparate functional roles of the DN are functionally coupled, in line with theories proposing that selfhood is grounded in the neural monitoring of internal organs, such as the heart. We measured with magnetoencephalography neural responses evoked by heartbeats while human participants freely mind-wandered. When interrupted by a visual stimulus at random intervals, participants scored the self-relatedness of the interrupted thought. They evaluated their involvement as the first-person perspective subject or agent in the thought (“I”), and on another scale to what degree they were thinking about themselves (“Me”). During the interrupted thought, neural responses to heartbeats in two regions of the DN, the ventral precuneus and the ventromedial prefrontal cortex, covaried, respectively, with the “I” and the “Me” dimensions of the self, even at the single-trial level. No covariation between self-relatedness and peripheral autonomic measures (heart rate, heart rate variability, pupil diameter, electrodermal activity, respiration rate, and phase) or alpha power was observed. Our results reveal a direct link between selfhood and neural responses to heartbeats in the DN and thus directly support theories grounding selfhood in the neural monitoring of visceral inputs. More generally, the tight functional coupling between self-related processing and cardiac monitoring observed here implies that, even in the absence of measured changes in peripheral bodily measures, physiological and cognitive functions have to be considered jointly in the DN. SIGNIFICANCE STATEMENT The default network (DN) has been consistently associated with self-processing but also with autonomic regulation. We hypothesized that these two functions could be functionally coupled in the DN, inspired by


  19. A one-layer recurrent neural network for constrained nonsmooth invex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2014-02-01

    Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of constrained invex optimization problems, with a sufficiently large penalty parameter. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any neural state is globally convergent to the feasible region in finite time and stays there thereafter. The lower bounds of the penalty parameter and convergence time are also estimated. Two numerical examples are provided to illustrate the performances of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.
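As a rough illustration of the exact-penalty idea behind such networks, the following Euler-discretized gradient flow minimizes a toy objective under one inequality constraint. The problem, step size, and penalty weight are illustrative choices, not the network analyzed in the paper.

```python
# Euler-discretized sketch of penalty-based neural dynamics (assumption:
# the toy problem min (x - 2)^2 s.t. x <= 1, the step size, and the penalty
# weight are all illustrative, not the model from the paper).

def penalty_flow(x0, rho=10.0, dt=0.005, steps=2000):
    """Gradient flow on the exact penalty function f(x) + rho * max(0, x - 1)."""
    x = x0
    for _ in range(steps):
        grad = 2.0 * (x - 2.0)   # gradient of f(x) = (x - 2)^2
        if x > 1.0:              # subgradient of the penalty term rho * max(0, x - 1)
            grad += rho
        x -= dt * grad
    return x

# The unconstrained minimum is x = 2, but with a sufficiently large penalty
# parameter the state settles near the constrained optimum x = 1.
print(round(penalty_flow(0.0), 2))
```

Starting either inside (x0 = 0) or outside (x0 = 3) the feasible region, the state is driven to a small neighborhood of x = 1, mirroring the finite-time convergence to the feasible region described in the abstract.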

  20. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    Science.gov (United States)

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1 -norm minimization problems.

  1. Self-organized criticality in a network of interacting neurons

    NARCIS (Netherlands)

    Cowan, J.D.; Neuman, J.; Kiewiet, B.; van Drongelen, W.

    2013-01-01

    This paper contains an analysis of a simple neural network that exhibits self-organized criticality. Such criticality follows from the combination of a simple neural network with an excitatory feedback loop that generates bistability, in combination with an anti-Hebbian synapse in its input pathway.

  2. Using a multi-state recurrent neural network to optimize loading patterns in BWRs

    International Nuclear Information System (INIS)

    Ortiz, Juan Jose; Requena, Ignacio

    2004-01-01

    A Multi-State Recurrent Neural Network is used to optimize Loading Patterns (LP) in BWRs. We have proposed an energy function that depends on fuel assembly positions and their nuclear cross sections to carry out the optimization. The Multi-State Recurrent Neural Network creates LPs that satisfy the Radial Power Peaking Factor and maximize the effective multiplication factor at the Beginning of the Cycle, and that also satisfy the Minimum Critical Power Ratio and Maximum Linear Heat Generation Rate at the End of the Cycle, thereby maximizing the effective multiplication factor. In order to evaluate the LPs, we used a trained back-propagation neural network to predict the parameter values instead of using a reactor core simulator, which saved considerable computation time in the search process. We applied this method to find optimal LPs for five cycles of the Laguna Verde Nuclear Power Plant (LVNPP) in Mexico.
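Energy-function-driven recurrent networks of this kind descend an energy over candidate configurations. A minimal Hopfield-style sketch with made-up weights follows; the paper's actual energy additionally encodes cross sections and the thermal-limit constraints.

```python
# Minimal Hopfield-style recurrent network sketch (assumption: the weights
# and states are illustrative toys, not the loading-pattern energy of the
# paper, which also encodes nuclear cross sections and safety constraints).

def energy(w, s):
    """Hopfield energy E = -1/2 * sum_ij w[i][j] * s[i] * s[j]."""
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def update(w, s):
    """One pass of asynchronous sign updates; never increases the energy."""
    n = len(s)
    for i in range(n):
        field = sum(w[i][j] * s[j] for j in range(n) if j != i)
        s[i] = 1 if field >= 0 else -1
    return s

w = [[0, 1, -1], [1, 0, -1], [-1, -1, 0]]  # symmetric weights, zero diagonal
s = [1, -1, 1]
e0 = energy(w, s)
s = update(w, s)
assert energy(w, s) <= e0  # energy is non-increasing under asynchronous updates
print(s)  # → [-1, -1, 1]
```

One pass already reaches a stable low-energy state; in the loading-pattern setting each state would encode a fuel-assembly placement instead of a spin.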

  3. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    OpenAIRE

    Mehrshad Salmasi; Homayoun Mahdavi-Nasab

    2012-01-01

    Active noise control is based on the destructive interference between the primary noise and generated noise from the secondary source. An antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, performance of the neural networks is evaluated in active cancellation of sound noise. For this reason, feedforward and recurrent neural networks are designed and trained. After training, performance of the feedforwrad and recurrent networks in n...

  4. A recurrent neural network for solving bilevel linear programming problem.

    Science.gov (United States)

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian

    2014-04-01

    In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.

  5. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    OpenAIRE

    Francisco Javier Ordóñez; Daniel Roggen

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we pro...

  6. Self-organized criticality in neural networks

    Science.gov (United States)

    Makarenkov, Vladimir I.; Kirillov, A. B.

    1991-08-01

    Possible mechanisms for creating different types of persistent states for information processing are considered. Two origins of criticality are presented: self-organized criticality and phase transition. A comparative analysis of their behavior is given, demonstrating that, despite their similarity, there are important differences. These differences can play a significant role in explaining the physical basis of such higher brain functions as short-term memory and attention.

  7. Prediction of Bladder Cancer Recurrences Using Artificial Neural Networks

    Science.gov (United States)

    Zulueta Guerrero, Ekaitz; Garay, Naiara Telleria; Lopez-Guede, Jose Manuel; Vilches, Borja Ayerdi; Iragorri, Eider Egilegor; Castaños, David Lecumberri; de La Hoz Rastrollo, Ana Belén; Peña, Carlos Pertusa

    Even though considerable advances have been made in the field of early diagnosis, there is no simple, cheap, and non-invasive method that can be applied to the clinical monitoring of bladder cancer patients. Moreover, bladder cancer recurrences, i.e., the reappearance of the tumour after its surgical resection, cannot be predicted in the current clinical setting. In this study, Artificial Neural Networks (ANN) were used to assess how different combinations of classical clinical parameters (stage-grade and age) and two urinary markers (a growth factor and a pro-inflammatory mediator) could predict post-surgical recurrences in bladder cancer patients. Different ANN methods, input parameter combinations, and recurrence-related output variables were used, and the resulting positive and negative prediction rates compared. The MultiLayer Perceptron (MLP) was selected as the most predictive model, and the urinary markers showed the highest sensitivity, correctly predicting 50% of the patients that would recur in a 2-year follow-up period.

  8. Web server's reliability improvements using recurrent neural networks

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan

    2012-01-01

    In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity for the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (t...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group...

  9. Artificial neural network with self-organizing mapping for reactor stability monitoring

    International Nuclear Information System (INIS)

    Okumura, Motofumi; Tsuji, Masashi; Shimazu, Yoichiro

    2009-01-01

    In boiling water reactor (BWR) stability monitoring, damping ratio has been used as a stability index. A method for estimating the damping ratio by applying Principal Component Analysis (PCA) to neutron detector signals measured with local power range monitors (LPRMs) had been developed; in this method, measured fluctuating signal is decomposed into some independent components and the signal components directly related to stability are extracted among them to determine the damping ratio. For online monitoring, it is necessary to select stability related signal components efficiently. The self-organizing map (SOM) is one of the artificial neural networks (ANNs) and has the characteristics such that online learning is possible without supervised learning within a relatively short time. In the present study, the SOM was applied to extract the relevant signal components more quickly and more accurately, and the availability was confirmed through the feasibility study. For realizing online stability monitoring only with ANNs, another type of ANN that performs online processing of PCA was combined with SOM. And stability monitoring performance was investigated. (author)

  10. Evaluation of the cranial base in amnion rupture sequence involving the anterior neural tube: implications regarding recurrence risk.

    Science.gov (United States)

    Jones, Kenneth Lyons; Robinson, Luther K; Benirschke, Kurt

    2006-09-01

    Amniotic bands can cause disruption of the cranial end of the developing fetus, leading in some cases to a neural tube closure defect. Although the recurrence risk for unaffected parents of an affected child with a defect in which the neural tube closed normally but was subsequently disrupted by amniotic bands is negligible, for a primary defect in closure of the neural tube to which amnion has subsequently adhered the recurrence risk is 1.7%. Because primary defects of neural tube closure are characterized by typical abnormalities of the base of the skull, evaluation of the cranial base in such fetuses provides an approach for distinguishing between these 2 mechanisms. This distinction has implications regarding recurrence risk. The skull bases of 2 fetuses with amnion rupture sequence involving the cranial end of the neural tube were compared to that of 1 fetus with anencephaly as well as that of a structurally normal fetus. The skulls were cleaned, fixed in 10% formalin, recleaned, and then exposed to 10% KOH solution. After washing and recleaning, the skulls were exposed to hydrogen peroxide for bleaching and photography. Despite involvement of the anterior neural tube in both fetuses with amnion rupture sequence, in Case 3 the cranial base was normal while in Case 4 the cranial base was similar to that seen in anencephaly. This technique provides a method for determining the developmental pathogenesis of anterior neural tube defects in cases of amnion rupture sequence. As such, it provides information that can be used to counsel parents of affected children with respect to recurrence risk.

  11. CloudScan - A Configuration-Free Invoice Analysis System Using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Palm, Rasmus Berg; Winther, Ole; Laws, Florian

    2017-01-01

    We present CloudScan; an invoice analysis system that requires zero configuration or upfront annotation. In contrast to previous work, CloudScan does not rely on templates of invoice layout, instead it learns a single global model of invoices that naturally generalizes to unseen invoice layouts....... The model is trained using data automatically extracted from end-user provided feedback. This automatic training data extraction removes the requirement for users to annotate the data precisely. We describe a recurrent neural network model that can capture long range context and compare it to a baseline...... logistic regression model corresponding to the current CloudScan production system. We train and evaluate the system on 8 important fields using a dataset of 326,471 invoices. The recurrent neural network and baseline model achieve 0.891 and 0.887 average F1 scores respectively on seen invoice layouts...

  12. Encoding of phonology in a recurrent neural model of grounded speech

    NARCIS (Netherlands)

    Alishahi, Afra; Barking, Marie; Chrupala, Grzegorz; Levy, Roger; Specia, Lucia

    2017-01-01

    We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how

  13. Detection of nonstationary transition to synchronized states of a neural network using recurrence analyses

    Science.gov (United States)

    Budzinski, R. C.; Boaretto, B. R. R.; Prado, T. L.; Lopes, S. R.

    2017-07-01

    We study the stability of asymptotic states displayed by a complex neural network. We focus on the loss of stability of a stationary state of networks using recurrence quantifiers as tools to diagnose local and global stabilities as well as the multistability of a coupled neural network. Numerical simulations of a neural network composed of 1024 neurons in a small-world connection scheme are performed using the model of Braun et al. [Int. J. Bifurcation Chaos 08, 881 (1998), 10.1142/S0218127498000681], which is a modified version of the Hodgkin-Huxley model [J. Physiol. 117, 500 (1952)]. To validate the analyses, the results are compared with those produced by Kuramoto's order parameter [Chemical Oscillations, Waves, and Turbulence (Springer-Verlag, Berlin Heidelberg, 1984)]. We show that recurrence tools making use of just integrated signals provided by the networks, such as local field potential (LFP) signals or mean-field values, bring new results on the understanding of neural behavior occurring before the synchronization states. In particular we show the occurrence of different stationary and nonstationary asymptotic states.
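A minimal example of a recurrence quantifier of the kind used in such analyses is the recurrence rate, the fraction of time-point pairs whose values fall within a threshold of each other. The threshold and toy signals below are illustrative:

```python
# Sketch of a basic recurrence quantifier (assumption: the threshold and the
# toy time series are illustrative; the paper applies such quantifiers to
# LFP-like integrated or mean-field signals of the network).

def recurrence_rate(x, eps):
    """Fraction of pairs (i, j) whose values lie within eps of each other."""
    n = len(x)
    close = sum(1 for i in range(n) for j in range(n) if abs(x[i] - x[j]) <= eps)
    return close / (n * n)

periodic = [0, 1, 0, 1, 0, 1]   # highly recurrent signal
drifting = [0, 1, 2, 3, 4, 5]   # non-recurrent (monotonic) signal
print(recurrence_rate(periodic, 0.5))  # → 0.5
print(recurrence_rate(drifting, 0.5))  # much lower: only the diagonal recurs
```

A synchronized or stationary network state yields a high recurrence rate of the mean-field signal, while a drifting, nonstationary state yields a low one.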

  14. Fast computation with spikes in a recurrent neural network

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.; Seung, H. Sebastian

    2002-01-01

    Neural networks with recurrent connections are sometimes regarded as too slow at computation to serve as models of the brain. Here we analytically study a counterexample, a network consisting of N integrate-and-fire neurons with self-excitation, all-to-all inhibition, instantaneous synaptic coupling, and constant external driving inputs. When the inhibition and/or excitation are large enough, the network performs a winner-take-all computation for all possible external inputs and initial states of the network. The computation is done very quickly: as soon as the winner spikes once, the computation is completed, since no other neurons will spike. For some initial states, the winner is the first neuron to spike, and the computation is done at the first spike of the network. In general, there are M potential winners, corresponding to the top M external inputs. When the external inputs are close in magnitude, M tends to be larger. If M>1, the selection of the actual winner is strongly influenced by the initial states. If a special relation between the excitation and inhibition is satisfied, the network always selects the neuron with the maximum external input as the winner.
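A discrete-time caricature of this winner-take-all dynamics can be simulated directly. The leaky integrate-and-fire discretization and all parameters below are illustrative choices rather than the analytically studied model:

```python
# Discrete-time sketch of winner-take-all dynamics (assumption: the Euler
# discretization, threshold, and inhibition strength are illustrative, not
# the instantaneous-coupling model analyzed in the paper).

def winner_take_all(inputs, dt=0.01, steps=5000, threshold=1.0, inhibition=5.0):
    """Return the indices of neurons that ever spike; with strong all-to-all
    inhibition only the neuron with the largest input should spike."""
    n = len(inputs)
    v = [0.0] * n
    spikers = set()
    for _ in range(steps):
        for k in range(n):
            v[k] += dt * (inputs[k] - v[k])   # leaky integration toward input
        for k in range(n):
            if v[k] >= threshold:
                spikers.add(k)
                v[k] = 0.0                    # reset after spiking
                for j in range(n):            # all-to-all inhibition
                    if j != k:
                        v[j] -= inhibition
    return spikers

print(winner_take_all([1.2, 1.5, 1.1]))  # → {1}: only the largest input wins
```

Starting from equal initial conditions, the neuron with the largest input reaches threshold first and, once it spikes, the inhibition keeps every other neuron subthreshold forever, which is the fast computation described in the abstract.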

  15. Classification of conductance traces with recurrent neural networks

    Science.gov (United States)

    Lauritzen, Kasper P.; Magyarkuti, András; Balogh, Zoltán; Halbritter, András; Solomon, Gemma C.

    2018-02-01

    We present a new automated method for structural classification of the traces obtained in break junction experiments. Using recurrent neural networks trained on the traces of minimal cross-sectional area in molecular dynamics simulations, we successfully separate the traces into two classes: point contact or nanowire. This is done without any assumptions about the expected features of each class. The trained neural network is applied to experimental break junction conductance traces, and it separates the classes as well as the previously used experimental methods. The effect of using partial conductance traces is explored, and we show that the method performs equally well using full or partial traces (as long as the trace just prior to breaking is included). When only the initial part of the trace is included, the results are still better than random chance. Finally, we show that the neural network classification method can be used to classify experimental conductance traces without using simulated results for training, but instead training the network on a few representative experimental traces. This offers a tool to recognize some characteristic motifs of the traces, which can be hard to find by simple data selection algorithms.

  16. Direction-of-change forecasting using a volatility-based recurrent neural network

    NARCIS (Netherlands)

    Bekiros, S.D.; Georgoutsos, D.A.

    2008-01-01

    This paper investigates the profitability of a trading strategy, based on recurrent neural networks, that attempts to predict the direction-of-change of the market in the case of the NASDAQ composite index. The sample extends over the period 8 February 1971 to 7 April 1998, while the sub-period 8

  17. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits

    Science.gov (United States)

    2018-01-01

    Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing. PMID:29408930

  18. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits.

    Directory of Open Access Journals (Sweden)

    Volker Pernice

    2018-02-01

    Full Text Available Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures-recurrent connections, shared feed-forward projections, and shared gain fluctuations-on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.

  19. Estimating Ads’ Click through Rate with Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    Chen Qiao-Hong

    2016-01-01

    Full Text Available With the development of the Internet, online advertising has spread to every corner of the world, and estimation of ads' click-through rate (CTR) is an important way to improve online advertising revenue. Compared with linear models, nonlinear models can capture much more complex relationships among a large number of nonlinear features, so as to improve the accuracy of CTR estimation. The recurrent neural network (RNN) based on Long Short-Term Memory (LSTM) is an improved model of the feedback neural network with a ring structure. This model overcomes the vanishing-gradient problem of the plain RNN. Experiments show that the LSTM-based RNN outperforms the linear models and can effectively improve the accuracy of ads' CTR estimation.
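The gating structure that gives the LSTM its advantage over a plain RNN can be written out for a single scalar cell. The weights below are hand-picked for illustration only:

```python
# Single LSTM cell step in plain Python (assumption: scalar states and
# hypothetical weights, purely to illustrate the gated, additive cell-state
# path that mitigates the vanishing-gradient problem of plain RNNs).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, w):
    """One step of an LSTM cell with scalar state.
    w maps each gate to an (input-weight, recurrent-weight, bias) triple."""
    i = sigmoid(w["i"][0] * x + w["i"][1] * h + w["i"][2])    # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h + w["f"][2])    # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h + w["g"][2])  # candidate value
    c_new = f * c + i * g          # cell state: additive memory path
    h_new = o * math.tanh(c_new)   # hidden state
    return h_new, c_new

w = {k: (1.0, 0.5, 0.0) for k in "ifog"}  # hypothetical weights
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.5]:  # a short toy feature sequence
    h, c = lstm_step(x, h, c, w)
print(round(h, 3))
```

In a CTR model, such a cell (with vector states) would consume a user's click-history features step by step, and a sigmoid output layer on the final hidden state would give the estimated click probability.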

  20. A Novel Recurrent Neural Network for Manipulator Control With Improved Noise Tolerance.

    Science.gov (United States)

    Li, Shuai; Wang, Huanqing; Rafique, Muhammad Usman

    2017-04-12

    In this paper, we propose a novel recurrent neural network to resolve the redundancy of manipulators for efficient kinematic control in the presence of polynomial-type noise. Leveraging the high-order derivative properties of polynomial noise, a deliberately devised neural network is proposed to eliminate the impact of the noise and recover accurate tracking of desired trajectories in the workspace. Rigorous analysis shows that the proposed neural law stabilizes the system dynamics and that the position tracking error converges to zero in the presence of noise. Extensive simulations verify the theoretical results. Numerical comparisons show that existing dual neural solutions lose stability when exposed to large constant or time-varying noise. In contrast, the proposed approach works well and has a low tracking error comparable to noise-free situations.

  1. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    Science.gov (United States)

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely play significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension-reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing-rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on it are also developed. The final analytic descriptions provide compact formulas for how the relationship between intrinsic and network heterogeneity determines the firing-rate heterogeneity dynamics in various settings.

  2. Multi-stability and almost periodic solutions of a class of recurrent neural networks

    International Nuclear Information System (INIS)

    Liu Yiguang; You Zhisheng

    2007-01-01

This paper studies multi-stability and the existence of almost periodic solutions of a class of recurrent neural networks with bounded activation functions. After introducing a sufficient condition ensuring multi-stability, many criteria guaranteeing the existence of almost periodic solutions are derived using Mawhin's coincidence degree theory. All the criteria are constructed without assuming that the activation functions are smooth, monotonic or Lipschitz continuous, or that the networks contain periodic variables (such as periodic coefficients, periodic inputs or periodic activation functions), so all criteria can easily be extended to fit many concrete forms of neural networks, such as Hopfield neural networks or cellular neural networks. Finally, various simulations are employed to illustrate the criteria.

  3. Study of hourly and daily solar irradiation forecast using diagonal recurrent wavelet neural networks

    International Nuclear Information System (INIS)

    Cao Jiacong; Lin Xingchun

    2008-01-01

An accurate forecast of solar irradiation has been required for various solar energy applications and environmental impact analyses in recent years. Comparatively, various irradiation forecast models based on artificial neural networks (ANN) perform much better in accuracy than many conventional prediction models. However, the forecast precision of most existing ANN-based models has so far not satisfied researchers and engineers, and the generalization capability of these networks needs further improvement. Combining the prominent dynamic properties of a recurrent neural network (RNN) with the enhanced ability of a wavelet neural network (WNN) in mapping nonlinear functions, a diagonal recurrent wavelet neural network (DRWNN) is newly established in this paper to perform fine forecasting of hourly and daily global solar irradiance. Some additional steps, e.g. applying historical information of cloud cover to the sample data sets and the cloud cover from the weather forecast to the network input, are adopted to help enhance the forecast precision. Besides, a specially scheduled two-phase training algorithm is adopted. As examples, both hourly and daily irradiance forecasts are completed using sample data sets from Shanghai and Macau, and comparisons between irradiation models show that the DRWNN models are definitely more accurate.

  4. Simultaneous multichannel signal transfers via chaos in a recurrent neural network.

    Science.gov (United States)

    Soma, Ken-ichiro; Mori, Ryota; Sato, Ryuichi; Furumai, Noriyuki; Nara, Shigetoshi

    2015-05-01

We propose a neural network model that demonstrates the phenomenon of signal transfer between separated neuron groups via other chaotic neurons that show no apparent correlations with the input signal. The model is a recurrent neural network in which it is supposed that synchronous behavior between small groups of input and output neurons has been learned as fragments of high-dimensional memory patterns, and depletion of neural connections results in chaotic wandering dynamics. Computer experiments show that when a strong oscillatory signal is applied to an input group in the chaotic regime, the signal is successfully transferred to the corresponding output group, although no correlation is observed between the input signal and the intermediary neurons. Signal transfer is also observed when multiple signals are applied simultaneously to separate input groups belonging to different memory attractors. In this sense, simultaneous multichannel communications are realized, and the chaotic neural dynamics acts as a signal transfer medium in which the signal appears to be hidden.

  5. Self-organization via active exploration in robotic applications

    Science.gov (United States)

    Ogmen, H.; Prakash, R. V.

    1992-01-01

We describe a neural network based robotic system. Unlike traditional robotic systems, our approach focuses on non-stationary problems. We indicate that self-organization capability is necessary for any system to operate successfully in a non-stationary environment. We suggest that self-organization should be based on an active exploration process. We investigated neural architectures having novelty sensitivity, selective attention, reinforcement learning, habit formation, and flexible-criteria categorization properties, and analyzed the resulting behavior (consisting of an intelligent initiation of exploration) by computer simulations. While various computer vision researchers have recently acknowledged the importance of active processes (Swain and Stricker, 1991), the proposed approaches within the new framework still suffer from a lack of self-organization (Aloimonos and Bandyopadhyay, 1987; Bajcsy, 1988). A self-organizing, neural network based robot (MAVIN) has recently been proposed (Baloch and Waxman, 1991). This robot has the capability of position, size, and rotation invariant pattern categorization, recognition, and Pavlovian conditioning. Our robot does not initially have invariant processing properties. The reason for this is the emphasis we put on active exploration. We maintain the point of view that such invariant properties emerge from an internalization of exploratory sensory-motor activity. Rather than coding the equilibria of such mental capabilities, we seek to capture their dynamics, to understand on the one hand how the emergence of such invariances is possible, and on the other hand the dynamics that lead to these invariances. The second point is crucial for an adaptive robot to acquire new invariances in non-stationary environments, as demonstrated by the inverting glass experiments of Helmholtz.
We will introduce Pavlovian conditioning circuits in our future work for the precise objective of achieving the generation, coordination, and internalization

  6. A novel recurrent neural network with one neuron and finite-time convergence for k-winners-take-all operation.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Cao, Jinde

    2010-07-01

In this paper, based on a one-neuron recurrent neural network, a novel k-winners-take-all (k-WTA) network is proposed. Finite-time convergence of the proposed neural network is proved using the Lyapunov method. The k-WTA operation is first converted equivalently into a linear programming problem. Then, a one-neuron recurrent neural network is proposed to get the kth or (k+1)th largest inputs of the k-WTA problem. Furthermore, a k-WTA network is designed based on the proposed neural network to perform the k-WTA operation. Compared with the existing k-WTA networks, the proposed network has a simple structure and finite-time convergence. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed k-WTA network.
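    The k-WTA operation itself is easy to state: output 1 for the k largest inputs and 0 otherwise, which reduces to thresholding between the kth and (k+1)th largest values — exactly the two values the one-neuron network tracks. A minimal sketch of the operation (not the authors' dynamical network; assumes k < n and no ties at the threshold):

```python
def k_wta(inputs, k):
    """Binary k-winners-take-all: 1 for the k largest inputs, 0 otherwise."""
    order = sorted(inputs, reverse=True)
    # Threshold midway between the kth and (k+1)th largest inputs,
    # mirroring the role those two values play in the LP formulation.
    threshold = (order[k - 1] + order[k]) / 2
    return [1 if u > threshold else 0 for u in inputs]

print(k_wta([0.3, 1.2, -0.5, 0.9, 0.1], 2))  # -> [0, 1, 0, 1, 0]
```

    The winners here are 1.2 and 0.9; everything below the midpoint threshold 0.6 is suppressed.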

  7. Classifying galaxy spectra at 0.5 < z < 1 with self-organizing maps

    Science.gov (United States)

    Rahmani, S.; Teimoorinia, H.; Barmby, P.

    2018-05-01

The spectrum of a galaxy contains information about its physical properties. Classifying spectra using templates helps elucidate the nature of a galaxy's energy sources. In this paper, we investigate the use of self-organizing maps in classifying galaxy spectra against templates. We trained semi-supervised self-organizing map networks using a set of templates covering the wavelength range from far ultraviolet to near infrared. The trained networks were used to classify the spectra of a sample of 142 galaxies with 0.5 < z < 1, and the classifications were compared with those from K-means clustering, a supervised neural network, and chi-squared minimization. Spectra corresponding to quiescent galaxies were more likely to be classified similarly by all methods, while starburst spectra showed more variability. Compared to classification using chi-squared minimization or the supervised neural network, the galaxies classed together by the self-organizing map had more similar spectra. The class ordering provided by the one-dimensional self-organizing maps corresponds to an ordering in physical properties, a potentially important feature for the exploration of large datasets.
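    A one-dimensional self-organizing map of the kind used here can be sketched in a few lines: units compete for each input, and the best-matching unit and its map neighbours move toward it, so nearby units end up representing similar inputs. The following is a hedged toy illustration (unit count, decay schedules, and the 2-D toy "templates" are invented for the example, not taken from the paper):

```python
import math, random

def best_matching_unit(weights, x):
    """Index of the unit whose weight vector is closest to x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def train_som_1d(data, n_units=5, epochs=60, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D Kohonen self-organizing map on the vectors in `data`."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)              # learning rate decays
        sigma = 0.5 + sigma0 * (1.0 - t / epochs)  # neighbourhood shrinks
        for x in data:
            bmu = best_matching_unit(weights, x)
            for i in range(n_units):
                # Gaussian neighbourhood on the 1-D map topology.
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                weights[i] = [w + lr * h * (v - w) for w, v in zip(weights[i], x)]
    return weights

# Two well-separated toy 'template' clusters standing in for spectral classes.
templates = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
som = train_som_1d(templates)
```

    After training, classification is just a best-matching-unit lookup, and the two clusters map to different units along the 1-D chain.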

  8. Global stability of discrete-time recurrent neural networks with impulse effects

    International Nuclear Information System (INIS)

    Zhou, L; Li, C; Wan, J

    2008-01-01

This paper formulates and studies a class of discrete-time recurrent neural networks with impulse effects. A stability criterion, which characterizes the effects of impulse and stability property of the corresponding impulse-free networks on the stability of the impulsive networks in an aggregate form, is established. Two simplified and numerically tractable criteria are also provided.

  9. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Directory of Open Access Journals (Sweden)

    Francisco Javier Ordóñez

    2016-01-01

Full Text Available Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation.
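    The LSTM recurrent units at the core of such frameworks maintain a gated cell state across time steps, which is what lets the model track temporal dynamics of sensor features. A minimal single-unit forward pass (scalar toy weights chosen for illustration, not the framework's trained parameters):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One forward step of a single LSTM unit with scalar input and state.
    W maps each gate name to (input weight, recurrent weight, bias)."""
    i = sigmoid(W['i'][0] * x + W['i'][1] * h + W['i'][2])    # input gate
    f = sigmoid(W['f'][0] * x + W['f'][1] * h + W['f'][2])    # forget gate
    o = sigmoid(W['o'][0] * x + W['o'][1] * h + W['o'][2])    # output gate
    g = math.tanh(W['g'][0] * x + W['g'][1] * h + W['g'][2])  # candidate value
    c = f * c + i * g            # cell state: gated memory across time steps
    h = o * math.tanh(c)         # hidden state exposed to the next layer
    return h, c

# Toy weights (illustrative, not trained values).
W = {g: (1.0, 0.5, 0.0) for g in ('i', 'f', 'o', 'g')}
h = c = 0.0
for x in [0.5, -0.2, 0.8]:      # a toy single-channel sensor sequence
    h, c = lstm_step(x, h, c, W)
```

    In the full framework, stacks of such units run over the feature maps produced by the convolutional layers, one step per time-frequency frame.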

  10. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.

  11. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612

  12. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks.

    Science.gov (United States)

    Xia, Peng; Hu, Jie; Peng, Yinghong

    2017-10-25

A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals, which are influenced by many factors. In order to overcome the negative effects of variability in the signals, the proposed model employs a deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed into time-frequency frames as the input to the model. The limb movement is estimated by the model, which is trained with the gradient descent and backpropagation procedure. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs can improve the model performance compared with using CNNs alone. The model of deep architecture is promising in EMG decoding, and optimization of network structures can increase the accuracy and robustness. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  13. A novel word spotting method based on recurrent neural networks.

    Science.gov (United States)

    Frinken, Volkmar; Fischer, Andreas; Manmatha, R; Bunke, Horst

    2012-02-01

    Keyword spotting refers to the process of retrieving all instances of a given keyword from a document. In the present paper, a novel keyword spotting method for handwritten documents is described. It is derived from a neural network-based system for unconstrained handwriting recognition. As such it performs template-free spotting, i.e., it is not necessary for a keyword to appear in the training set. The keyword spotting is done using a modification of the CTC Token Passing algorithm in conjunction with a recurrent neural network. We demonstrate that the proposed systems outperform not only a classical dynamic time warping-based approach but also a modern keyword spotting system, based on hidden Markov models. Furthermore, we analyze the performance of the underlying neural networks when using them in a recognition task followed by keyword spotting on the produced transcription. We point out the advantages of keyword spotting when compared to classic text line recognition.

  14. Self-organized computation with unreliable, memristive nanodevices

    International Nuclear Information System (INIS)

    Snider, G S

    2007-01-01

Nanodevices have terrible properties for building Boolean logic systems: high defect rates, high variability, high death rates, drift, and (for the most part) only two terminals. Economical assembly requires that they be dynamical. We argue that strategies aimed at mitigating these limitations, such as defect avoidance/reconfiguration, or applying coding theory to circuit design, present severe scalability and reliability challenges. We instead propose to mitigate device shortcomings and exploit their dynamical character by building self-organizing, self-healing networks that implement massively parallel computations. The key idea is to exploit memristive nanodevice behavior to cheaply implement adaptive, recurrent networks, useful for complex pattern recognition problems. Pulse-based communication allows the designer to make trade-offs between power consumption and processing speed. Self-organization sidesteps the scalability issues of characterization, compilation and configuration. Network dynamics supplies a graceful response to device death. We present simulation results of such a network, a self-organized spatial filter array, demonstrating its performance as a function of defects and device variation.

  15. Self-organization of spatial patterning in human embryonic stem cells

    Science.gov (United States)

    Deglincerti, Alessia; Etoc, Fred; Ozair, M. Zeeshan; Brivanlou, Ali H.

    2017-01-01

    The developing embryo is a remarkable example of self-organization, where functional units are created in a complex spatio-temporal choreography. Recently, human embryonic stem cells (ESCs) have been used to recapitulate in vitro the self-organization programs that are executed in the embryo in vivo. This represents a unique opportunity to address self-organization in humans that is otherwise not addressable with current technologies. In this essay, we review the recent literature on self-organization of human ESCs, with a particular focus on two examples: formation of embryonic germ layers and neural rosettes. Intriguingly, both activation and elimination of TGFβ signaling can initiate self-organization, albeit with different molecular underpinnings. We discuss the mechanisms underlying the formation of these structures in vitro and explore future challenges in the field. PMID:26970615

  16. A NEW RECOGNITION TECHNIQUE NAMED SOMP BASED ON PALMPRINT USING NEURAL NETWORK BASED SELF ORGANIZING MAPS

    Directory of Open Access Journals (Sweden)

    A. S. Raja

    2012-08-01

Full Text Available The word biometrics refers to the use of physiological or biological characteristics of humans to recognize and verify the identity of an individual. Palmprint has become a new class of human biometrics for passive identification, with uniqueness and stability. It is considered reliable due to the lack of expressions and the lesser effect of aging. In this manuscript, a new palmprint-based biometric system built on neural network self-organizing maps (SOM) is presented. The method is named SOMP. The paper shows that the proposed SOMP method improves the performance and robustness of recognition. The proposed method is applied to a variety of datasets and the results are shown.

  17. Global exponential stability for reaction-diffusion recurrent neural networks with multiple time varying delays

    International Nuclear Information System (INIS)

    Lou, X.; Cui, B.

    2008-01-01

    In this paper we consider the problem of exponential stability for recurrent neural networks with multiple time varying delays and reaction-diffusion terms. The activation functions are supposed to be bounded and globally Lipschitz continuous. By means of Lyapunov functional, sufficient conditions are derived, which guarantee global exponential stability of the delayed neural network. Finally, a numerical example is given to show the correctness of our analysis. (author)

  18. A self-organized learning strategy for object recognition by an embedded line of attraction

    Science.gov (United States)

    Seow, Ming-Jung; Alex, Ann T.; Asari, Vijayan K.

    2012-04-01

    For humans, a picture is worth a thousand words, but to a machine, it is just a seemingly random array of numbers. Although machines are very fast and efficient, they are vastly inferior to humans for everyday information processing. Algorithms that mimic the way the human brain computes and learns may be the solution. In this paper we present a theoretical model based on the observation that images of similar visual perceptions reside in a complex manifold in an image space. The perceived features are often highly structured and hidden in a complex set of relationships or high-dimensional abstractions. To model the pattern manifold, we present a novel learning algorithm using a recurrent neural network. The brain memorizes information using a dynamical system made of interconnected neurons. Retrieval of information is accomplished in an associative sense. It starts from an arbitrary state that might be an encoded representation of a visual image and converges to another state that is stable. The stable state is what the brain remembers. In designing a recurrent neural network, it is usually of prime importance to guarantee the convergence in the dynamics of the network. We propose to modify this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image belonging to a different category. That is, the identification of an instability mode is an indication that a presented pattern is far away from any stored pattern and therefore cannot be associated with current memories. These properties can be used to circumvent the plasticity-stability dilemma by using the fluctuating mode as an indicator to create new states. We capture this behavior using a novel neural architecture and learning algorithm, in which the system performs self-organization utilizing a stability mode and an instability mode for the dynamical system. 

  19. Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words

    KAUST Repository

    Qaralleh, Esam

    2013-10-01

Artificial neural networks have the ability to learn by example and are capable of solving problems that are hard to solve using ordinary rule-based programming. They have many design parameters that affect their performance, such as the number and sizes of the hidden layers. Large sizes are slow and small sizes are generally not accurate. Tuning the neural network size is a hard task because the design space is often large and training is often a long process. We use design-of-experiments techniques to tune the recurrent neural network used in an Arabic handwriting recognition system. We show that the best results are achieved with three hidden layers and two subsampling layers. To tune the sizes of these five layers, we use fractional factorial experiment design to limit the number of experiments to a feasible number. Moreover, we replicate the experiment configuration multiple times to overcome the randomness in the training process. The accuracy and time measurements are analyzed and modeled. The two models are then used to locate network sizes that are on the Pareto optimal frontier. The approach described in this paper reduces the label error from 26.2% to 19.8%.
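    The final step, locating network sizes on the Pareto optimal frontier of the accuracy and time models, amounts to keeping only configurations that no other configuration beats on both objectives at once. A small sketch (the example points are invented, not the paper's measurements):

```python
def pareto_front(points):
    """Keep (error, time) configurations not dominated in both objectives
    (lower is better for both)."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]

# Invented (label error %, training time) measurements for four configurations.
configs = [(26.2, 10.0), (19.8, 14.0), (22.0, 20.0), (30.0, 8.0)]
front = pareto_front(configs)
print(front)  # (22.0, 20.0) drops out: it is dominated by (19.8, 14.0)
```

    A designer then picks a point on the frontier according to how much training time an extra point of accuracy is worth.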

  20. 9th Workshop on Self-Organizing Maps

    CERN Document Server

    Príncipe, José; Zegers, Pablo

    2013-01-01

    Self-organizing maps (SOMs) were developed by Teuvo Kohonen in the early eighties. Since then more than 10,000 works have been based on SOMs. SOMs are unsupervised neural networks useful for clustering and visualization purposes. Many SOM applications have been developed in engineering and science, and other fields. This book contains refereed papers presented at the 9th Workshop on Self-Organizing Maps (WSOM 2012) held at the Universidad de Chile, Santiago, Chile, on December 12-14, 2012. The workshop brought together researchers and practitioners in the field of self-organizing systems. Among the book chapters there are excellent examples of the use of SOMs in agriculture, computer science, data visualization, health systems, economics, engineering, social sciences, text and image analysis, and time series analysis. Other chapters present the latest theoretical work on SOMs as well as Learning Vector Quantization (LVQ) methods.

  1. Self-Organization of Spatial Patterning in Human Embryonic Stem Cells.

    Science.gov (United States)

    Deglincerti, Alessia; Etoc, Fred; Ozair, M Zeeshan; Brivanlou, Ali H

    2016-01-01

The developing embryo is a remarkable example of self-organization, where functional units are created in a complex spatiotemporal choreography. Recently, human embryonic stem cells (ESCs) have been used to recapitulate in vitro the self-organization programs that are executed in the embryo in vivo. This represents a unique opportunity to address self-organization in humans that is otherwise not addressable with current technologies. In this chapter, we review the recent literature on self-organization of human ESCs, with a particular focus on two examples: formation of embryonic germ layers and neural rosettes. Intriguingly, both activation and elimination of TGFβ signaling can initiate self-organization, albeit with different molecular underpinnings. We discuss the mechanisms underlying the formation of these structures in vitro and explore future challenges in the field. © 2016 Elsevier Inc. All rights reserved.

  2. Training the Recurrent neural network by the Fuzzy Min-Max algorithm for fault prediction

    International Nuclear Information System (INIS)

    Zemouri, Ryad; Racoceanu, Daniel; Zerhouni, Noureddine; Minca, Eugenia; Filip, Florin

    2009-01-01

In this paper, we present a training technique for a Recurrent Radial Basis Function (RRBF) neural network for fault prediction. We use the Fuzzy Min-Max technique to initialize the k centers of the RRBF neural network. The k-means algorithm is then applied to calculate the centers that minimize the mean square error of the prediction task. The performance of the k-means algorithm is thus boosted by the Fuzzy Min-Max technique.
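    The k-means refinement step can be sketched as follows; the supplied initial centers stand in for the Fuzzy Min-Max initialization, and the 1-D toy data are invented for the example:

```python
def kmeans_1d(data, centers, iters=20):
    """Lloyd's k-means on 1-D data, refining the supplied initial centers
    (which here stand in for a Fuzzy Min-Max initialization)."""
    centers = list(centers)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in centers]
        for x in data:
            j = min(range(len(centers)), key=lambda i: (x - centers[i]) ** 2)
            clusters[j].append(x)
        # Move each center to the mean of its cluster (keep empty ones fixed).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
print(kmeans_1d(data, [0.0, 10.0]))  # -> [1.0, 5.0]
```

    The quality of the initial centers matters: k-means only refines locally, which is why a good initialization scheme boosts the final prediction error.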

  3. Synchronization of chaotic recurrent neural networks with time-varying delays using nonlinear feedback control

    International Nuclear Information System (INIS)

    Cui Baotong; Lou Xuyang

    2009-01-01

In this paper, a new method to synchronize two identical chaotic recurrent neural networks is proposed. Using the drive-response concept, a nonlinear feedback control law is derived to achieve the state synchronization of the two identical chaotic neural networks. Furthermore, based on the Lyapunov method, a delay independent sufficient synchronization condition in terms of linear matrix inequality (LMI) is obtained. A numerical example with graphical illustrations is given to illuminate the presented synchronization scheme.

  4. Delay-slope-dependent stability results of recurrent neural networks.

    Science.gov (United States)

    Li, Tao; Zheng, Wei Xing; Lin, Chong

    2011-12-01

    By using the fact that the neuron activation functions are sector bounded and nondecreasing, this brief presents a new method, named the delay-slope-dependent method, for stability analysis of a class of recurrent neural networks with time-varying delays. This method includes more information on the slope of neuron activation functions and fewer matrix variables in the constructed Lyapunov-Krasovskii functional. Then some improved delay-dependent stability criteria with less computational burden and conservatism are obtained. Numerical examples are given to illustrate the effectiveness and the benefits of the proposed method.

  5. Recurrent Neural Network Applications for Astronomical Time Series

    Science.gov (United States)

    Protopapas, Pavlos

    2017-06-01

The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize for irregular time series. In this talk, I will describe two recurrent neural network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to set hyperparameters correctly for a stable and performant solution: we circumvent the obstacle of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.
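    What makes ESN hyperparameter tuning tractable at all is the echo state property: with a suitably scaled random reservoir, the influence of the initial state washes out, so only the fixed input and reservoir weights and a trained linear readout matter. A hedged sketch of that property (reservoir size and scaling are illustrative choices; row-sum scaling is a crude stand-in for spectral-radius control):

```python
import math, random

def make_reservoir(n=20, scale=0.8, seed=1):
    """Random reservoir whose weight rows sum to at most `scale` in absolute
    value, guaranteeing a contractive state update."""
    rng = random.Random(seed)
    W = [[rng.uniform(-1.0, 1.0) * scale / n for _ in range(n)] for _ in range(n)]
    w_in = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    return W, w_in

def step(W, w_in, state, u):
    """One reservoir update: state <- tanh(W state + w_in u)."""
    return [math.tanh(sum(W[i][j] * state[j] for j in range(len(state))) + w_in[i] * u)
            for i in range(len(state))]

W, w_in = make_reservoir()
rng = random.Random(2)
inputs = [rng.uniform(-1.0, 1.0) for _ in range(50)]
s1, s2 = [0.5] * 20, [-0.5] * 20          # two very different initial states
for u in inputs:
    s1, s2 = step(W, w_in, s1, u), step(W, w_in, s2, u)
diff = max(abs(a - b) for a, b in zip(s1, s2))   # initial gap washes out
```

    The spectral-radius-like `scale` parameter is exactly the kind of hyperparameter the Bayesian optimization in the talk tunes: too large and the echo state property is lost, too small and the reservoir forgets useful history.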

  6. A one-layer recurrent neural network for non-smooth convex optimization subject to linear inequality constraints

    International Nuclear Information System (INIS)

    Liu, Xiaolan; Zhou, Mi

    2016-01-01

    In this paper, a one-layer recurrent network is proposed for solving a non-smooth convex optimization subject to linear inequality constraints. Compared with the existing neural networks for optimization, the proposed neural network is capable of solving more general convex optimization with linear inequality constraints. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds.
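    The role of the lower bounds on the designed parameters can be illustrated with a toy penalty-based flow (a hedged stand-in for the paper's network, on an invented one-dimensional problem): the penalty weight sigma must exceed the objective's subgradient bound for the trajectory to settle on the constrained optimum.

```python
def penalty_flow(x0, sigma=2.0, dt=0.01, steps=1000):
    """Euler simulation of a toy penalty-based recurrent network solving
    minimize |x - 3| subject to x <= 1 (constrained optimum x* = 1).
    Convergence needs sigma above the objective's subgradient bound (1)."""
    x = x0
    for _ in range(steps):
        sign = (x > 3) - (x < 3)              # subgradient of |x - 3|
        penalty = sigma if x > 1 else 0.0     # gradient of the constraint penalty
        x += dt * (-sign - penalty)
    return x
```

    With sigma = 2 > 1 the state is pushed up toward the feasible boundary from below and down through the infeasible region from above, so trajectories from either side settle near x = 1; with sigma below the bound, the objective term would drag the state into the infeasible region.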

  7. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    Science.gov (United States)

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.

  8. Stability results for stochastic delayed recurrent neural networks with discrete and distributed delays

    Science.gov (United States)

    Chen, Guiling; Li, Dingshi; Shi, Lin; van Gaans, Onno; Verduyn Lunel, Sjoerd

    2018-03-01

    We present new conditions for asymptotic stability and exponential stability of a class of stochastic recurrent neural networks with discrete and distributed time-varying delays. Our approach is based on the fixed point method, which does not resort to any Lyapunov function or functional. Our results require neither the boundedness, monotonicity and differentiability of the activation functions nor the differentiability of the time-varying delays. In particular, a class of neural networks without stochastic perturbations is also considered. Examples are given to illustrate our main results.
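    For orientation, a generic member of this class of stochastic recurrent networks with both discrete and distributed delays can be written in Itô form roughly as follows (the paper's exact model and assumptions may differ):

    ```latex
    % Generic It\^o-form network with discrete delays \tau_{ij}(t) and a
    % distributed-delay (integral) term; the paper's exact model may differ.
    \mathrm{d}x_i(t) = \Big[ -a_i x_i(t)
        + \sum_{j=1}^{n} b_{ij}\, f_j\big(x_j(t-\tau_{ij}(t))\big)
        + \sum_{j=1}^{n} c_{ij} \int_{t-\rho_{ij}(t)}^{t} g_j\big(x_j(s)\big)\,\mathrm{d}s
        \Big]\,\mathrm{d}t
        + \sum_{j=1}^{n} \sigma_{ij}\big(x_j(t),\, x_j(t-\tau_{ij}(t))\big)\,\mathrm{d}w_j(t)
    ```

    Here the first sum carries the discrete delays, the integral carries the distributed delays, and the last term is the stochastic perturbation driven by a Wiener process; dropping it recovers the deterministic subclass the abstract also mentions.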

  9. Neural basis of individualistic and collectivistic views of self.

    Science.gov (United States)

    Chiao, Joan Y; Harada, Tokiko; Komeda, Hidetsugu; Li, Zhang; Mano, Yoko; Saito, Daisuke; Parrish, Todd B; Sadato, Norihiro; Iidaka, Tetsuya

    2009-09-01

    Individualism and collectivism refer to cultural values that influence how people construe themselves and their relation to the world. Individualists perceive themselves as stable entities, autonomous from other people and their environment, while collectivists view themselves as dynamic entities, continually defined by their social context and relationships. Despite rich understanding of how individualism and collectivism influence social cognition at a behavioral level, little is known about how these cultural values modulate neural representations underlying social cognition. Using cross-cultural functional magnetic resonance imaging (fMRI), we examined whether the cultural values of individualism and collectivism modulate neural activity within medial prefrontal cortex (MPFC) during processing of general and contextual self judgments. Here, we show that neural activity within the anterior rostral portion of the MPFC during processing of general and contextual self judgments positively predicts how individualistic or collectivistic a person is across cultures. These results reveal two kinds of neural representations of self (e.g., a general self and a contextual self) within MPFC and demonstrate how cultural values of individualism and collectivism shape these neural representations. (c) 2008 Wiley-Liss, Inc.

  10. Robust sliding mode control for uncertain servo system using friction observer and recurrent fuzzy neural networks

    International Nuclear Information System (INIS)

    Han, Seong Ik; Jeong, Chan Se; Yang, Soon Yong

    2012-01-01

    A robust positioning control scheme has been developed based on sliding mode control, using a friction parameter observer and recurrent fuzzy neural networks. As the dynamic friction model, the LuGre model is adopted for friction compensation because it is known to sufficiently capture the properties of nonlinear dynamic friction. The developed friction parameter observer has a simple structure and estimates the parameters of the LuGre friction model well. In addition, an approximation method for the system uncertainty is developed using recurrent fuzzy neural networks to improve the positioning precision. Simulations and experiments verify the performance of the proposed robust control scheme.

  11. Novel criteria for global exponential periodicity and stability of recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Song Qiankun

    2008-01-01

    In this paper, the global exponential periodicity and stability of recurrent neural networks with time-varying delays are investigated by applying the ideas of vector Lyapunov functions, M-matrix theory and inequality techniques. We assume neither global Lipschitz conditions on the activation functions nor differentiability of the time-varying delays, which were needed in other papers. Several novel criteria are found to ascertain the existence, uniqueness and global exponential stability of the periodic solution for recurrent neural networks with time-varying delays. Moreover, the exponential convergence rate index is estimated, which depends on the system parameters. Some previous results are improved and generalized, and an example is given to show the effectiveness of our method.

  12. Robust sliding mode control for uncertain servo system using friction observer and recurrent fuzzy neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seong Ik [Pusan National University, Busan (Korea, Republic of); Jeong, Chan Se; Yang, Soon Yong [University of Ulsan, Ulsan (Korea, Republic of)

    2012-04-15

    A robust positioning control scheme has been developed based on sliding mode control, using a friction parameter observer and recurrent fuzzy neural networks. As the dynamic friction model, the LuGre model is adopted for friction compensation because it is known to sufficiently capture the properties of nonlinear dynamic friction. The developed friction parameter observer has a simple structure and estimates the parameters of the LuGre friction model well. In addition, an approximation method for the system uncertainty is developed using recurrent fuzzy neural networks to improve the positioning precision. Simulations and experiments verify the performance of the proposed robust control scheme.

  13. Religious beliefs influence neural substrates of self-reflection in Tibetans.

    Science.gov (United States)

    Wu, Yanhong; Wang, Cheng; He, Xi; Mao, Lihua; Zhang, Li

    2010-06-01

    Previous transcultural neuroimaging studies have shown that the neural substrates of self-reflection can be shaped by different cultures. There are few studies, however, on the neural activity of self-reflection where religion is viewed as a form of cultural expression. The present study examined the self-processing of two Chinese ethnic groups (Han and Tibetan) to investigate the significant role of religion on the functional anatomy of self-representation. We replicated the previous results in Han participants with the ventral medial prefrontal cortex and left anterior cingulate cortex showing stronger activation in self-processing when compared with other-processing conditions. However, no typical self-reference pattern was identified in Tibetan participants on behavioral or neural levels. This could be explained by the minimal subjective sense of 'I-ness' in Tibetan Buddhists. Our findings lend support to the presumed role of culture and religion in shaping the neural substrate of self.

  14. A non-penalty recurrent neural network for solving a class of constrained optimization problems.

    Science.gov (United States)

    Hosseini, Alireza

    2016-01-01

    In this paper, we explain a methodology to analyze the convergence of some differential-inclusion-based neural networks for solving nonsmooth optimization problems. For a general differential inclusion, we show that if its right-hand-side set-valued map satisfies certain conditions, then the solution trajectory of the differential inclusion converges to the optimal solution set of the corresponding optimization problem. Based on the obtained methodology, we introduce a new recurrent neural network for solving nonsmooth optimization problems. The objective function need not be convex on R^n, nor does the new neural network model require any penalty parameter. We compare our new method with some penalty-based and non-penalty-based models. Moreover, for differentiable cases, we present a circuit diagram of the new neural network. Copyright © 2015 Elsevier Ltd. All rights reserved.
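    The underlying idea, minus the constraint handling, is a flow dx/dt in -∂f(x) whose trajectories settle on minimizers. A crude Euler discretization for a one-dimensional nonsmooth example (this is only a schematic stand-in for the paper's differential-inclusion network, which is more elaborate):

    ```python
    def subgradient_flow(subgrad, x0, step=0.01, iters=2000):
        """Euler discretization of dx/dt = -s(x), s(x) a subgradient of f at x.

        A crude one-dimensional stand-in for a differential-inclusion-based
        neural network: the state trajectory drifts toward the minimizer even
        though f is nondifferentiable there.
        """
        x = x0
        for _ in range(iters):
            x -= step * subgrad(x)
        return x

    # Nonsmooth example: f(x) = |x - 3|, with subgradient sign(x - 3).
    sg = lambda x: (x > 3) - (x < 3)
    x_star = subgradient_flow(sg, x0=0.0)
    ```

    With a fixed step the state ends up oscillating within one step-length of the minimizer at x = 3; the continuous-time inclusion (or a diminishing step) removes that residual chatter.
    
    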

  15. Online Sequence Training of Recurrent Neural Networks with Connectionist Temporal Classification

    OpenAIRE

    Hwang, Kyuyeon; Sung, Wonyong

    2015-01-01

    Connectionist temporal classification (CTC) based supervised sequence training of recurrent neural networks (RNNs) has shown great success in many machine learning areas including end-to-end speech and handwritten character recognition. For the CTC training, however, it is required to unroll (or unfold) the RNN by the length of an input sequence. This unrolling requires a lot of memory and hinders a small footprint implementation of online learning or adaptation. Furthermore, the length of tr...

  16. Self-organization, Networks, Future

    Directory of Open Access Journals (Sweden)

    T. S. Akhromeyeva

    2013-01-01

    Full Text Available This paper presents an analytical review of a conference dedicated to the great scientist, brilliant lecturer, and outstanding educator Sergei Kapitsa, held in November 2012. The focus of this forum was on problems of self-organization and the paradigm of network structures. The use of networks in the context of national defense, economics, and management of mass consciousness was discussed. The analysis of neural networks in technical systems, in the structure of the brain, as well as in the space of knowledge, information, and behavioral strategies plays an important role. One of the conference's purposes was to organize an online community in Russia and to identify the most promising directions in this field. Some of them are presented in this paper.

  17. Organized versus self-organized criticality in the abelian sandpile model

    OpenAIRE

    Fey-den Boer, AC Anne; Redig, FHJ Frank

    2005-01-01

    We define stabilizability of an infinite volume height configuration and of a probability measure on height configurations. We show that for high enough densities, a probability measure cannot be stabilized. We also show that in some sense the thermodynamic limit of the uniform measures on the recurrent configurations of the abelian sandpile model (ASM) is a maximal element of the set of stabilizable measures. In that sense the self-organized critical behavior of the ASM can be understood in ...

  18. Different-Level Simultaneous Minimization Scheme for Fault Tolerance of Redundant Manipulator Aided with Discrete-Time Recurrent Neural Network.

    Science.gov (United States)

    Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang

    2017-01-01

    By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and the robot dynamics into account, is presented and investigated for fault-tolerant motion planning of a redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP, and the corresponding discrete-time recurrent neural network.

  19. Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses

    Directory of Open Access Journals (Sweden)

    Mattia Rigotti

    2010-10-01

    Full Text Available Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide simple and general principles for designing attractor neural networks that perform complex computation.

  20. The super-Turing computational power of plastic recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of the same precise super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  1. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    DEFF Research Database (Denmark)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin

    2015-01-01

    ...dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking...

  2. Precipitation Nowcast using Deep Recurrent Neural Network

    Science.gov (United States)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2016-12-01

    An accurate precipitation nowcast (0-6 hours) with a fine temporal and spatial resolution has always been an important prerequisite for flood warning, streamflow prediction and risk management. Most of the popular approaches used for forecasting precipitation fall into two groups: forecasts that rely on numerical modeling of the physical dynamics of the atmosphere, and forecasts based on empirical and statistical regression models derived by local hydrologists or meteorologists. Given the recent advances in artificial intelligence, in this study a powerful Deep Recurrent Neural Network, termed the Long Short-Term Memory (LSTM) model, is used to extract the patterns and forecast the spatial and temporal variability of Cloud Top Brightness Temperature (CTBT) observed from the GOES satellite. Then, a 0-6 hour precipitation nowcast is produced using the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm, in which the CTBT nowcast is used as the PERSIANN algorithm's raw input. Two case studies over the continental U.S. demonstrate the improvement of the proposed approach over a classical Feed Forward Neural Network and a couple of simple regression models. The advantages and disadvantages of the proposed method are summarized with regard to its capability of pattern recognition through time, handling of vanishing gradients during model learning, and working with sparse data. The studies show that the LSTM model performs better than the other methods and is able to learn the temporal evolution of precipitation events over more than 1000 time lags. The uniqueness of PERSIANN's algorithm enables an alternative precipitation nowcast approach, as demonstrated in this study, in which the CTBT prediction is produced and used as the input for generating the precipitation nowcast.
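    The gating structure that lets an LSTM retain information across many time lags can be sketched at toy scale; the version below uses scalar states and arbitrary weights purely for illustration (real nowcasting models operate on large vector states learned from CTBT imagery):

    ```python
    import math

    def lstm_step(x, h, c, W):
        """One LSTM step for scalar input/state (toy sizes). W maps weight
        names to scalars; all values here are arbitrary, not learned."""
        sig = lambda z: 1.0 / (1.0 + math.exp(-z))
        i = sig(W['wi'] * x + W['ui'] * h + W['bi'])        # input gate
        f = sig(W['wf'] * x + W['uf'] * h + W['bf'])        # forget gate
        o = sig(W['wo'] * x + W['uo'] * h + W['bo'])        # output gate
        g = math.tanh(W['wg'] * x + W['ug'] * h + W['bg'])  # candidate value
        c = f * c + i * g        # cell state: additive path carries long-range memory
        h = o * math.tanh(c)     # hidden state / output
        return h, c

    W = {k: 0.5 for k in ('wi', 'ui', 'bi', 'wf', 'uf', 'bf',
                          'wo', 'uo', 'bo', 'wg', 'ug', 'bg')}
    h = c = 0.0
    for x in [1.0, 0.5, -0.2]:   # a tiny stand-in "brightness temperature" sequence
        h, c = lstm_step(x, h, c, W)
    ```

    The additive update of the cell state `c` (rather than repeated multiplication by a weight matrix) is what mitigates the vanishing-gradient problem mentioned in the abstract.
    
    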

  3. Self-control with spiking and non-spiking neural networks playing games.

    Science.gov (United States)

    Christodoulou, Chris; Banfield, Gaye; Cleanthous, Aristodemos

    2010-01-01

    Self-control can be defined as choosing a large delayed reward over a small immediate reward, while precommitment is the making of a choice with the specific aim of denying oneself future choices. Humans recognise that they have self-control problems and attempt to overcome them by applying precommitment. Problems in exercising self-control suggest a conflict between cognition and motivation, which has been linked to competition between higher and lower brain functions (representing the frontal lobes and the limbic system respectively). This premise of an internal process conflict led to a behavioural model being proposed, based on which we implemented a computational model for studying and explaining self-control through precommitment behaviour. Our model consists of two neural networks, initially non-spiking and then spiking ones, representing the higher and lower brain systems viewed as cooperating for the benefit of the organism. The non-spiking neural networks are of a simple feed-forward multilayer type with reinforcement learning: one with a selective bootstrap weight update rule, which is seen as myopic, representing the lower brain, and the other with the temporal difference weight update rule, which is seen as far-sighted, representing the higher brain. The spiking neural networks are implemented with leaky integrate-and-fire neurons with learning based on stochastic synaptic transmission. The differentiating element between the two brain centres in this implementation is based on the memory of past actions, determined by an eligibility trace time constant. As the structure of the self-control problem can be likened to the Iterated Prisoner's Dilemma (IPD) game, in that cooperation is to defection what self-control is to impulsiveness or what compromising is to insisting, we implemented the neural networks as two players, learning simultaneously but independently, competing in the IPD game. With a technique resembling the precommitment effect, whereby the

  4. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    Science.gov (United States)

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges of the postgenomic era. The Recurrent Neural Network is one of the most popular yet simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it underperforms for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with a Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators, while the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. In both cases, however, the proposed method incurs a higher computational time due to the hybrid optimization process.
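    The core Cuckoo Search loop, stripped of the Flower Pollination hybridization and applied to a toy objective instead of RNN parameters, looks roughly like this (the Gaussian step is a simplified stand-in for the Lévy flights used in the full algorithm, and all tuning constants are illustrative):

    ```python
    import random

    def cuckoo_search(objective, dim=2, n_nests=15, iters=200, pa=0.25, seed=1):
        """Schematic Cuckoo Search minimizing `objective` over R^dim."""
        rng = random.Random(seed)
        nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
        best = min(nests, key=objective)[:]
        for _ in range(iters):
            for k in range(n_nests):
                # New egg: a flight around the current best (Gaussian stand-in
                # for a Levy flight), kept if it beats the nest it lands in.
                cand = [best[j] + 0.1 * rng.gauss(0, 1) for j in range(dim)]
                if objective(cand) < objective(nests[k]):
                    nests[k] = cand
            # A fraction pa of the worst nests is abandoned and rebuilt randomly.
            nests.sort(key=objective)
            for k in range(int(pa * n_nests)):
                nests[-(k + 1)] = [rng.uniform(-5, 5) for _ in range(dim)]
            best = min(nests + [best], key=objective)
        return best

    sphere = lambda v: sum(x * x for x in v)   # toy objective
    best = cuckoo_search(sphere)
    ```

    In the paper's setting, each "nest" would instead encode a candidate set of regulators, with the continuous RNN parameters tuned by the Flower Pollination step.
    
    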

  5. Recurrent Neural Networks to Correct Satellite Image Classification Maps

    Science.gov (United States)

    Maggiori, Emmanuel; Charpiat, Guillaume; Tarabalka, Yuliya; Alliez, Pierre

    2017-09-01

    While initially devised for image categorization, convolutional neural networks (CNNs) are being increasingly used for the pixelwise semantic labeling of images. However, the very nature of the most common CNN architectures makes them good at recognizing objects but poor at localizing them precisely. This problem is magnified in the context of aerial and satellite image labeling, where spatially fine object outlining is of paramount importance. Different iterative enhancement algorithms have been presented in the literature to progressively improve the coarse CNN outputs, seeking to sharpen object boundaries around real image edges. However, one must carefully design, choose and tune such algorithms. Instead, our goal is to directly learn the iterative process itself. For this, we formulate a generic iterative enhancement process inspired by partial differential equations, and observe that it can be expressed as a recurrent neural network (RNN). Consequently, we train such a network from manually labeled data for our enhancement task. In a series of experiments we show that our RNN effectively learns an iterative process that significantly improves the quality of satellite image classification maps.
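    The observation that an iterative PDE-style process is an RNN with tied weights can be seen in a toy 1-D setting: the same update is applied at every step, so unrolling the loop yields a recurrent network. For simplicity the sketch below uses plain diffusion (a discrete heat equation), whereas the paper learns an update from data that sharpens boundaries rather than smoothing:

    ```python
    def enhance(u, steps=50, dt=0.2):
        """Iterate the discrete heat-equation update u_t = u_xx on a 1-D signal.

        Every step applies the identical rule, so the unrolled loop is a
        recurrent network with tied weights; endpoints are held fixed.
        """
        u = list(u)
        for _ in range(steps):
            # Discrete Laplacian (second difference) in the interior.
            lap = [0.0] + [u[i - 1] - 2 * u[i] + u[i + 1]
                           for i in range(1, len(u) - 1)] + [0.0]
            u = [ui + dt * li for ui, li in zip(u, lap)]
        return u

    noisy = [0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0]
    smooth = enhance(noisy)
    ```

    Each recurrent step here is a fixed local stencil; replacing that stencil with learned, input-conditioned coefficients is what turns the iteration into the trainable RNN the paper describes.
    
    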

  6. Obtaining parton distribution functions from self-organizing maps

    International Nuclear Information System (INIS)

    Honkanen, H.; Liuti, S.; Loitiere, Y.C.; Brogan, D.; Reynolds, P.

    2007-01-01

    We present an alternative algorithm to global fitting procedures for constructing Parton Distribution Function parametrizations. The proposed algorithm uses Self-Organizing Maps which, in contrast to standard neural networks, are based on competitive learning. Self-Organizing Maps generate a non-uniform projection from a high-dimensional data space onto a low-dimensional one (usually 1 or 2 dimensions) by clustering similar PDF representations together. The SOMs are trained on progressively narrower selections of data samples. The selection criterion is that of convergence towards a neighborhood of the experimental data. All available data sets on deep inelastic scattering in the kinematical region 0.001 ≤ x ≤ 0.75 and 1 ≤ Q² ≤ 100 GeV², with a cut on the final-state invariant mass W² ≥ 10 GeV², were implemented. The proposed fitting procedure, in contrast to standard neural network approaches, allows for increased control of the systematic bias by enabling the user to directly control the data selection procedure at various stages of the process. (author)
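    Competitive learning in a SOM reduces to a best-matching-unit search plus a neighborhood-weighted update. A minimal 1-D scalar version (the paper's maps are two-dimensional and trained on PDF parametrizations; the learning-rate and radius schedules here are arbitrary choices):

    ```python
    import math
    import random

    def train_som(data, n_units=5, iters=500, seed=0):
        """Minimal 1-D self-organizing map on scalar data."""
        rng = random.Random(seed)
        w = [rng.uniform(min(data), max(data)) for _ in range(n_units)]
        for t in range(iters):
            x = rng.choice(data)
            lr = 0.5 * (1 - t / iters)                       # decaying learning rate
            radius = max(1.0, n_units / 2 * (1 - t / iters)) # shrinking neighborhood
            # Competitive step: find the best-matching unit (BMU).
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            # Cooperative step: pull the BMU and its map neighbors toward x.
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                w[i] += lr * h * (x - w[i])
        return w

    # Two clusters of scalar "data points"; the trained units spread to cover both.
    weights = train_som([0.0, 0.1, 0.9, 1.0] * 10)
    ```

    The neighborhood function `h` is what distinguishes the SOM from plain competitive learning: nearby units on the map move together, so the trained map preserves topology, i.e. similar inputs end up in nearby cells.
    
    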

  7. Very deep recurrent convolutional neural network for object recognition

    Science.gov (United States)

    Brahimi, Sourour; Ben Aoun, Najib; Ben Amar, Chokri

    2017-03-01

    In recent years, computer vision has become a very active field, encompassing methods for processing, analyzing, and understanding images. Among the most challenging problems in computer vision are image classification and object recognition. This paper presents a new approach to the object recognition task that builds on the success of the Very Deep Convolutional Neural Network for object recognition by adding recurrent connections to its convolutional layers. The proposed approach was evaluated on two object recognition benchmarks: Pascal VOC 2007 and CIFAR-10. The experimental results demonstrate the efficiency of our method in comparison with state-of-the-art methods.

  8. The neural sociometer: brain mechanisms underlying state self-esteem.

    Science.gov (United States)

    Eisenberger, Naomi I; Inagaki, Tristen K; Muscatell, Keely A; Byrne Haltom, Kate E; Leary, Mark R

    2011-11-01

    On the basis of the importance of social connection for survival, humans may have evolved a "sociometer"-a mechanism that translates perceptions of rejection or acceptance into state self-esteem. Here, we explored the neural underpinnings of the sociometer by examining whether neural regions responsive to rejection or acceptance were associated with state self-esteem. Participants underwent fMRI while viewing feedback words ("interesting," "boring") ostensibly chosen by another individual (confederate) to describe the participant's previously recorded interview. Participants rated their state self-esteem in response to each feedback word. Results demonstrated that greater activity in rejection-related neural regions (dorsal ACC, anterior insula) and mentalizing regions was associated with lower-state self-esteem. Additionally, participants whose self-esteem decreased from prescan to postscan versus those whose self-esteem did not showed greater medial prefrontal cortical activity, previously associated with self-referential processing, in response to negative feedback. Together, the results inform our understanding of the origin and nature of our feelings about ourselves.

  9. Self-organizing map models of language acquisition

    Science.gov (United States)

    Li, Ping; Zhao, Xiaowei

    2013-01-01

    Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic parallel distributed processing architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper, we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development. We suggest future directions in which these models can be extended, to better connect with behavioral and neural data, and to make clear predictions in testing relevant psycholinguistic theories. PMID:24312061

  10. Analysis of recurrent neural networks for short-term energy load forecasting

    Science.gov (United States)

    Di Persio, Luca; Honchar, Oleksandr

    2017-11-01

    Short-term forecasts have recently gained increasing attention because of the rise of competitive electricity markets. In fact, short-term forecasts of possible future loads are fundamental to building efficient energy management strategies as well as to avoiding energy wastage. Such challenges are difficult to tackle from both a theoretical and an applied point of view; the latter requires sophisticated methods to manage multidimensional time series related to stochastic phenomena which are often highly interconnected. In the present work we first review novel approaches to energy load forecasting based on recurrent neural networks, focusing our attention on long short-term memory architectures (LSTMs). This type of artificial neural network has been widely applied to problems dealing with sequential data, as arise, e.g., in socio-economic settings, text recognition, and video signals, always showing its effectiveness in modeling complex temporal data. Moreover, we consider different novel variations of basic LSTMs, such as the sequence-to-sequence approach and bidirectional LSTMs, aiming at providing effective models for energy load data. Last but not least, we test all the described algorithms on real energy load data, showing not only that deep recurrent networks can be successfully applied to energy load forecasting, but also that this approach can be extended to other problems based on time series prediction.

  11. Optimal Formation of Multirobot Systems Based on a Recurrent Neural Network.

    Science.gov (United States)

    Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Yu, Junzhi; Tan, Min

    2016-02-01

    The optimal formation problem of multirobot systems is solved by a recurrent neural network in this paper. The desired formation is described by shape theory, which can generate a set of feasible formations that share the same relative relations among robots. Finding an optimal formation means finding, within the feasible formation set, the formation that has the minimum distance to the initial formation of the multirobot system. The formation problem is thereby transformed into an optimization problem. In addition, the orientation, scale, and admissible range of the formation can also be considered as constraints in the optimization problem. Furthermore, if all robots are identical, their positions in the system are exchangeable, and each robot does not necessarily move to one specific position in the formation. In this case, the optimal formation problem becomes a combinatorial optimization problem, whose optimal solution is very hard to obtain. Inspired by the penalty method, this combinatorial optimization problem can be approximately transformed into a convex optimization problem. Due to the involvement of the Euclidean norm in the distance, the objective functions of these optimization problems are nonsmooth. To solve these nonsmooth optimization problems efficiently, a recurrent neural network approach is employed, owing to its parallel computation ability. Finally, some simulations and experiments are given to validate the effectiveness and efficiency of the proposed optimal formation approach.
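    A translation-only toy version of the formation-matching objective shows where the nonsmoothness comes from: the cost is a sum of Euclidean norms, smoothed here with a small `eps`. The paper optimizes over rotation and scale as well, and solves the problem with a recurrent neural network rather than this explicit gradient loop:

    ```python
    import math

    def best_translation(current, shape, iters=500, step=0.02, eps=1e-6):
        """Find a translation (tx, ty) of the template `shape` minimizing the
        sum of (smoothed) Euclidean distances to the robots' current positions.

        The per-robot cost |shape_i + t - current_i| is nonsmooth at zero;
        adding eps to the norm keeps the gradient well-defined there.
        """
        tx = ty = 0.0
        for _ in range(iters):
            gx = gy = 0.0
            for (cx, cy), (sx, sy) in zip(current, shape):
                dx, dy = sx + tx - cx, sy + ty - cy
                d = math.hypot(dx, dy) + eps
                gx += dx / d     # gradient of the smoothed norm is the unit vector
                gy += dy / d
            tx -= step * gx
            ty -= step * gy
        return tx, ty

    # Three robots sitting exactly one unit away from the template positions.
    current = [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]
    shape = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    tx, ty = best_translation(current, shape)
    ```

    Because the offsets are identical here, the optimal translation is (1, 1); the fixed-step loop hovers within a step-length of it, which is exactly the kind of chatter a continuous-time recurrent-network solver avoids.
    
    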

  12. Self-reported recurrent pain and medicine use behaviours among 15-year olds

    DEFF Research Database (Denmark)

    Gobina, I; Villberg, J; Villerusa, A

    2015-01-01

    BACKGROUND: There is considerable variation in adolescent pain prevalence across epidemiological studies, with limited information on pain-related behaviours among adolescents, including medicine use. The aims of this study were (1) to examine the prevalence of recurrent pain among 15-year-old adolescents internationally; (2) to investigate the association between recurrent pain and medicine use behaviours among boys and girls; and (3) to evaluate the consistency of these associations across countries. METHODS: The World Health Organization (WHO) collaborative international Health Behaviour in School-aged Children 2009/2010 study collects data about self-reported aches and medicine use from 36,762 15-year-old adolescents from 22 countries/regions in Europe and the United States. Multi-level multivariate logistic regression, stratified by gender, was used to analyse the association between...

  13. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, with fault correction assumed to be a delayed process. On the other hand, the artificial neural network model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are made with respect to a real data set.

  14. Folk music style modelling by recurrent neural networks with long short term memory units

    OpenAIRE

    Sturm, Bob; Santos, João Felipe; Korshunova, Iryna

    2015-01-01

    We demonstrate two generative models created by training a recurrent neural network (RNN) with three hidden layers of long short-term memory (LSTM) units. This extends past work in numerous directions, including training deeper models with nearly 24,000 high-level transcriptions of folk tunes. We discuss our on-going work.

  15. A New Local Bipolar Autoassociative Memory Based on External Inputs of Discrete Recurrent Neural Networks With Time Delay.

    Science.gov (United States)

    Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang

    In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.

  16. Deep Recurrent Neural Networks for Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Abdulmajid Murad

    2017-11-01

    Full Text Available Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

  17. Deep Recurrent Neural Networks for Human Activity Recognition.

    Science.gov (United States)

    Murad, Abdulmajid; Pyun, Jae-Young

    2017-11-06

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.
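    As a concrete reminder of the recurrent building block used by such models, here is a minimal single-step LSTM cell in pure Python. The gate layout is the standard one; the sizes, random initial weights, and the toy input sequence are arbitrary assumptions, not the DRNN architectures evaluated in the paper.

    ```python
    import math, random

    random.seed(2)

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    H, D = 3, 2  # hidden size, input size

    def mat(r, c):
        return [[random.uniform(-0.3, 0.3) for _ in range(c)] for _ in range(r)]

    # one weight matrix per gate: input (i), forget (f), output (o), candidate (g)
    W = {g: mat(H, D + H) for g in "ifog"}

    def lstm_step(x, h, c):
        z = x + h  # concatenated [x; h]
        def lin(g):
            return [sum(W[g][i][j] * z[j] for j in range(D + H)) for i in range(H)]
        i_g, f_g, o_g = (list(map(sigmoid, lin(g))) for g in "ifo")
        cand = list(map(math.tanh, lin("g")))
        c_new = [f_g[k] * c[k] + i_g[k] * cand[k] for k in range(H)]
        h_new = [o_g[k] * math.tanh(c_new[k]) for k in range(H)]
        return h_new, c_new

    # run over a variable-length sequence; the final hidden state summarises it
    h, c = [0.0] * H, [0.0] * H
    for x in [[0.5, -0.1], [0.2, 0.4], [-0.3, 0.1]]:
        h, c = lstm_step(x, h, c)

    assert all(-1.0 < v < 1.0 for v in h)  # output gate keeps h tanh-bounded
    ```

    A unidirectional DRNN stacks such cells over time (and over layers); a bidirectional variant additionally runs a second pass over the reversed sequence and concatenates the two hidden states.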

  18. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    Science.gov (United States)

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al., Journal of Neuroscience, 20(17), 6594–6611, 2000; Qiu et al., Nature Neuroscience, 10(11), 1492–1499, 2007; Chen et al., Neuron, 82(3), 682–694, 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  19. Neural processing of short-term recurrence in songbird vocal communication.

    Directory of Open Access Journals (Sweden)

    Gabriël J L Beckers

    Full Text Available BACKGROUND: Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to short-term delivery statistics of artificial stimuli, but it is unknown if this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication. METHODOLOGY/PRINCIPAL FINDINGS: We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented to anesthetized birds sequences of frequently recurring calls interspersed with rare ones, and recorded, in parallel, action potential and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges leads to widespread and significant modulation in strength of neural responses. Such modulation is highly call-specific in secondary auditory areas, but not in the main thalamo-recipient, primary auditory area. CONCLUSIONS/SIGNIFICANCE: Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.

  20. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    Science.gov (United States)

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
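    A minimal sketch of the projection-type recurrent dynamics that such optimization networks implement, applied to a simple convex quadratic under box constraints (not the more general pseudoconvex class, and not the exact model of the paper; the objective, bounds, and step sizes are illustrative assumptions):

    ```python
    # Network state evolves as dx/dt = -x + P(x - a * grad_f(x)),
    # where P projects onto the feasible box; fixed points satisfy the
    # optimality condition x = P(x - a * grad_f(x)).

    def grad_f(x):
        # f(x) = (x0 - 2)^2 + (x1 + 1)^2, unconstrained minimiser at (2, -1)
        return [2 * (x[0] - 2.0), 2 * (x[1] + 1.0)]

    def project(x, lo=0.0, hi=1.0):
        # clip each coordinate into the box [lo, hi]
        return [min(max(v, lo), hi) for v in x]

    x = [0.5, 0.5]
    alpha, dt = 0.5, 0.1
    for _ in range(2000):  # Euler integration of the network dynamics
        target = project([xi - alpha * g for xi, g in zip(x, grad_f(x))])
        x = [xi + dt * (ti - xi) for xi, ti in zip(x, target)]

    # the constrained minimiser of f over [0,1]^2 is (1, 0)
    assert abs(x[0] - 1.0) < 1e-3 and abs(x[1] - 0.0) < 1e-3
    ```

    The "one-layer" character shows up here as a single state vector evolving under one projection-plus-gradient update, with no separate multiplier layer.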

  1. Optimizing Markovian modeling of chaotic systems with recurrent neural networks

    International Nuclear Information System (INIS)

    Cechin, Adelmo L.; Pechmann, Denise R.; Oliveira, Luiz P.L. de

    2008-01-01

    In this paper, we propose a methodology for optimizing the modeling of a one-dimensional chaotic time series with a Markov chain. The model is extracted from a recurrent neural network trained on the attractor reconstructed from the data set. Each state of the obtained Markov chain is a region of the reconstructed state space where the dynamics is approximated by a specific piecewise linear map, obtained from the network. The Markov chain represents the dynamics of the time series in its statistical essence. An application to a time series resulting from the Lorenz system is included.
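    The overall pipeline, partitioning the reconstructed state space into regions and counting transitions between them, can be sketched as follows. As a hypothetical stand-in for the paper's method, the partition here is a uniform binning of a logistic-map series rather than one extracted from a trained recurrent network.

    ```python
    # Approximate a 1-D chaotic time series by a Markov chain over state-space bins.

    def logistic_series(n, x0=0.3, r=4.0):
        # fully chaotic logistic map x -> r * x * (1 - x)
        xs = [x0]
        for _ in range(n - 1):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    def markov_from_series(xs, n_bins=10):
        # each Markov state is a region (bin) of the state space [0, 1]
        bins = [min(int(x * n_bins), n_bins - 1) for x in xs]
        counts = [[0] * n_bins for _ in range(n_bins)]
        for a, b in zip(bins, bins[1:]):
            counts[a][b] += 1
        # row-normalise transition counts into probabilities
        P = []
        for row in counts:
            total = sum(row)
            P.append([c / total if total else 0.0 for c in row])
        return P

    xs = logistic_series(10000)
    P = markov_from_series(xs)
    # every non-empty row sums to 1, i.e. P is a valid stochastic matrix
    assert all(abs(sum(row) - 1.0) < 1e-9 for row in P if sum(row) > 0)
    ```

    In the paper, each region additionally carries a piecewise linear map distilled from the trained network; here the chain only captures the transition statistics.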

  2. Online Signature Verification using Recurrent Neural Network and Length-normalized Path Signature

    OpenAIRE

    Lai, Songxuan; Jin, Lianwen; Yang, Weixin

    2017-01-01

    Inspired by the great success of recurrent neural networks (RNNs) in sequential modeling, we introduce a novel RNN system to improve the performance of online signature verification. The training objective is to directly minimize intra-class variations and to push the distances between skilled forgeries and genuine samples above a given threshold. By back-propagating the training signals, our RNN produced discriminative features with the desired metrics. Additionally, we propose a novel d...

  3. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    Science.gov (United States)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

    For the complex human brain that enables us to communicate in natural language, we have gathered good understanding of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we are not yet able to understand the behavioural and mechanistic characteristics of natural language, nor how mechanisms in the brain allow language to be acquired and processed. Bridging the insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of the characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain, and propose a neurocognitively plausible model for embodied language acquisition from real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous-time recurrent neural network, where parts have different leakage characteristics and thus operate on multiple timescales for every modality, and of the association of the higher-level nodes of all modalities into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.

  4. Habituation in non-neural organisms: evidence from slime moulds

    OpenAIRE

    Boisseau, Romain P.; Vogel, David; Dussutour, Audrey

    2016-01-01

    Learning, defined as a change in behaviour evoked by experience, has hitherto been investigated almost exclusively in multicellular neural organisms. Evidence for learning in non-neural multicellular organisms is scant, and only a few unequivocal reports of learning have been described in single-celled organisms. Here we demonstrate habituation, an unmistakable form of learning, in the non-neural organism Physarum polycephalum. In our experiment, using chemotaxis as the behavioural output and...

  5. Learning and retrieval behavior in recurrent neural networks with pre-synaptic dependent homeostatic plasticity

    Science.gov (United States)

    Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.

    2017-08-01

    The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. There are numerous kinds of such phenomena currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that large learning periods are important in order to improve the network learning capacity and discuss this ability in the presence of distinct inhibitory currents.
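    Homeostatic synaptic scaling of the general kind discussed here can be sketched very simply: the weights converging on each neuron are multiplicatively rescaled so that their total stays at a target value. The rule below is a generic normalisation, an illustrative assumption rather than the paper's exact pre-synaptic dependent update.

    ```python
    import random

    random.seed(3)

    N, TARGET = 5, 1.0
    # W[i][j] = weight from pre-synaptic neuron j onto post-synaptic neuron i
    W = [[random.uniform(0.0, 0.4) for _ in range(N)] for _ in range(N)]

    def scale_incoming(W):
        # multiplicatively rescale each neuron's incoming weights so they sum
        # to TARGET, preserving their relative proportions
        for i in range(N):
            total = sum(W[i])
            if total > 0:
                W[i] = [w * TARGET / total for w in W[i]]
        return W

    W = scale_incoming(W)
    assert all(abs(sum(row) - TARGET) < 1e-9 for row in W)
    ```

    Because the rescaling is multiplicative, the relative ordering of weights learned by other plasticity rules is preserved while overall excitability is held constant.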

  6. ProLanGO: Protein Function Prediction Using Neural Machine Translation Based on a Recurrent Neural Network.

    Science.gov (United States)

    Cao, Renzhi; Freitas, Colton; Chan, Leong; Sun, Miao; Jiang, Haiqing; Chen, Zhangxin

    2017-10-17

    With the development of next-generation sequencing techniques, it is fast and cheap to determine protein sequences but relatively slow and expensive to extract useful information from them because of the limitations of traditional biological experimental techniques. Protein function prediction has been a long-standing challenge to fill the gap between the huge number of protein sequences and their known functions. In this paper, we propose a novel method that converts the protein function prediction problem into a language translation problem between a newly proposed protein sequence language, "ProLan", and a protein function language, "GOLan", and we build a neural machine translation model based on recurrent neural networks to translate "ProLan" into "GOLan". We blindly tested our method by attending the latest, third Critical Assessment of Function Annotation (CAFA 3) in 2016, and also evaluated its performance on selected proteins whose function was released after the CAFA competition. The good performance on the training and testing datasets demonstrates that our newly proposed method is a promising direction for protein function prediction. In summary, we propose for the first time a method that converts the protein function prediction problem into a language translation problem and applies a neural machine translation model to protein function prediction.

  7. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks, which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  8. Complex Dynamical Network Control for Trajectory Tracking Using Delayed Recurrent Neural Networks

    Directory of Open Access Journals (Sweden)

    Jose P. Perez

    2014-01-01

    Full Text Available In this paper, the problem of trajectory tracking is studied. Based on the V-stability and Lyapunov theory, a control law that achieves the global asymptotic stability of the tracking error between a delayed recurrent neural network and a complex dynamical network is obtained. To illustrate the analytic results, we present a tracking simulation of a dynamical network with each node being just one Lorenz’s dynamical system and three identical Chen’s dynamical systems.

  9. Deep Recurrent Neural Networks for Supernovae Classification

    Science.gov (United States)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves; however, the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10⁴ supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve (AUC) of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.

  10. Reward-based training of recurrent neural networks for cognitive and value-based tasks.

    Science.gov (United States)

    Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing

    2017-01-13

    Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal's internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task.

  11. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    Science.gov (United States)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings heavily depend on manual feature extraction and feature selection. For this purpose, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, the DRNN is constructed by stacking recurrent hidden layers to automatically extract features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that the proposed method is more effective than traditional intelligent fault diagnosis methods.

  12. Improving protein disorder prediction by deep bidirectional long short-term memory recurrent neural networks.

    Science.gov (United States)

    Hanson, Jack; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi

    2017-03-01

    Capturing long-range interactions between structural but not sequence neighbors of proteins is a long-standing challenging problem in bioinformatics. Recently, long short-term memory (LSTM) networks have significantly improved the accuracy of speech and image classification problems by remembering useful past information in long sequential events. Here, we have implemented deep bidirectional LSTM recurrent neural networks in the problem of protein intrinsic disorder prediction. The new method, named SPOT-Disorder, has steadily improved over a similar method using a traditional, window-based neural network (SPINE-D) in all datasets tested without separate training on short and long disordered regions. Independent tests on four other datasets including the datasets from critical assessment of structure prediction (CASP) techniques and >10 000 annotated proteins from MobiDB, confirmed SPOT-Disorder as one of the best methods in disorder prediction. Moreover, initial studies indicate that the method is more accurate in predicting functional sites in disordered regions. These results highlight the usefulness of combining LSTM with deep bidirectional recurrent neural networks in capturing non-local, long-range interactions for bioinformatics applications. SPOT-disorder is available as a web server and as a standalone program at: http://sparks-lab.org/server/SPOT-disorder/index.php . j.hanson@griffith.edu.au or yuedong.yang@griffith.edu.au or yaoqi.zhou@griffith.edu.au. Supplementary data is available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  13. Exponential stability of delayed recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Wang Zidong; Liu Yurong; Yu Li; Liu Xiaohui

    2006-01-01

    In this Letter, the global exponential stability analysis problem is considered for a class of recurrent neural networks (RNNs) with time delays and Markovian jumping parameters. The jumping parameters considered here are generated from a continuous-time discrete-state homogeneous Markov process, which are governed by a Markov process with discrete and finite state space. The purpose of the problem addressed is to derive some easy-to-test conditions such that the dynamics of the neural network is stochastically exponentially stable in the mean square, independent of the time delay. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish the desired sufficient conditions, and therefore the global exponential stability in the mean square for the delayed RNNs can be easily checked by utilizing the numerically efficient Matlab LMI toolbox, and no tuning of parameters is required. A numerical example is exploited to show the usefulness of the derived LMI-based stability conditions
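    A far simpler, classical relative of such delay-independent stability conditions is the contraction criterion: for x' = -x + W f(x(t - d)) with a 1-Lipschitz activation, a matrix norm of W below 1 suffices for global stability of the origin. The toy check below verifies this norm condition for an illustrative weight matrix and confirms by simulation that trajectories decay; it is not the LMI machinery (or the Markovian jumping setting) of the paper.

    ```python
    import math

    # delayed recurrent network x' = -x + W tanh(x(t - d)); tanh is 1-Lipschitz
    W = [[0.3, -0.2],
         [0.1,  0.25]]

    # infinity norm of W = max absolute row sum; < 1 gives a contraction,
    # hence delay-independent global stability of the origin
    inf_norm = max(sum(abs(v) for v in row) for row in W)
    assert inf_norm < 1.0

    # Euler simulation with a discrete delay buffer
    d, dt, steps = 5, 0.1, 2000
    hist = [[1.0, -1.0] for _ in range(d + 1)]  # constant initial history
    for _ in range(steps):
        x, xd = hist[-1], hist[-(d + 1)]
        fx = [math.tanh(v) for v in xd]
        hist.append([x[i] + dt * (-x[i] + sum(W[i][j] * fx[j] for j in range(2)))
                     for i in range(2)])

    assert all(abs(v) < 1e-2 for v in hist[-1])  # trajectory has decayed toward 0
    ```

    The LMI approach of the paper is far less conservative than this norm bound, and additionally handles the random switching between parameter modes.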

  14. Control of beam halo-chaos using neural network self-adaptation method

    International Nuclear Information System (INIS)

    Fang Jinqing; Huang Guoxian; Luo Xiaoshu

    2004-11-01

    Taking advantage of neural network control methods for nonlinear complex systems, control of beam halo-chaos in the periodic focusing channels (networks) of high-intensity accelerators is studied by a feed-forward back-propagating neural network self-adaptation method. The envelope radius of the high-intensity proton beam is driven to the matched beam radius by suitably selecting the control structure of the neural network and the linear feedback coefficient, and by adjusting the weight coefficients of the neural network. The beam halo-chaos is obviously suppressed and its shaking size is greatly reduced after the neural network self-adaptation control is applied. (authors)

  15. Stochastic exponential stability of the delayed reaction-diffusion recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Wang Linshan; Zhang Zhe; Wang Yangfan

    2008-01-01

    Some criteria for the global stochastic exponential stability of the delayed reaction-diffusion recurrent neural networks with Markovian jumping parameters are presented. The jumping parameters considered here are generated from a continuous-time discrete-state homogeneous Markov process, which are governed by a Markov process with discrete and finite state space. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish some easy-to-test criteria of global exponential stability in the mean square for the stochastic neural networks. The criteria are computationally efficient, since they are in the forms of some linear matrix inequalities

  16. Forecasting energy market indices with recurrent neural networks: Case study of crude oil price fluctuations

    International Nuclear Information System (INIS)

    Wang, Jie; Wang, Jun

    2016-01-01

    In an attempt to improve the forecasting accuracy of crude oil price fluctuations, a new neural network architecture is established in this work which combines multilayer perception and ERNN (Elman recurrent neural networks) with a stochastic time effective function. The ERNN is a time-varying predictive control system and is developed with the ability to keep a memory of recent events in order to predict future output. The stochastic time effective function reflects the fact that recent information has a stronger effect on investors than older information. With the established model, the empirical research shows good performance in testing the predictive effects on four different time series indices. Compared to other models, the present model can evaluate data from the 1990s to today with high accuracy and speed. The applied CID (complexity invariant distance) analysis and multiscale CID analysis are provided as new, useful measures confirming the better predictive ability of the proposed model over traditional models. - Highlights: • A new forecasting model is developed by a random Elman recurrent neural network. • The forecasting accuracy of crude oil price fluctuations is improved by the model. • The forecasting results of the proposed model are more accurate than those of compared models. • Two new distance analysis methods are applied to confirm the predicting results.
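    The two ingredients of such a model, an Elman-style recurrent cell and a time-effective weighting that emphasises recent samples, can be sketched as below. The exponential weight function, network sizes, random weights, and sine-wave data are illustrative assumptions, not the paper's specification (which also adds a stochastic term to the weighting).

    ```python
    import math, random

    random.seed(0)

    H = 4  # hidden (context) units

    def init(rows, cols):
        return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

    W_in, W_rec, W_out = init(H, 1), init(H, H), init(1, H)

    def step(x, h):
        # Elman update: h' = tanh(W_in x + W_rec h), y = W_out h'
        new_h = [math.tanh(W_in[i][0] * x + sum(W_rec[i][j] * h[j] for j in range(H)))
                 for i in range(H)]
        y = sum(W_out[0][j] * new_h[j] for j in range(H))
        return y, new_h

    series = [math.sin(0.3 * t) for t in range(50)]
    h = [0.0] * H
    loss = norm = 0.0
    T = len(series) - 1
    for t in range(T):
        y, h = step(series[t], h)
        w = math.exp((t - T) / 10.0)  # "time effective" weight: recent samples count more
        loss += w * (y - series[t + 1]) ** 2
        norm += w
    weighted_mse = loss / norm  # the quantity training would minimise
    assert 0.0 <= weighted_mse < 10.0
    ```

    Training would backpropagate through this weighted loss; the sketch only shows the forward pass and how the weighting discounts old prediction errors.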

  17. Some new results for recurrent neural networks with varying-time coefficients and delays

    International Nuclear Information System (INIS)

    Jiang Haijun; Teng Zhidong

    2005-01-01

    In this Letter, we consider recurrent neural networks with varying-time coefficients and delays. By constructing a new Lyapunov functional, ingeniously introducing many real parameters, and applying the technique of the Young inequality, we establish a series of criteria on the boundedness, global exponential stability, and the existence of periodic solutions. In these criteria, we do not require the response functions to be differentiable, bounded, or monotone nondecreasing. Some previous works are improved and extended.

  18. DeepProbe: Information Directed Sequence Understanding and Chatbot Design via Recurrent Neural Networks

    OpenAIRE

    Yin, Zi; Chang, Keng-hao; Zhang, Ruofei

    2017-01-01

    Information extraction and user intention identification are central topics in modern query understanding and recommendation systems. In this paper, we propose DeepProbe, a generic information-directed interaction framework which is built around an attention-based sequence to sequence (seq2seq) recurrent neural network. DeepProbe can rephrase, evaluate, and even actively ask questions, leveraging the generative ability and likelihood estimation made possible by seq2seq models. DeepProbe makes...

  19. Identification of serial number on bank card using recurrent neural network

    Science.gov (United States)

    Liu, Li; Huang, Linlin; Xue, Jian

    2018-04-01

    Identification of the serial number on a bank card has many applications. Due to different number printing modes, complex backgrounds, distortion in shape, etc., it is quite challenging to achieve high identification accuracy. In this paper, we propose a method using the Normalization-Cooperated Gradient Feature (NCGF) and a Recurrent Neural Network (RNN) based on Long Short-Term Memory (LSTM) for serial number identification. The NCGF maps the gradient direction elements of the original image to direction planes such that the RNN, with direction planes as input, can recognize numbers more accurately. Taking advantage of NCGF and RNN, we achieve 90% digit string recognition accuracy.
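    The core idea of gradient-direction features, decomposing image gradients into separate direction planes that are then fed to the recognizer, can be sketched on a tiny image. This toy uses 4 planes and central differences; the exact normalisation steps of NCGF are omitted and the image is a made-up example.

    ```python
    import math

    img = [[0, 0, 1, 1],
           [0, 0, 1, 1],
           [0, 0, 1, 1],
           [0, 0, 1, 1]]  # vertical edge: gradient points in +x
    ROWS, COLS = 4, 4
    planes = [[[0.0] * COLS for _ in range(ROWS)] for _ in range(4)]  # 0/90/180/270 deg

    for y in range(1, ROWS - 1):
        for x in range(1, COLS - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % (2 * math.pi)
            k = int(round(ang / (math.pi / 2))) % 4  # nearest of 4 directions
            planes[k][y][x] = mag  # deposit gradient magnitude on that plane

    # all gradient energy lies in the 0-degree (+x) plane for this edge
    assert sum(map(sum, planes[0])) > 0
    assert all(sum(map(sum, planes[k])) == 0 for k in (1, 2, 3))
    ```

    In the paper's pipeline these planes (typically eight of them, with soft assignment between neighbouring directions) form the per-column input sequence for the LSTM.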

  20. Self-organizing neural networks for automatic detection and classification of contrast-enhancing lesions in dynamic MR-mammography; Selbstorganisierende neuronale Netze zur automatischen Detektion und Klassifikation von Kontrast(mittel)-verstaerkten Laesionen in der dynamischen MR-Mammographie

    Energy Technology Data Exchange (ETDEWEB)

    Vomweg, T.W.; Teifke, A.; Kauczor, H.U.; Achenbach, T.; Rieker, O.; Schreiber, W.G.; Heitmann, K.R.; Beier, T.; Thelen, M. [Klinik und Poliklinik fuer Radiologie, Klinikum der Univ. Mainz (Germany)

    2005-05-01

    Purpose: Investigation and statistical evaluation of 'Self-Organizing Maps', a special type of neural network in the field of artificial intelligence, for classifying contrast-enhancing lesions in dynamic MR-mammography. Material and Methods: 176 investigations with proven histology after core biopsy or operation were randomly divided into two groups. Several Self-Organizing Maps were trained on investigations of the first group to detect and classify contrast-enhancing lesions in dynamic MR-mammography. Each single pixel's signal/time curve of all patients within the second group was analyzed by the Self-Organizing Maps. The likelihood of malignancy was visualized by color overlays on the MR images. Finally, the assessment of contrast-enhancing lesions by each network was rated visually and evaluated statistically. Results: A well-balanced neural network achieved a sensitivity of 90.5% and a specificity of 72.2% in predicting malignancy of 88 enhancing lesions. Detailed analysis of false-positive results revealed that every second fibroadenoma showed a 'typical malignant' signal/time curve, leaving no chance to differentiate between fibroadenomas and malignant tissue on contrast enhancement alone; however, this special group of lesions was represented by a well-defined area of the Self-Organizing Map. Discussion: Self-Organizing Maps are capable of classifying a dynamic signal/time curve as 'typical benign' or 'typical malignant' and can therefore be used as a second opinion. Given the now known localization on the Self-Organizing Map of fibroadenomas that enhance like malignant tumors, these lesions could be passed on to further analysis by additional post-processing elements (e.g., based on T2-weighted series or morphology analysis) in the future. (orig.)

  1. Neural correlates of processing "self-conscious" vs. "basic" emotions.

    Science.gov (United States)

    Gilead, Michael; Katzir, Maayan; Eyal, Tal; Liberman, Nira

    2016-01-29

    Self-conscious emotions are prevalent in our daily lives and play an important role in both normal and pathological behavior. Despite their immense significance, the neural substrates that are involved in the processing of such emotions are surprisingly under-studied. In light of this, we conducted an fMRI study in which participants thought of various personal events which elicited feelings of negative and positive self-conscious (i.e., guilt, pride) or basic (i.e., anger, joy) emotions. We performed a conjunction analysis to investigate the neural correlates associated with processing events that are related to self-conscious vs. basic emotions, irrespective of valence. The results show that processing self-conscious emotions resulted in activation within frontal areas associated with self-processing and self-control, namely, the mPFC extending to the dACC, and within the lateral-dorsal prefrontal cortex. Processing basic emotions resulted in activation throughout relatively phylogenetically-ancient regions of the cortex, namely in visual and tactile processing areas and in the insular cortex. Furthermore, self-conscious emotions differentially activated the mPFC such that the negative self-conscious emotion (guilt) was associated with a more dorsal activation, and the positive self-conscious emotion (pride) was associated with a more ventral activation. We discuss how these results shed light on the nature of mental representations and neural systems involved in self-reflective and affective processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. 11th Workshop on Self-Organizing Maps

    CERN Document Server

    Mendenhall, Michael; O'Driscoll, Patrick

    2016-01-01

    This book contains the articles from the international conference 11th Workshop on Self-Organizing Maps 2016 (WSOM 2016), held at Rice University in Houston, Texas, 6-8 January 2016. WSOM is a biennial international conference series starting with WSOM'97 in Helsinki, Finland, under the guidance and direction of Professor Teuvo Kohonen (Emeritus Professor, Academy of Finland). WSOM brings together the state-of-the-art theory and applications in Competitive Learning Neural Networks: SOMs, LVQs and related paradigms of unsupervised and supervised vector quantization. The current proceedings present the expert body of knowledge of 93 authors from 15 countries in 31 peer-reviewed contributions. It includes papers and abstracts from the WSOM 2016 invited speakers representing leading researchers in the theory and real-world applications of Self-Organizing Maps and Learning Vector Quantization: Professor Marie Cottrell (Universite Paris 1 Pantheon Sorbonne, France), Professor Pablo Estevez (University of Chile and ...

  3. Global robust exponential stability analysis for interval recurrent neural networks

    International Nuclear Information System (INIS)

    Xu Shengyuan; Lam, James; Ho, Daniel W.C.; Zou Yun

    2004-01-01

    This Letter investigates the problem of robust global exponential stability analysis for interval recurrent neural networks (RNNs) via the linear matrix inequality (LMI) approach. The values of the time-invariant uncertain parameters are assumed to be bounded within given compact sets. An improved condition for the existence of a unique equilibrium point and its global exponential stability of RNNs with known parameters is proposed. Based on this, a sufficient condition for the global robust exponential stability for interval RNNs is obtained. Both of the conditions are expressed in terms of LMIs, which can be checked easily by various recently developed convex optimization algorithms. Examples are provided to demonstrate the reduced conservatism of the proposed exponential stability condition

  4. Recurrent Neural Networks for Multivariate Time Series with Missing Values.

    Science.gov (United States)

    Che, Zhengping; Purushotham, Sanjay; Cho, Kyunghyun; Sontag, David; Liu, Yan

    2018-04-17

    Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
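    The masking/time-interval mechanism can be sketched as follows. This is a minimal numpy illustration of GRU-D-style input decay; the decay rate `w` is a trainable parameter in the paper but a fixed constant here, and the function covers only the input-imputation part, not the full GRU cell.

```python
import numpy as np

def decay_impute(x_obs, x_last, x_mean, m, delta, w=0.5):
    """GRU-D-style input decay (sketch): observed entries (m = 1) pass
    through; missing entries (m = 0) relax from the last observation
    toward the empirical mean as the gap `delta` grows. The rate `w` is
    trainable in the paper, fixed here for illustration."""
    gamma = np.exp(-np.maximum(0.0, w * delta))   # decay factor in (0, 1]
    return m * x_obs + (1.0 - m) * (gamma * x_last + (1.0 - gamma) * x_mean)
```

    Short gaps thus stay close to the last observation, while long gaps fall back to the feature mean, letting the downstream recurrent model exploit informative missingness.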

  5. Trait self-esteem and neural activities related to self-evaluation and social feedback

    Science.gov (United States)

    Yang, Juan; Xu, Xiaofan; Chen, Yu; Shi, Zhenhao; Han, Shihui

    2016-01-01

    Self-esteem has been associated with neural responses to self-reflection and attitude toward social feedback but in different brain regions. The distinct associations might arise from different tasks or task-related attitudes in the previous studies. The current study aimed to clarify these by investigating the association between self-esteem and neural responses to evaluation of one’s own personality traits and of others’ opinion about one’s own personality traits. We scanned 25 college students using functional MRI during evaluation of oneself or evaluation of social feedback. Trait self-esteem was measured using the Rosenberg self-esteem scale after scanning. Whole-brain regression analyses revealed that trait self-esteem was associated with the bilateral orbitofrontal activity during evaluation of one’s own positive traits but with activities in the medial prefrontal cortex, posterior cingulate, and occipital cortices during evaluation of positive social feedback. Our findings suggest that trait self-esteem modulates the degree of both affective processes in the orbitofrontal cortex during self-reflection and cognitive processes in the medial prefrontal cortex during evaluation of social feedback. PMID:26842975

  6. Trait self-esteem and neural activities related to self-evaluation and social feedback.

    Science.gov (United States)

    Yang, Juan; Xu, Xiaofan; Chen, Yu; Shi, Zhenhao; Han, Shihui

    2016-02-04

    Self-esteem has been associated with neural responses to self-reflection and attitude toward social feedback but in different brain regions. The distinct associations might arise from different tasks or task-related attitudes in the previous studies. The current study aimed to clarify these by investigating the association between self-esteem and neural responses to evaluation of one's own personality traits and of others' opinion about one's own personality traits. We scanned 25 college students using functional MRI during evaluation of oneself or evaluation of social feedback. Trait self-esteem was measured using the Rosenberg self-esteem scale after scanning. Whole-brain regression analyses revealed that trait self-esteem was associated with the bilateral orbitofrontal activity during evaluation of one's own positive traits but with activities in the medial prefrontal cortex, posterior cingulate, and occipital cortices during evaluation of positive social feedback. Our findings suggest that trait self-esteem modulates the degree of both affective processes in the orbitofrontal cortex during self-reflection and cognitive processes in the medial prefrontal cortex during evaluation of social feedback.

  7. Learning to Generate Sequences with Combination of Hebbian and Non-hebbian Plasticity in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Panda, Priyadarshini; Roy, Kaushik

    2017-01-01

    Synaptic plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine standard spike-timing-correlation-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that inclusion of the adaptive decay of synaptic weights with standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme suppresses the chaotic activity in the recurrent model substantially, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations.
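    A toy version of the combined rule, assuming rate-like activity traces and hypothetical learning/decay rates (the paper uses spike-timing correlations in a spiking model):

```python
import numpy as np

def update_weights(w, pre_trace, post_trace, lr=0.01, decay=0.005):
    """Illustrative mix of a Hebbian term (correlation of pre- and
    postsynaptic activity traces) with a non-Hebbian multiplicative
    decay that slowly erodes unused connections."""
    dw = lr * np.outer(post_trace, pre_trace)   # Hebbian: co-active pairs strengthen
    return np.clip((1.0 - decay) * w + dw, 0.0, 1.0)
```

    Silent synapses thus drift downward while co-active ones grow, which is the qualitative effect the abstract attributes to the adaptive decay: weakening unused attractors without erasing learned sequences.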

  8. Using deep recurrent neural network for direct beam solar irradiance cloud screening

    Science.gov (United States)

    Chen, Maosi; Davis, John M.; Liu, Chaoshun; Sun, Zhibin; Zempila, Melina Maria; Gao, Wei

    2017-09-01

    Cloud screening is an essential procedure for in-situ calibration and atmospheric property retrieval with the (UV-)MultiFilter Rotating Shadowband Radiometer [(UV-)MFRSR]. A previous study explored a cloud screening algorithm for direct-beam (UV-)MFRSR voltage measurements based on a stability assumption over a long time period (typically a half day or a whole day). Designing such an algorithm requires in-depth understanding of radiative transfer and delicate data manipulation. Recent rapid developments in deep neural networks and computation hardware have opened a window for modeling complicated end-to-end systems with a standardized strategy. In this study, a multi-layer dynamic bidirectional recurrent neural network is built for determining the cloudiness at each time point with a 17-year training dataset and tested with another 1-year dataset. The dataset consists of the daily 3-minute cosine-corrected voltages, airmasses, and the corresponding cloud/clear-sky labels at two stations of the USDA UV-B Monitoring and Research Program. The results show that the optimized neural network model (3 layers, 250 hidden units, and 80 epochs of training) has an overall test accuracy of 97.87% (97.56% for the Oklahoma site and 98.16% for the Hawaii site). Generally, the neural network model grasps the key concept of the original model, using data from the entire day rather than short nearby measurements to perform cloud screening. A scrutiny of the logits layer suggests that the neural network model automatically learns to calculate a quantity similar to total optical depth and finds an appropriate threshold for cloud screening.

  9. Self Organization in Compensated Semiconductors

    Science.gov (United States)

    Berezin, Alexander A.

    2004-03-01

    In a partially compensated semiconductor (PCS), the Fermi level is pinned to the donor sub-band. Due to positional randomness and almost isoenergetic hoppings, the donor-spanned electronic subsystem in a PCS forms a fluid-like, highly mobile collective state. This makes the PCS a playground for pattern formation, self-organization, complexity emergence, electronic neural networks, and perhaps even for the origins of life, bioevolution and consciousness. Through effects of impact and/or Auger ionization of donor sites, the whole PCS may collapse (spinodal decomposition) into microblocks potentially capable of replication and protobiological activity (a DNA analogue). Electronic screening effects may act in RNA fashion by introducing additional length scale(s) to the system. Spontaneous quantum computing on charged/neutral sites becomes a potential generator of informationally loaded microstructures akin to the "Carl Sagan Effect" (hidden messages in Pi in his "Contact") or the informational self-organization of the "Library of Babel" of J.L. Borges. Even general relativity effects at the Planck scale (R. Penrose) may affect the dynamics through, e.g., isotopic variations of atomic mass and local density (A.A. Berezin, 1992). Thus, a PCS can serve as a toy model (experimental and computational) at the interface of physics and the life sciences.

  10. Toward a new task assignment and path evolution (TAPE) for missile defense system (MDS) using intelligent adaptive SOM with recurrent neural networks (RNNs).

    Science.gov (United States)

    Wang, Chi-Hsu; Chen, Chun-Yao; Hung, Kun-Neng

    2015-06-01

    In this paper, a new adaptive self-organizing map (SOM) with recurrent neural network (RNN) controller is proposed for task assignment and path evolution of missile defense system (MDS). We address the problem of N agents (defending missiles) and D targets (incoming missiles) in MDS. A new RNN controller is designed to force an agent (or defending missile) toward a target (or incoming missile), and a monitoring controller is also designed to reduce the error between RNN controller and ideal controller. A new SOM with RNN controller is then designed to dispatch agents to their corresponding targets by minimizing total damaging cost. This is actually an important application of the multiagent system. The SOM with RNN controller is the main controller. After task assignment, the weighting factors of our new SOM with RNN controller are activated to dispatch the agents toward their corresponding targets. Using the Lyapunov constraints, the weighting factors for the proposed SOM with RNN controller are updated to guarantee the stability of the path evolution (or planning) system. Excellent simulations are obtained using this new approach for MDS, which show that our RNN has the lowest average miss distance among the several techniques.

  11. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    OpenAIRE

    Naikwad, S. N.; Dudul, S. V.

    2009-01-01

    A focused time lagged recurrent neural network (FTLR NN) with gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear operation where the reaction is exothermic. A literature review shows that process control of CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes tempora...

  12. Fault diagnosis of rolling bearings with recurrent neural network-based autoencoders.

    Science.gov (United States)

    Liu, Han; Zhou, Jianzhong; Zheng, Yang; Jiang, Wei; Zhang, Yuncheng

    2018-04-19

    As rolling bearings are a key part of rotary machines, their health condition is quite important for production safety. Fault diagnosis of rolling bearings has been a research focus for the sake of improving economic efficiency and guaranteeing operational security. However, the collected signals are mixed with ambient noise during the operation of the rotary machine, which brings a great challenge to exact diagnosis results. Using signals collected from multiple sensors can avoid the loss of local information and extract more helpful characteristics. A Recurrent Neural Network (RNN) is a type of artificial neural network which can deal with multiple time sequence data, and its capacity for capturing temporal relevance in sequence data has proved outstanding. This paper proposes a novel method for bearing fault diagnosis with an RNN in the form of an autoencoder. In this approach, multiple vibration values of the rolling bearings for the next period are predicted from the previous period by means of a Gated Recurrent Unit (GRU)-based denoising autoencoder. These GRU-based non-linear predictive denoising autoencoders (GRU-NP-DAEs) are trained with strong generalization ability for each different fault pattern. Then, for the given input data, the reconstruction errors between the next-period data and the output data generated by the different GRU-NP-DAEs are used to detect anomalous conditions and classify the fault type. Classic rotating machinery datasets have been employed to verify the effectiveness of the proposed diagnosis method and its superiority over some state-of-the-art methods. The experiment results indicate that the proposed method achieves satisfactory performance with strong robustness and high classification accuracy. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
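    The decision rule described above (one autoencoder per fault class, lowest reconstruction error wins) can be sketched independently of the GRU details. The autoencoders here are placeholder callables, not trained GRU-NP-DAEs:

```python
import numpy as np

def classify_by_reconstruction(segment, autoencoders):
    """Pick the class whose model reconstructs the segment best (sketch).
    `autoencoders` maps label -> callable returning a reconstruction."""
    errors = {label: float(np.mean((ae(segment) - segment) ** 2))
              for label, ae in autoencoders.items()}
    return min(errors, key=errors.get), errors
```

    A large error under every model would additionally flag an anomalous, previously unseen condition.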

  13. Ordination of self-organizing feature map neural networks and its application to the study of plant communities

    Institute of Scientific and Technical Information of China (English)

    Jintun ZHANG; Dongping MENG; Yuexiang XI

    2009-01-01

    A self-organizing feature map (SOFM) neural network is a powerful tool in analyzing and solving complex, non-linear problems. According to its features, a SOFM is entirely compatible with ordination studies of plant communities. In our present work, mathematical principles, and ordination techniques and procedures are introduced. A SOFM ordination was applied to the study of plant communities in the middle of the Taihang mountains. The ordination was carried out by using the NNTool box in MATLAB. The results of 68 quadrats of plant communities were distributed in SOFM space. The ordination axes showed the ecological gradients clearly and provided the relationships between communities with ecological meaning. The results are consistent with the reality of vegetation in the study area. This suggests that SOFM ordination is an effective technique in plant ecology. During ordination procedures, it is easy to carry out clustering of communities and so it is beneficial for combining classification and ordination in vegetation studies.
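    A minimal SOFM trainer plus the ordination step (mapping each sample to its best-matching unit's grid coordinates) might look like the following numpy sketch; the grid size and the learning-rate/neighborhood schedules are illustrative choices, not those of the MATLAB NNTool runs in the paper:

```python
import numpy as np

def train_sofm(data, grid=(5, 5), epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen SOFM trainer (illustrative stand-in for MATLAB's
    neural network toolbox)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    units = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    weights = rng.random((h * w, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                  # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.3      # shrinking neighborhood
        for x in data:
            bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
            d2 = ((units - units[bmu]) ** 2).sum(axis=1)
            nb = np.exp(-d2 / (2.0 * sigma ** 2))      # neighborhood kernel
            weights += lr * nb[:, None] * (x - weights)
    return weights, units

def ordinate(data, weights, units):
    """Place each sample (e.g. a vegetation quadrat) at its best-matching
    unit's grid coordinates -- the SOFM ordination space."""
    bmus = np.argmin(((data[:, None, :] - weights[None, :, :]) ** 2).sum(-1), axis=1)
    return units[bmus]
```

    Samples with similar composition land on nearby units, so distances in the grid reflect ecological gradients, which is the sense in which the map doubles as an ordination.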

  14. Identification of Jets Containing b-Hadrons with Recurrent Neural Networks at the ATLAS Experiment

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    A novel b-jet identification algorithm is constructed with a Recurrent Neural Network (RNN) at the ATLAS Experiment. This talk presents the expected performance of the RNN-based b-tagging in simulated $t \bar t$ events. The RNN-based b-tagging processes properties of tracks associated to jets, represented as sequences. In contrast to traditional impact-parameter-based b-tagging algorithms, which assume the tracks of jets are independent of each other, RNN-based b-tagging can exploit the spatial and kinematic correlations of tracks initiated from the same b-hadrons. The neural network nature of the tagging algorithm also allows the flexibility of extending the input features to include more track properties than can be effectively used in traditional algorithms.

  15. Self-Organizing Robots

    CERN Document Server

    Murata, Satoshi

    2012-01-01

    It is man’s ongoing hope that a machine could somehow adapt to its environment by reorganizing itself. This is what the notion of self-organizing robots is based on. The theme of this book is to examine the feasibility of creating such robots within the limitations of current mechanical engineering. The topics comprise the following aspects of such a pursuit: the philosophy of design of self-organizing mechanical systems; self-organization in biological systems; the history of self-organizing mechanical systems; a case study of a self-assembling/self-repairing system as an autonomous distributed system; a self-organizing robot that can create its own shape and robotic motion; implementation and instrumentation of self-organizing robots; and the future of self-organizing robots. All topics are illustrated with many up-to-date examples, including those from the authors’ own work. The book does not require advanced knowledge of mathematics to be understood, and will be of great benefit to students in the rob...

  16. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Alireza Alemi

    2015-08-01

    Full Text Available Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the
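    The three-threshold rule described in the abstract can be written down directly; the threshold and learning-rate values here are hypothetical:

```python
import numpy as np

def three_threshold_update(w, x, field, th_low=-1.0, th_mid=0.0, th_high=1.0, lr=0.05):
    """Plasticity acts only on synapses with active inputs, and only when
    the local field lies between the outer thresholds; the middle threshold
    decides potentiation vs. depression (values illustrative)."""
    w = w.copy()
    if th_low < field < th_high:                 # outside: no plasticity
        sign = 1.0 if field > th_mid else -1.0   # above mid: potentiate; below: depress
        w[x > 0] += sign * lr
    return w
```

    The no-plasticity zones above and below the outer thresholds are what stop well-learned patterns from being over-strengthened, which is key to the rule's near-maximal capacity.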

  17. Biological oscillations for learning walking coordination: dynamic recurrent neural network functionally models physiological central pattern generator.

    Science.gov (United States)

    Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy

    2013-01-01

    The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.
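    The training signals described above, sums of the first three harmonics of the gait frequency, are easy to sketch; the amplitudes and phases below are placeholders, not fitted values:

```python
import numpy as np

def harmonic_signal(t, f0, amps, phases):
    """Elevation-angle-style training input: a sum of the first harmonics
    of the gait cycle frequency f0 (amplitudes/phases hypothetical)."""
    return sum(a * np.sin(2.0 * np.pi * (k + 1) * f0 * t + p)
               for k, (a, p) in enumerate(zip(amps, phases)))
```

    By construction the signal is periodic with period 1/f0, matching the cyclic structure of the elevation angles the DRNN was trained to reproduce.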

  18. Hierarchical organization versus self-organization

    OpenAIRE

    Busseniers, Evo

    2014-01-01

    In this paper we try to define the difference between hierarchical organization and self-organization. Organization is defined as a structure with a function, so the difference between hierarchical organization and self-organization can be defined both in terms of structure and in terms of function. In the next two chapters these two definitions are given. For the structure we will use some existing definitions in graph theory; for the function we will use existing theory on (self-)organization. In the t...

  19. A recurrent neural network for adaptive beamforming and array correction.

    Science.gov (United States)

    Che, Hangjun; Li, Chuandong; He, Xing; Huang, Tingwen

    2016-08-01

    In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. In order to minimize sidelobe interference, the problem is described as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight values within the feasible region, which is derived from the array's state and the plane wave's information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under an array mismatch situation. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to search for exact solutions under the condition of large-scale constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.
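    The spirit of the approach, a dynamical system whose state descends a convex cost while staying on the constraint surface, can be sketched for a real-valued MVDR-style problem min_w w^T R w subject to a^T w = 1. This is a discretized gradient flow, not the paper's RNN:

```python
import numpy as np

def beamform_gradient_flow(R, a, steps=4000, dt=0.005):
    """State follows the negative projected gradient of w^T R w and never
    leaves the constraint plane a^T w = 1 (real-valued sketch)."""
    w = a / (a @ a)                                  # feasible initial state
    P = np.eye(len(a)) - np.outer(a, a) / (a @ a)    # projector onto {v : a^T v = 0}
    for _ in range(steps):
        w = w - dt * (P @ (2.0 * R @ w))             # projected gradient step
    return w
```

    At the fixed point P R w = 0, so R w is proportional to a, recovering the closed-form beamformer w = R^{-1} a / (a^T R^{-1} a).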

  20. Self-reported empathy and neural activity during action imitation and observation in schizophrenia

    Directory of Open Access Journals (Sweden)

    William P. Horan

    2014-01-01

    Conclusions: Although patients with schizophrenia demonstrated largely normal patterns of neural activation across the finger movement and facial expression tasks, they reported decreased self-perceived empathy and failed to show the typical relationship between neural activity and self-reported empathy seen in controls. These findings suggest that patients show a disjunction between automatic neural responses to low-level social cues and higher-level, integrative social cognitive processes involved in self-perceived empathy.

  1. From phonemes to images : levels of representation in a recurrent neural model of visually-grounded language learning

    NARCIS (Netherlands)

    Gelderloos, L.J.; Chrupala, Grzegorz

    2016-01-01

    We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover

  2. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.
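    The Gödelization step mentioned above has a compact illustration for one-sided symbol sequences: each symbol contributes a digit in a base-n expansion, and the symbolic left shift becomes multiplication modulo 1. This is a sketch of the idea only, not the paper's full vectorial construction:

```python
def godelize(symbols, alphabet):
    """Goedel encoding of a symbol sequence as a point in [0, 1): symbol k
    of an n-letter alphabet contributes k * n**-(position + 1)."""
    n = len(alphabet)
    idx = {s: i for i, s in enumerate(alphabet)}
    return sum(idx[s] * n ** -(k + 1) for k, s in enumerate(symbols))

def shift(x, n):
    """The symbolic left shift acts on encodings as x -> n*x mod 1."""
    return (x * n) % 1.0
```

    Symbolic dynamics thus turn into arithmetic on a continuous state, which is what lets a recurrent network with real-valued units simulate an automaton.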

  3. Using recurrent neural network models for early detection of heart failure onset.

    Science.gov (United States)

    Choi, Edward; Schuetz, Andy; Stewart, Walter F; Sun, Jimeng

    2017-03-01

    We explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality. Data were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (e.g., disease diagnosis, medication orders, procedure orders) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches. Using a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP). Deep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
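The GRU mechanism used in this study can be sketched in a few lines: an update gate and a reset gate decide how much of the hidden state to carry across events, and a logistic read-out turns the final state into a risk score. The weights below are random and all names are illustrative; a real model would be trained on labelled EHR event sequences.

```python
import math, random

# Minimal pure-Python GRU cell, illustrating the gating over event sequences.
# Untrained, randomly initialized weights; for illustration only.

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

class GRUCell:
    def __init__(self, n_in, n_hid):
        rnd = lambda: random.uniform(-0.1, 0.1)
        self.n_hid = n_hid
        # one weight matrix per gate; columns index [input ; previous hidden]
        self.Wz = [[rnd() for _ in range(n_in + n_hid)] for _ in range(n_hid)]
        self.Wr = [[rnd() for _ in range(n_in + n_hid)] for _ in range(n_hid)]
        self.Wh = [[rnd() for _ in range(n_in + n_hid)] for _ in range(n_hid)]

    def step(self, x, h):
        xh = x + h
        z = [sigmoid(sum(w * v for w, v in zip(row, xh))) for row in self.Wz]  # update gate
        r = [sigmoid(sum(w * v for w, v in zip(row, xh))) for row in self.Wr]  # reset gate
        xrh = x + [ri * hi for ri, hi in zip(r, h)]
        h_tilde = [math.tanh(sum(w * v for w, v in zip(row, xrh))) for row in self.Wh]
        return [(1 - zi) * hi + zi * hti for zi, hi, hti in zip(z, h, h_tilde)]

def classify(events, cell, w_out):
    """Run a sequence of event vectors through the GRU, then a logistic read-out."""
    h = [0.0] * cell.n_hid
    for x in events:
        h = cell.step(x, h)
    return sigmoid(sum(w * v for w, v in zip(w_out, h)))

cell = GRUCell(n_in=3, n_hid=4)
p = classify([[1, 0, 0], [0, 1, 0], [0, 0, 1]], cell, [0.5] * 4)
assert 0.0 < p < 1.0  # a probability-like score from the untrained model
```

The gating is what lets the model retain clinically relevant events across the 12- to 18-month observation window instead of treating the record as an unordered bag of codes.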

  4. A neural model of figure-ground organization.

    Science.gov (United States)

    Craft, Edward; Schütze, Hartmut; Niebur, Ernst; von der Heydt, Rüdiger

    2007-06-01

    Psychophysical studies suggest that figure-ground organization is a largely autonomous process that guides, and thus precedes, allocation of attention and object recognition. The discovery of border-ownership representation in single neurons of early visual cortex has confirmed this view. Recent theoretical studies have demonstrated that border-ownership assignment can be modeled as a process of self-organization by lateral interactions within V2 cortex. However, the mechanism proposed relies on propagation of signals through horizontal fibers, which would result in increasing delays of the border-ownership signal with increasing size of the visual stimulus, in contradiction with experimental findings. It also remains unclear how the resulting border-ownership representation would interact with attention mechanisms to guide further processing. Here we present a model of border-ownership coding based on dedicated neural circuits for contour grouping that produce border-ownership assignment and also provide handles for mechanisms of selective attention. The results are consistent with neurophysiological and psychophysical findings. The model makes predictions about the hypothetical grouping circuits and the role of feedback between cortical areas.

  5. Engine cylinder pressure reconstruction using crank kinematics and recurrently-trained neural networks

    Science.gov (United States)

    Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.

    2017-02-01

    A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions: i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm, is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data is obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information, provides fast and robust training. By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.
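The NARX structure described above makes the next output a nonlinear function of lagged outputs and lagged exogenous inputs. The sketch below runs such a model in closed loop ("generalisation" mode, feeding back its own predictions), as one would when reconstructing pressure from crank kinematics. The weights, lag orders, and the sinusoidal input are illustrative assumptions, not a trained RAGD model.

```python
import math

# Sketch of a NARX-style one-step model run in closed loop:
# y(t) = f(y(t-1..t-na), u(t..t-nb+1)). Coefficients are illustrative.

def narx_step(y_lags, u_lags, w_y, w_u, bias=0.0):
    s = bias
    s += sum(w * y for w, y in zip(w_y, y_lags))      # autoregressive part
    s += sum(w * u for w, u in zip(w_u, u_lags))      # exogenous part
    return math.tanh(s)

def closed_loop(u_series, na=2, nb=2, w_y=(0.3, -0.1), w_u=(0.8, 0.2)):
    y_hist = [0.0] * na
    out = []
    for t in range(len(u_series)):
        u_lags = [u_series[max(t - k, 0)] for k in range(nb)]
        y = narx_step(y_hist, u_lags, w_y, w_u)       # feed back own predictions
        out.append(y)
        y_hist = [y] + y_hist[:na - 1]
    return out

# hypothetical crank-kinematics input; output stands in for a pressure trace
pressure = closed_loop([math.sin(0.1 * t) for t in range(100)])
assert len(pressure) == 100 and all(-1.0 <= p <= 1.0 for p in pressure)
```

Closed-loop operation is exactly where recurrent training matters: errors fed back through `y_hist` compound, so the network must be trained through the feedback path rather than on one-step targets alone.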

  6. Self-Organized Complexity and Coherent Infomax from the Viewpoint of Jaynes’s Probability Theory

    Directory of Open Access Journals (Sweden)

    William A. Phillips

    2012-01-01

    Full Text Available This paper discusses concepts of self-organized complexity and the theory of Coherent Infomax in the light of Jaynes’s probability theory. Coherent Infomax shows, in principle, how adaptively self-organized complexity can be preserved and improved by using probabilistic inference that is context-sensitive. It argues that neural systems do this by combining local reliability with flexible, holistic, context-sensitivity. Jaynes argued that the logic of probabilistic inference shows it to be based upon Bayesian and Maximum Entropy methods or special cases of them. He presented his probability theory as the logic of science; here it is considered as the logic of life. It is concluded that the theory of Coherent Infomax specifies a general objective for probabilistic inference, and that contextual interactions in neural systems perform functions required of the scientist within Jaynes’s theory.

  7. Centralized and decentralized global outer-synchronization of asymmetric recurrent time-varying neural network by data-sampling.

    Science.gov (United States)

    Lu, Wenlian; Zheng, Ren; Chen, Tianping

    2016-03-01

    In this paper, we discuss outer-synchronization of the asymmetrically connected recurrent time-varying neural networks. By using both centralized and decentralized discretization data sampling principles, we derive several sufficient conditions based on three vector norms to guarantee that the difference of any two trajectories starting from different initial values of the neural network converges to zero. The lower bounds of the common time intervals between data samples in centralized and decentralized principles are proved to be positive, which guarantees exclusion of Zeno behavior. A numerical example is provided to illustrate the efficiency of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
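The setting can be illustrated numerically: two trajectories of the same stable recurrent network, started from different initial values, with the recurrent feedback evaluated only at discrete sampling instants (the held, sampled state). The scalar network, gain, and sampling interval below are illustrative assumptions, not the paper's conditions or norms.

```python
import math

# Numerical sketch of sampled-data outer-synchronization: two trajectories
# of one stable recurrent unit, feedback computed from held samples only.

def simulate(x0, steps=2000, dt=0.01, sample_every=10, w=0.5):
    x, x_sampled = x0, x0
    for k in range(steps):
        if k % sample_every == 0:
            x_sampled = x                    # data-sampling: hold the state
        x = x + dt * (-x + w * math.tanh(x_sampled))
    return x

a = simulate(2.0)
b = simulate(-1.5)
gap = abs(a - b)
assert gap < 1e-3   # the two trajectories have converged to each other
```

The point mirrored here is the paper's conclusion: with a positive lower bound on the sampling interval, the difference of trajectories from different initial values still converges to zero.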

  8. MACHINE LEARNING FOR THE SELF-ORGANIZATION OF DISTRIBUTED SYSTEMS IN ECONOMIC APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2017-03-01

    Full Text Available In this paper, an application of machine learning to the problem of self-organization of distributed systems has been discussed with regard to economic applications, with particular emphasis on supervised neural network learning to predict stock investments and some ratings of companies. In addition, genetic programming can play an important role in the preparation and testing of several financial information systems. For this reason, machine learning applications have been discussed because some software applications can be automatically constructed by genetic programming. To obtain a competitive advantage, machine learning can be used for the management of self-organizing cloud computing systems performing calculations for business. In addition, the use of selected economic self-organizing distributed systems has been described, including some methods of testing the prediction of borrower reliability. Finally, some conclusions and directions for further research have been proposed.

  9. Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks.

    Science.gov (United States)

    Cheng, Long; Hou, Zeng-Guang; Lin, Yingzi; Tan, Min; Zhang, Wenjun Chris; Wu, Fang-Xiang

    2011-05-01

    A recurrent neural network is proposed for solving the non-smooth convex optimization problem with convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution of the original optimization problem by using the Lagrangian saddle-point theorem. Under weak conditions, the proposed neural network is proved to be stable, and the state of the neural network is convergent to one of its equilibrium points. Compared with the existing neural network models for non-smooth optimization problems, the proposed neural network can deal with a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is used to solve the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show the satisfactory identification accuracy, which demonstrates the effectiveness and efficiency of the proposed approach.
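The core dynamics can be sketched on a 1-D unconstrained example: the state follows a selection from the negative Clarke generalized gradient, discretized with Euler steps. We use the hypothetical objective f(x) = |x| + (x - 1)^2, whose minimizer is x* = 0.5; the paper's network additionally handles inequality and equality constraints through Lagrangian saddle-point dynamics, which this sketch omits.

```python
# Euler-discretized subgradient dynamics for a non-smooth convex objective,
# f(x) = |x| + (x - 1)**2 with minimizer x* = 0.5. Illustrative only.

def clarke_grad(x):
    # a selection from the Clarke generalized gradient of f
    sub_abs = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)   # subgradient of |x|
    return sub_abs + 2.0 * (x - 1.0)                        # gradient of (x-1)^2

def run(x0, steps=5000, dt=0.001):
    x = x0
    for _ in range(steps):
        x -= dt * clarke_grad(x)    # x' = -grad, the network's state equation
    return x

x_star = run(x0=3.0)
assert abs(x_star - 0.5) < 1e-2
```

At the minimizer the optimality condition is 0 ∈ ∂f(x*), which for x > 0 reads 1 + 2(x - 1) = 0, giving x* = 0.5; the discretized dynamics settle there.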

  10. Neural basis of self and other representation in autism: an FMRI study of self-face recognition.

    Directory of Open Access Journals (Sweden)

    Lucina Q Uddin

    Full Text Available Autism is a developmental disorder characterized by decreased interest and engagement in social interactions and by enhanced self-focus. While previous theoretical approaches to understanding autism have emphasized social impairments and altered interpersonal interactions, there is a recent shift towards understanding the nature of the representation of the self in individuals with autism spectrum disorders (ASD). Still, the neural mechanisms subserving self-representations in ASD are relatively unexplored. We used event-related fMRI to investigate brain responsiveness to images of the subjects' own face and to faces of others. Children with ASD and typically developing (TD) children viewed randomly presented digital morphs between their own face and a gender-matched other face, and made "self/other" judgments. Both groups of children activated a right premotor/prefrontal system when identifying images containing a greater percentage of the self face. However, while TD children showed activation of this system during both self- and other-processing, children with ASD only recruited this system while viewing images containing mostly their own face. This functional dissociation between the representation of self versus others points to a potential neural substrate for the characteristic self-focus and decreased social understanding exhibited by these individuals, and suggests that individuals with ASD lack the shared neural representations for self and others that TD children and adults possess and may use to understand others.

  11. Habituation in non-neural organisms: evidence from slime moulds.

    Science.gov (United States)

    Boisseau, Romain P; Vogel, David; Dussutour, Audrey

    2016-04-27

    Learning, defined as a change in behaviour evoked by experience, has hitherto been investigated almost exclusively in multicellular neural organisms. Evidence for learning in non-neural multicellular organisms is scant, and only a few unequivocal reports of learning have been described in single-celled organisms. Here we demonstrate habituation, an unmistakable form of learning, in the non-neural organism Physarum polycephalum. In our experiment, using chemotaxis as the behavioural output and quinine or caffeine as the stimulus, we showed that P. polycephalum learnt to ignore quinine or caffeine when the stimuli were repeated, but responded again when the stimulus was withheld for a certain time. Our results meet the principal criteria that have been used to demonstrate habituation: responsiveness decline and spontaneous recovery. To distinguish habituation from sensory adaptation or motor fatigue, we also show stimulus specificity. Our results point to the diversity of organisms lacking neurons, which likely display a hitherto unrecognized capacity for learning, and suggest that slime moulds may be an ideal model system in which to investigate fundamental mechanisms underlying learning processes. Moreover, documenting learning in non-neural organisms such as slime moulds is centrally important to a comprehensive, phylogenetic understanding of when and where in the tree of life the earliest manifestations of learning evolved. © 2016 The Author(s).

  12. Automatic construction of a recurrent neural network based classifier for vehicle passage detection

    Science.gov (United States)

    Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur

    2017-03-01

    Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detection of a vehicle passage through a checkpoint. As an input to the classifier we use multidimensional signals of various sensors that are installed on the checkpoint. Obtained results demonstrate that the previous approach to handcrafting a classifier, consisting of a set of deterministic rules, can be successfully replaced by automatic RNN training on appropriately labelled data.

  13. Self-Organizing Neural Integration of Pose-Motion Features for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    German Ignacio Parisi

    2015-06-01

    Full Text Available The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented towards human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatiotemporal dependencies. During training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best 21 results for a public benchmark of domestic daily actions.
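The Growing When Required idea behind these networks can be reduced to one rule: insert a new prototype whenever the best-matching node represents the current input poorly (its activity falls below a threshold), otherwise adapt the winner. The sketch below is a heavily simplified version under assumed thresholds; the full algorithm's habituation counters and topology edges are omitted.

```python
import math

# Minimal GWR-style insertion rule: grow a node when the best match is poor.
# Thresholds, data, and the midpoint insertion are illustrative assumptions.

def gwr_fit(samples, activity_threshold=0.8, lr=0.1):
    nodes = [list(samples[0])]
    for x in samples[1:]:
        dists = [math.dist(x, n) for n in nodes]
        best = min(range(len(nodes)), key=lambda i: dists[i])
        activity = math.exp(-dists[best])        # high only near a prototype
        if activity < activity_threshold:
            # insert a new node between the input and the best match
            nodes.append([(xi + ni) / 2 for xi, ni in zip(x, nodes[best])])
        else:
            # otherwise move the best match towards the input
            nodes[best] = [ni + lr * (xi - ni) for xi, ni in zip(x, nodes[best])]
    return nodes

# two well-separated clusters should yield more than one prototype
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)] * 5
protos = gwr_fit(data)
assert len(protos) >= 2
```

Unlike a fixed-size SOM, the node count is driven by the data, which is what lets the model "dynamically change its topological structure to better match the input space."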

  14. Recurrent Neural Network Model for Constructive Peptide Design.

    Science.gov (United States)

    Müller, Alex T; Hiss, Jan A; Schneider, Gisbert

    2018-02-26

    We present a generative long short-term memory (LSTM) recurrent neural network (RNN) for combinatorial de novo peptide design. RNN models capture patterns in sequential data and generate new data instances from the learned context. Amino acid sequences represent a suitable input for these machine-learning models. Generative models trained on peptide sequences could therefore facilitate the design of bespoke peptide libraries. We trained RNNs with LSTM units on pattern recognition of helical antimicrobial peptides and used the resulting model for de novo sequence generation. Of these sequences, 82% were predicted to be active antimicrobial peptides compared to 65% of randomly sampled sequences with the same amino acid distribution as the training set. The generated sequences also lie closer to the training data than manually designed amphipathic helices. The results of this study showcase the ability of LSTM RNNs to construct new amino acid sequences within the applicability domain of the model and motivate their prospective application to peptide and protein design without the need for the exhaustive enumeration of sequence libraries.
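The generation procedure reduces to a sampling loop: a trained sequence model assigns a probability distribution over the next amino acid given the sequence so far, and new peptides are sampled token by token until an end token. In the sketch below the "model" is a stand-in stub (its probabilities are invented for illustration); the paper uses an LSTM trained on helical antimicrobial peptides.

```python
import random

# De novo sequence generation loop with a placeholder next-token model.
# The probability rule in next_token_probs is an assumption, not the paper's LSTM.

ALPHABET = list("ACDEFGHIKLMNPQRSTVWY") + ["<end>"]

random.seed(1)

def next_token_probs(prefix):
    # stand-in for the trained LSTM's softmax output
    p_end = min(0.05 * len(prefix), 1.0)            # longer peptides stop more often
    p_aa = (1.0 - p_end) / (len(ALPHABET) - 1)
    return [p_aa] * (len(ALPHABET) - 1) + [p_end]

def sample_peptide(max_len=40):
    seq = []
    while len(seq) < max_len:
        probs = next_token_probs(seq)
        tok = random.choices(ALPHABET, weights=probs)[0]
        if tok == "<end>":
            break
        seq.append(tok)
    return "".join(seq)

pep = sample_peptide()
assert all(c in "ACDEFGHIKLMNPQRSTVWY" for c in pep) and len(pep) <= 40
```

Because each token is drawn from the learned context rather than enumerated, the model explores the applicability domain of its training data without exhaustively listing sequence libraries.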

  15. An automatic microseismic or acoustic emission arrival identification scheme with deep recurrent neural networks

    Science.gov (United States)

    Zheng, Jing; Lu, Jiren; Peng, Suping; Jiang, Tianqi

    2018-02-01

    Conventional arrival pick-up algorithms cannot avoid manual modification of their parameters when simultaneously identifying multiple events under different signal-to-noise ratios (SNRs). Therefore, in order to automatically obtain the arrivals of multiple events with high precision under different SNRs, in this study an algorithm was proposed to pick up the arrivals of microseismic or acoustic emission events based on deep recurrent neural networks. The arrival identification was performed in two steps: a training phase and a testing phase. The training process was mathematically modelled by deep recurrent neural networks using a Long Short-Term Memory architecture. During the testing phase, the learned weights were utilized to identify the arrivals in the microseismic/acoustic emission data sets. The data sets were obtained from rock physics experiments on acoustic emission. In order to obtain data sets under different SNRs, this study added random noise to the raw experimental data sets. The results showed that the proposed method attained a hit-rate above 80 per cent at an SNR of 0 dB, and approximately 70 per cent at an SNR of -5 dB, with an absolute error within 10 sampling points. These results indicated that the proposed method had high picking precision and robustness.
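The evaluation criterion reported above is simple to state in code: a predicted arrival counts as a hit when it falls within a tolerance of the true arrival (here 10 sampling points, as in the study). The pick values below are invented for illustration.

```python
# Hit-rate evaluation of arrival picks against ground truth, with the
# 10-sample tolerance used in the study. Pick values are illustrative.

def hit_rate(true_picks, predicted_picks, tol=10):
    hits = sum(1 for t, p in zip(true_picks, predicted_picks) if abs(t - p) <= tol)
    return hits / len(true_picks)

true_arrivals = [120, 340, 505, 770]     # sample indices of true onsets
predicted = [125, 332, 560, 772]         # third pick is off by 55 samples
rate = hit_rate(true_arrivals, predicted)
assert abs(rate - 0.75) < 1e-12          # 3 of 4 picks within tolerance
```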

  16. Consciousness as a phenomenon in the operational architectonics of brain organization: Criticality and self-organization considerations

    International Nuclear Information System (INIS)

    Fingelkurts, Andrew A.; Fingelkurts, Alexander A.; Neves, Carlos F.H.

    2013-01-01

    In this paper we aim to show that phenomenal consciousness is realized by a particular level of brain operational organization and that understanding human consciousness requires a description of the laws of the immediately underlying neural collective phenomena, the nested hierarchy of electromagnetic fields of brain activity – operational architectonics. We argue that the subjective mental reality and the objective neurobiological reality, although seemingly worlds apart, are intimately connected along a unified metastable continuum and are both guided by the universal laws of the physical world such as criticality, self-organization and emergence

  17. Drawing and Recognizing Chinese Characters with Recurrent Neural Network.

    Science.gov (United States)

    Zhang, Xu-Yao; Yin, Fei; Zhang, Yan-Ming; Liu, Cheng-Lin; Bengio, Yoshua

    2018-04-01

    Recent deep learning based approaches have achieved great success on handwriting recognition. Chinese characters are among the most widely adopted writing systems in the world. Previous research has mainly focused on recognizing handwritten Chinese characters. However, recognition is only one aspect of understanding a language; another challenging and interesting task is to teach a machine to automatically write (pictographic) Chinese characters. In this paper, we propose a framework by using the recurrent neural network (RNN) as both a discriminative model for recognizing Chinese characters and a generative model for drawing (generating) Chinese characters. To recognize Chinese characters, previous methods usually adopt the convolutional neural network (CNN) models which require transforming the online handwriting trajectory into image-like representations. Instead, our RNN based approach is an end-to-end system which directly deals with the sequential structure and does not require any domain-specific knowledge. With the RNN system (combining an LSTM and GRU), state-of-the-art performance can be achieved on the ICDAR-2013 competition database. Furthermore, under the RNN framework, a conditional generative model with character embedding is proposed for automatically drawing recognizable Chinese characters. The generated characters (in vector format) are human-readable and also can be recognized by the discriminative RNN model with high accuracy. Experimental results verify the effectiveness of using RNNs as both generative and discriminative models for the tasks of drawing and recognizing Chinese characters.

  18. Global exponential stability and periodicity of reaction-diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions

    International Nuclear Information System (INIS)

    Lu Junguo; Lu Linji

    2009-01-01

    In this paper, global exponential stability and periodicity of a class of reaction-diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions are studied by constructing suitable Lyapunov functionals and utilizing some inequality techniques. We first prove global exponential convergence to 0 of the difference between any two solutions of the original neural networks; the existence and uniqueness of the equilibrium are direct results of this procedure. This approach is different from the usually used one, where the existence, uniqueness of equilibrium and stability are proved in two separate steps. Secondly, we prove periodicity. Sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the equilibrium and periodic solution are given. These conditions are easy to verify and our results play an important role in the design and application of globally exponentially stable neural circuits and periodic oscillatory neural circuits.

  19. Self-reported empathy and neural activity during action imitation and observation in schizophrenia.

    Science.gov (United States)

    Horan, William P; Iacoboni, Marco; Cross, Katy A; Korb, Alex; Lee, Junghee; Nori, Poorang; Quintana, Javier; Wynn, Jonathan K; Green, Michael F

    2014-01-01

    Although social cognitive impairments are key determinants of functional outcome in schizophrenia, their neural bases are poorly understood. This study investigated neural activity during imitation and observation of finger movements and facial expressions in schizophrenia, and its correlates with self-reported empathy. Twenty-three schizophrenia outpatients and 23 healthy controls were studied with functional magnetic resonance imaging (fMRI) while they imitated, executed, or simply observed finger movements and facial emotional expressions. Between-group activation differences, as well as relationships between activation and self-reported empathy, were evaluated. Both patients and controls similarly activated neural systems previously associated with these tasks. We found no significant between-group differences in task-related activations. There were, however, between-group differences in the correlation between self-reported empathy and right inferior frontal (pars opercularis) activity during observation of facial emotional expressions. As in previous studies, controls demonstrated a positive association between brain activity and empathy scores. In contrast, the pattern in the patient group reflected a negative association between brain activity and empathy. Although patients with schizophrenia demonstrated largely normal patterns of neural activation across the finger movement and facial expression tasks, they reported decreased self-perceived empathy and failed to show the typical relationship between neural activity and self-reported empathy seen in controls. These findings suggest that patients show a disjunction between automatic neural responses to low level social cues and higher level, integrative social cognitive processes involved in self-perceived empathy.

  20. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    Directory of Open Access Journals (Sweden)

    S. N. Naikwad

    2009-01-01

    Full Text Available A focused time lagged recurrent neural network (FTLR NN) with a gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear behaviour in which the reaction is exothermic. A review of the literature shows that process control of CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes temporal relationships in the input-output mappings, a time lagged recurrent neural network is particularly suited for identification purposes. The standard back propagation algorithm with a momentum term is used in this model. The various parameters, such as the number of processing elements, number of hidden layers, training and testing percentage, learning rule, and transfer function in the hidden and output layers, are investigated on the basis of performance measures such as MSE, NMSE, and the correlation coefficient on the testing data set. Finally, the effects of different norms are tested along with variation in the gamma memory filter. It is demonstrated that the dynamic NN model has a remarkable system identification capability for the problems considered in this paper. Thus the FTLR NN with gamma memory filter can be used to learn the underlying highly nonlinear dynamics of the system, which is the major contribution of this paper.
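The gamma memory filter that feeds such a focused time-lagged network is a cascade of leaky taps, g_k(t) = (1 - mu) * g_k(t-1) + mu * g_{k-1}(t-1) with g_0(t) = x(t), where mu trades memory depth against resolution. The tap count and mu below are illustrative assumptions.

```python
# Gamma memory filter: a cascade of leaky taps whose outputs replace the
# raw tapped delay line at the network input. Parameter values illustrative.

def gamma_memory(signal, taps=3, mu=0.5):
    g = [0.0] * (taps + 1)       # g[0] is the raw input, g[1..taps] the taps
    history = []
    for x in signal:
        prev = g[:]              # g_k(t-1) for all k
        g[0] = x
        for k in range(1, taps + 1):
            g[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
        history.append(g[1:])    # tap outputs fed to the network
    return history

out = gamma_memory([1.0, 0.0, 0.0, 0.0], taps=2, mu=0.5)
# an impulse spreads through the taps and decays over time
assert out[1][0] == 0.5 and abs(out[2][0] - 0.25) < 1e-12
```

With mu = 1 the cascade reduces to an ordinary tapped delay line; smaller mu lets a fixed number of taps cover a longer effective memory, which is what "variation in the gamma memory filter" tunes.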

  1. Self-enhancement learning: target-creating learning and its application to self-organizing maps.

    Science.gov (United States)

    Kamimura, Ryotaro

    2011-05-01

    In this article, we propose a new learning method called "self-enhancement learning." In this method, targets for learning are not given from the outside, but they can be spontaneously created within a neural network. To realize the method, we consider a neural network with two different states, namely, an enhanced and a relaxed state. The enhanced state is one in which the network responds very selectively to input patterns, while in the relaxed state, the network responds almost equally to input patterns. The gap between the two states can be reduced by minimizing the Kullback-Leibler divergence between the two states with free energy. To demonstrate the effectiveness of this method, we applied self-enhancement learning to the self-organizing maps, or SOM, in which lateral interactions were added to an enhanced state. We applied the method to the well-known Iris, wine, housing and cancer machine learning database problems. In addition, we applied the method to real-life data, a student survey. Experimental results showed that the U-matrices obtained were similar to those produced by the conventional SOM. Class boundaries were made clearer in the housing and cancer data. For all the data, except for the cancer data, better performance could be obtained in terms of quantitative and topological errors. In addition, we could see that the trustworthiness and continuity, referring to the quality of neighborhood preservation, could be improved by the self-enhancement learning. Finally, we used modern dimensionality reduction methods and compared their results with those obtained by the self-enhancement learning. The results obtained by the self-enhancement were not superior to but comparable with those obtained by the modern dimensionality reduction methods.
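The gap between the enhanced and relaxed states can be illustrated with a toy computation: the same unit activations passed through a softmax at a low temperature (selective, "enhanced") and a high temperature (nearly uniform, "relaxed"), with the gap measured by Kullback-Leibler divergence. Self-enhancement learning minimizes such a divergence; here we only compute it, and the activations and temperatures are invented for illustration.

```python
import math

# Enhanced vs. relaxed response states of the same units, and the KL
# divergence between them. Values are illustrative, not the paper's model.

def softmax(acts, temperature):
    exps = [math.exp(a / temperature) for a in acts]
    z = sum(exps)
    return [e / z for e in exps]

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

acts = [2.0, 1.0, 0.1]
enhanced = softmax(acts, temperature=0.2)   # responds very selectively
relaxed = softmax(acts, temperature=5.0)    # responds almost equally
gap = kl(enhanced, relaxed)
assert gap > 0.0                 # the states differ
assert kl(enhanced, enhanced) == 0.0
```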

  2. [The roles of otolith organs in the recurrence of primary benign paroxysmal positional vertigo].

    Science.gov (United States)

    Zhou, Xiaowei; Yu, Youjun; Wu, Ziming; Liu, Xinjian; Chen, Xianbing

    2015-09-01

    To explore the roles of otolith organs in the occurrence and recurrence of primary benign paroxysmal positional vertigo (BPPV) by vestibular evoked myogenic potential (VEMP) testing. We enrolled 17 recurrent primary BPPV patients and 42 non-recurrent primary BPPV patients between September 2014 and November 2014. All patients underwent VEMP tests, including cervical vestibular evoked myogenic potential (cVEMP) and ocular vestibular evoked myogenic potential (oVEMP) tests. An abnormal response was defined as non-elicitation or an asymmetry ratio between the two sides larger than 29%. A significant difference was found in the abnormal rate between cVEMP and oVEMP (P < 0.05). No significant difference was found in sex and age between recurrent and non-recurrent groups (P > 0.05). The impairment of otolith organs, especially the utricle, is related to primary BPPV. Dysfunction of the utricle may play a role in the recurrence of BPPV. Recurrence of BPPV is not correlated with sex or age.
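The abnormality criterion described above is a short calculation: a response is abnormal when it is not elicited or when the inter-aural asymmetry ratio exceeds 29%. The formula below is the standard amplitude asymmetry ratio; the amplitude values are invented for illustration.

```python
# VEMP abnormality criterion: non-elicitation, or asymmetry ratio > 29%.
# Amplitude values are illustrative.

def asymmetry_ratio(left, right):
    return abs(left - right) / (left + right) * 100.0

def is_abnormal(left, right, cutoff=29.0):
    if left == 0 or right == 0:          # non-elicitation on one side
        return True
    return asymmetry_ratio(left, right) > cutoff

assert is_abnormal(100.0, 40.0)      # ratio of about 42.9% -> abnormal
assert not is_abnormal(100.0, 80.0)  # ratio of about 11.1% -> normal
```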

  3. Automatic lithofacies segmentation from well-logs data. A comparative study between the Self-Organizing Map (SOM) and Walsh transform

    Science.gov (United States)

    Aliouane, Leila; Ouadfeul, Sid-Ali; Rabhi, Abdessalem; Rouina, Fouzi; Benaissa, Zahia; Boudella, Amar

    2013-04-01

    The main goal of this work is to compare two lithofacies segmentation techniques for a reservoir interval. The first is based on Kohonen's Self-Organizing Map neural network; the second is based on the Walsh transform decomposition. Application to real well-logs data from two boreholes located in the Algerian Sahara shows that the Self-Organizing Map is able to provide more lithological detail than the lithofacies model given by the Walsh decomposition. Keywords: Comparison, Lithofacies, SOM, Walsh
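A Kohonen map of the kind used here can be sketched in a few lines: each log sample (a feature vector) is assigned to the best-matching unit, and that unit and its map neighbours move towards the sample. The 1-D map size, learning rate, neighbourhood radius, and the synthetic two-facies data are all illustrative assumptions.

```python
import math, random

# Minimal 1-D Kohonen self-organizing map for clustering log samples.
# All hyperparameters and the synthetic "facies" data are illustrative.

random.seed(0)

def train_som(samples, n_units=4, epochs=50, lr=0.3, radius=1.0):
    dim = len(samples[0])
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in samples:
            bmu = min(range(n_units),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(x, units[i])))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))  # neighbourhood
                units[i] = [u + lr * h * (a - u) for u, a in zip(units[i], x)]
    return units

# two synthetic "facies" in a 2-D log-feature space
logs = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.95, 0.85)] * 10
units = train_som(logs)
bmus = {min(range(len(units)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(x, units[i])))
        for x in logs}
assert len(bmus) >= 2   # the map assigns the two facies to different units
```

After training, each sample's best-matching unit serves as its lithofacies label, which is how the map segments a logged interval.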

  4. Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition

    OpenAIRE

    Li, Xiangang; Wu, Xihong

    2014-01-01

    Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve a further performance improvement, this research investigates deep extensions of LSTM, considering that deep hierarchical models have turned out to be more efficient than shallow ones. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed an...
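    The record is truncated before the architectures are described. As a hedged sketch of the generic building block being stacked (gate ordering, sizes, and initialization here are assumptions, not the paper's), one LSTM step and a simple deep stack in numpy:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; gates stacked as [input, forget, cell, output]."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
    g = np.tanh(z[2*n:3*n])               # candidate cell state
    o = 1 / (1 + np.exp(-z[3*n:]))        # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def deep_lstm(seq, layers):
    """Stack LSTM layers: layer l consumes layer l-1's hidden-state sequence."""
    outputs = seq
    for (W, U, b, n) in layers:
        h, c = np.zeros(n), np.zeros(n)
        hs = []
        for x in outputs:
            h, c = lstm_step(x, h, c, W, U, b)
            hs.append(h)
        outputs = hs
    return np.array(outputs)

rng = np.random.default_rng(0)
d_in, n = 3, 4
def make_layer(d, n):
    return (rng.normal(0, 0.5, (4 * n, d)), rng.normal(0, 0.5, (4 * n, n)),
            np.zeros(4 * n), n)

layers = [make_layer(d_in, n), make_layer(n, n)]  # two stacked LSTM layers
seq = rng.normal(size=(5, d_in))                  # length-5 input sequence
out = deep_lstm(seq, layers)                      # hidden states of the top layer
```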

  5. A Heuristic Approach to Intra-Brain Communications Using Chaos in a Recurrent Neural Network Model

    Science.gov (United States)

    Soma, Ken-ichiro; Mori, Ryota; Sato, Ryuichi; Nara, Shigetoshi

    2011-09-01

    To approach the functional roles of chaos in the brain, a heuristic model of the mechanisms of intra-brain communications is proposed. The key idea is to use chaos in the firing pattern dynamics of a recurrent neural network consisting of binary state neurons as a propagation medium for pulse signals. Computer experiments and numerical methods are introduced to evaluate signal transport characteristics by calculating correlation functions between the sending and receiving neurons of pulse signals.

  6. Modeling the dynamics of the lead bismuth eutectic experimental accelerator driven system by an infinite impulse response locally recurrent neural network

    International Nuclear Information System (INIS)

    Zio, Enrico; Pedroni, Nicola; Broggi, Matteo; Golea, Lucia Roxana

    2009-01-01

    In this paper, an infinite impulse response locally recurrent neural network (IIR-LRNN) is employed for modelling the dynamics of the Lead Bismuth Eutectic eXperimental Accelerator Driven System (LBE-XADS). The network is trained by recursive back-propagation (RBP) and its ability to estimate transients is tested under various conditions. The results demonstrate the robustness of the locally recurrent scheme in the reconstruction of complex nonlinear dynamic relationships.

  7. Improved delay-dependent globally asymptotic stability of delayed uncertain recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Yan, Ji; Bao-Tong, Cui

    2010-01-01

    In this paper, we have improved delay-dependent stability criteria for recurrent neural networks with a delay varying over a range and Markovian jumping parameters. The criteria improve over some previous ones in that they have fewer matrix variables yet less conservatism. In addition, a numerical example is provided to illustrate the applicability of the result using the linear matrix inequality toolbox in MATLAB. (general)

  8. Dynamic cultural influences on neural representations of the self.

    Science.gov (United States)

    Chiao, Joan Y; Harada, Tokiko; Komeda, Hidetsugu; Li, Zhang; Mano, Yoko; Saito, Daisuke; Parrish, Todd B; Sadato, Norihiro; Iidaka, Tetsuya

    2010-01-01

    People living in multicultural environments often encounter situations which require them to acquire different cultural schemas and to switch between these cultural schemas depending on their immediate sociocultural context. Prior behavioral studies show that priming cultural schemas reliably impacts mental processes and behavior underlying self-concept. However, less well understood is whether or not cultural priming affects neurobiological mechanisms underlying the self. Here we examined whether priming cultural values of individualism and collectivism in bicultural individuals affects neural activity in cortical midline structures underlying self-relevant processes using functional magnetic resonance imaging. Biculturals primed with individualistic values showed increased activation within medial prefrontal cortex (MPFC) and posterior cingulate cortex (PCC) during general relative to contextual self-judgments, whereas biculturals primed with collectivistic values showed increased response within MPFC and PCC during contextual relative to general self-judgments. Moreover, degree of cultural priming was positively correlated with degree of MPFC and PCC activity during culturally congruent self-judgments. These findings illustrate the dynamic influence of culture on neural representations underlying the self and, more broadly, suggest a neurobiological basis by which people acculturate to novel environments.

  9. Contemporary deep recurrent learning for recognition

    Science.gov (United States)

    Iftekharuddin, K. M.; Alam, M.; Vidyaratne, L.

    2017-05-01

    Large-scale feed-forward neural networks have seen intense application in many computer vision problems. However, these networks can become hefty and computationally intensive as the complexity of the task increases. Our work, for the first time in the literature, introduces a Cellular Simultaneous Recurrent Network (CSRN) based hierarchical neural network for object detection. CSRNs have been shown to be more effective at solving complex tasks such as maze traversal and image processing than generic feed-forward networks. While deep neural networks (DNNs) have exhibited excellent performance in object detection and recognition, such hierarchical structure has largely been absent in neural networks with recurrency. Further, our work introduces deep hierarchy in the SRN for object recognition. The simultaneous recurrency results in an unfolding effect of the SRN through time, potentially enabling the design of an arbitrarily deep network. This paper reports experiments on face, facial expression and character recognition tasks using the novel deep recurrent model and compares recognition performance with that of a generic deep feed-forward model. Finally, we demonstrate the flexibility of incorporating our proposed deep SRN based recognition framework in a humanoid robotic platform called NAO.

  10. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
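    The record does not give the network's exact dynamics. As a loose, hedged illustration of minimizing a piecewise-linear objective with a hard-limiting activation (not the paper's model; the step size, iteration count, and the unconstrained least-absolute-deviation example are assumptions), a discrete-time subgradient iteration:

```python
import numpy as np

def lad_iterate(A, b, steps=2000, lr=0.002):
    """Minimize ||A x - b||_1 via x <- x - lr * A^T sign(A x - b).
    The sign() nonlinearity plays the role of a hard-limiting activation."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= lr * A.T @ np.sign(A @ x - b)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 2))
x_true = np.array([1.5, -2.0])
b = A @ x_true                 # noise-free data, so the L1 optimum is x_true
x_hat = lad_iterate(A, b)
```

    With a constant step the iterate hovers in a small neighborhood of the optimum rather than converging exactly, which is why the paper's finite-time guarantee for the continuous-time network is a stronger result.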

  11. An Incremental Time-delay Neural Network for Dynamical Recurrent Associative Memory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An incremental time-delay neural network based on synapse growth, suitable for the dynamic control and learning of autonomous robots, is proposed to improve the learning and retrieval performance of a dynamical recurrent associative memory architecture. The model allows the steady and continuous establishment of associative memory for spatio-temporal regularities and time series in a discrete sequence of inputs. The inserted hidden units can be taken as long-term memories that expand the capacity of the network and may sometimes fade away under certain conditions. Preliminary experiments have shown that this incremental network may be a promising approach to endowing autonomous robots with the ability to adapt to new data without destroying learned patterns. The system also benefits from its potentially chaotic character for emergent behavior.

  12. Recurrent networks for wave forecasting

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper presents an application of the Artificial Neural Network, namely Backpropagation Recurrent Neural Network (BRNN) with rprop update algorithm for wave forecasting...

  13. Principles of neural information processing

    CERN Document Server

    Seelen, Werner v

    2016-01-01

    In this fundamental book the authors devise a framework that describes the working of the brain as a whole. It presents a comprehensive introduction to the principles of Neural Information Processing as well as recent and authoritative research. The book's guiding principles are the main purpose of neural activity, namely, to organize behavior to ensure survival, as well as an understanding of the evolutionary genesis of the brain. Among the principles and strategies developed are self-organization of neural systems, flexibility, the active interpretation of the world by means of construction and prediction, as well as their embedding into the world, all of which form the framework of the presented description. Since, in brains, their partial self-organization, lifelong adaptation and use of various methods of processing incoming information are all interconnected, the authors have chosen not only neurobiology and evolution theory as a basis for the elaboration of such a framework, but also syst...

  14. Distributed Recurrent Neural Forward Models with Synaptic Adaptation and CPG-based control for Complex Behaviors of Walking Robots

    Directory of Open Access Journals (Sweden)

    Sakyasingha eDasgupta

    2015-09-01

    Full Text Available Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptations that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological studies have revealed that such complex behaviors are the result of a combination of biomechanics and neural mechanisms, thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions or planning ahead can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (at each leg) recurrent neural network based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and (3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps as well as climbing over high obstacles. Furthermore we demonstrate that the newly developed recurrent network based approach to sensorimotor prediction outperforms the previous state of the art adaptive neuron
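    The record does not give the controller equations. A common minimal central pattern generator of the kind used for rhythmic gait signals is a two-neuron network whose weight matrix is a scaled rotation: with tanh units and a gain slightly above one, the origin is unstable and the network settles into a stable oscillation. The parameters below are assumptions, not the paper's:

```python
import numpy as np

def cpg_rollout(steps=400, phi=0.3, alpha=1.1):
    """Two-neuron CPG: weights form a rotation matrix scaled by alpha > 1,
    so the tanh network produces two phase-shifted rhythmic outputs."""
    W = alpha * np.array([[np.cos(phi), np.sin(phi)],
                          [-np.sin(phi), np.cos(phi)]])
    o = np.array([0.1, 0.1])   # small nonzero start to kick the oscillation
    trace = []
    for _ in range(steps):
        o = np.tanh(W @ o)
        trace.append(o.copy())
    return np.array(trace)

gait = cpg_rollout()   # columns could drive, e.g., hip and knee joints
```

    The rotation angle phi sets the oscillation frequency, while the tanh saturation bounds the amplitude.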

  15. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    Science.gov (United States)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. However, recently, recognition systems have dealt with text blocks and their compound text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is, thus, avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach highly improves performance.
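    The sliding-window idea above can be sketched as follows. This is a simplified stand-in, not the authors' exact procedure: the median-based estimator and the "lowest ink pixel per column" input representation are assumptions:

```python
import numpy as np

def correct_baseline(ink_y, window=20):
    """ink_y[c] = vertical position of the lowest ink pixel in column c.
    Estimate a local baseline with a sliding median over `window` columns,
    then shift each column so the corrected baseline is flat."""
    n = len(ink_y)
    baseline = np.array([
        np.median(ink_y[max(0, c - window // 2): c + window // 2 + 1])
        for c in range(n)
    ])
    return ink_y - baseline + np.median(ink_y)

# A skewed 'text line': the baseline drifts down 10 pixels over 100 columns.
cols = np.arange(100)
skewed = 50 + 0.1 * cols + np.random.default_rng(0).normal(0, 0.5, 100)
flat = correct_baseline(skewed)
```

    Because the estimate is local, slow fluctuations are removed as well as global skew, without segmenting the line into subparts.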

  16. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network.

    Science.gov (United States)

    Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long

    2017-01-01

    A dual-robot system is a robotic device composed of two robot arms. To eliminate the joint-angle drift and prevent the occurrence of high joint velocity, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and the repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. After that, such two subschemes are reformulated as two general quadratic programs (QPs), which can be formulated as one unified QP. A recurrent neural network (RNN) is thus presented to solve effectively the unified QP problem. At last, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and the efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.

  17. Control of self-organizing nonlinear systems

    CERN Document Server

    Klapp, Sabine; Hövel, Philipp

    2016-01-01

    The book summarizes the state-of-the-art of research on control of self-organizing nonlinear systems with contributions from leading international experts in the field. The first focus concerns recent methodological developments including control of networks and of noisy and time-delayed systems. As a second focus, the book features emerging concepts of application including control of quantum systems, soft condensed matter, and biological systems. Special topics reflecting the active research in the field are the analysis and control of chimera states in classical networks and in quantum systems, the mathematical treatment of multiscale systems, the control of colloidal and quantum transport, the control of epidemics and of neural network dynamics.

  18. Marginally Stable Triangular Recurrent Neural Network Architecture for Time Series Prediction.

    Science.gov (United States)

    Sivakumar, Seshadri; Sivakumar, Shyamala

    2017-09-25

    This paper introduces a discrete-time recurrent neural network architecture using triangular feedback weight matrices that allows a simplified approach to ensuring network and training stability. The triangular structure of the weight matrices is exploited to readily ensure that the eigenvalues of the feedback weight matrix represented by the block diagonal elements lie on the unit circle in the complex z-plane by updating these weights based on the differential of the angular error variable. Such placement of the eigenvalues together with the extended close interaction between state variables facilitated by the nondiagonal triangular elements, enhances the learning ability of the proposed architecture. Simulation results show that the proposed architecture is highly effective in time-series prediction tasks associated with nonlinear and chaotic dynamic systems with underlying oscillatory modes. This modular architecture with dual upper and lower triangular feedback weight matrices mimics fully recurrent network architectures, while maintaining learning stability with a simplified training process. While training, the block-diagonal weights (hence the eigenvalues) of the dual triangular matrices are constrained to the same values during weight updates aimed at minimizing the possibility of overfitting. The dual triangular architecture also exploits the benefit of parsing the input and selectively applying the parsed inputs to the two subnetworks to facilitate enhanced learning performance.
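    The eigenvalue-placement property described above can be illustrated directly: for a block upper-triangular matrix, the spectrum is the union of the spectra of its diagonal blocks, so making those blocks 2x2 rotations puts every eigenvalue on the unit circle regardless of the off-diagonal coupling terms. A sketch (block sizes and coupling scale are assumptions, not the paper's parameterization):

```python
import numpy as np

def triangular_feedback(thetas, coupling=0.1, seed=0):
    """Block upper-triangular feedback matrix with 2x2 rotation blocks on
    the diagonal; its eigenvalues are exactly those of the diagonal blocks."""
    k = len(thetas)
    W = np.zeros((2 * k, 2 * k))
    rng = np.random.default_rng(seed)
    for i, th in enumerate(thetas):
        W[2*i:2*i+2, 2*i:2*i+2] = [[np.cos(th), -np.sin(th)],
                                   [np.sin(th),  np.cos(th)]]
    # strictly upper blocks: coupling between state-variable pairs
    for i in range(k):
        for j in range(i + 1, k):
            W[2*i:2*i+2, 2*j:2*j+2] = coupling * rng.normal(size=(2, 2))
    return W

W = triangular_feedback([0.2, 0.7, 1.3])
eig_mags = np.abs(np.linalg.eigvals(W))   # all magnitudes equal 1
```

    Training then only needs to update the rotation angles to keep the eigenvalues on the unit circle, which is the simplification the architecture exploits.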

  19. Hierarchical Self Organizing Map for Novelty Detection using Mobile Robot with Robust Sensor

    International Nuclear Information System (INIS)

    Sha'abani, M N A H; Miskon, M F; Sakidin, H

    2013-01-01

    This paper presents a novelty detection method based on a Self-Organizing Map neural network using a mobile robot. Following a hierarchical neural network design, the network is divided into three subnetworks: position, orientation and sensor measurement. A simulation was done to demonstrate and validate the proposed method using MobileSim. Three cases of abnormal events, namely new, missing and shifted objects, are employed for performance evaluation. The detection results were then filtered for false positive detections. The results show that the inspection produced less than 2% false positive detections at high sensitivity settings.

  20. Guided Self-Help Treatment for Recurrent Binge Eating: Replication and Extension

    Science.gov (United States)

    DeBar, Lynn L.; Striegel-Moore, Ruth H.; Wilson, G. Terence; Perrin, Nancy; Yarborough, Bobbi Jo; Dickerson, John; Lynch, Frances; Rosselli, Francine; Kraemer, Helena C.

    2014-01-01

    Objective The aim of this study was to replicate and extend results of a previous blended efficacy and effectiveness trial of a low-intensity, manual-based guided self-help form of cognitive-behavioral therapy (CBT-GSH) for the treatment of binge eating disorders in a large health maintenance organization (HMO) and to compare them with usual care. Methods To extend earlier findings, the investigators modified earlier recruitment and assessment approaches and conducted a randomized clinical trial to better reflect procedures that may be reasonably carried out in real-world practices. The intervention was delivered by master’s-level interventionists to 160 female members of a health maintenance organization who met diagnostic criteria for recurrent binge eating. Data collected at baseline, immediately posttreatment, and at six- and 12-month follow-ups were used in intent-to-treat analyses. Results At the 12-month follow-up, CBT-GSH resulted in greater remission from binge eating (35%, N=26) than usual care (14%, N=10) (number needed to treat=5). The CBT-GSH group also demonstrated greater improvements in dietary restraint (d=.71) and eating, shape, and weight concerns (d=1.10, 1.24, and .98, respectively) but not weight change. Conclusions Replication of the pattern of previous findings suggests that CBT-GSH is a robust treatment for patients with recurrent binge eating. The magnitude of changes was significantly smaller than in the original study, however, suggesting that patients recruited and assessed with less intensive procedures may respond differently from their counterparts enrolled in trials requiring more comprehensive procedures. PMID:21459987

  1. Neural processing of race during imitation: self-similarity versus social status

    Science.gov (United States)

    Reynolds Losin, Elizabeth A.; Cross, Katy A.; Iacoboni, Marco; Dapretto, Mirella

    2017-01-01

    People preferentially imitate others who are similar to them or have high social status. Such imitative biases are thought to have evolved because they increase the efficiency of cultural acquisition. Here we focused on distinguishing between self-similarity and social status as two candidate mechanisms underlying neural responses to a person’s race during imitation. We used fMRI to measure neural responses when 20 African American (AA) and 20 European American (EA) young adults imitated AA, EA and Chinese American (CA) models and also passively observed their gestures and faces. We found that both AA and EA participants exhibited more activity in lateral fronto-parietal and visual regions when imitating AAs compared to EAs or CAs. These results suggest that racial self-similarity is not likely to modulate neural responses to race during imitation, in contrast with findings from previous neuroimaging studies of face perception and action observation. Furthermore, AA and EA participants associated AAs with lower social status than EAs or CAs, suggesting that the social status associated with different racial groups may instead modulate neural activity during imitation of individuals from those groups. Taken together, these findings suggest that neural responses to race during imitation are driven by socially-learned associations rather than self-similarity. This may reflect the adaptive role of imitation in social learning, where learning from higher-status models can be more beneficial. This study provides neural evidence consistent with evolutionary theories of cultural acquisition. PMID:23813738

  2. Turkish Migrant Women with Recurrent Depression: Results from Community-based Self-help Groups.

    Science.gov (United States)

    Siller, Heidi; Renner, Walter; Juen, Barbara

    2017-01-01

    The study focuses on psychosocial functioning of female Turkish immigrants in Austria with recurrent depressive disorder participating in self-help groups. Self-help groups guided by group leaders of Turkish descent should increase autonomy in participants, providing the opportunity to follow their ethnic health beliefs. Turkish immigrant women (n = 43) with recurrent depressive disorder participated in self-help groups over four months. Qualitative data of participants and group leaders, containing interviews, group protocols and supervision protocols of group leaders were analyzed using the qualitative content analysis for effects on psychosocial function, such as interaction with others, illness beliefs and benefit from self-help group. Women reported feelings of being neglected and violated by their husbands. They stated that they had gained strength and had emancipated themselves from their husbands. Self-help groups functioned as social resources and support for changes in participants' lives. Further interventions should integrate the functional value of depressive symptoms and focus on social support systems and social networks.

  3. What Can Psychiatric Disorders Tell Us about Neural Processing of the Self?

    Science.gov (United States)

    Zhao, Weihua; Luo, Lizhu; Li, Qin; Kendrick, Keith M

    2013-01-01

    Many psychiatric disorders are associated with abnormal self-processing. While these disorders also have a wide-range of complex, and often heterogeneous sets of symptoms involving different cognitive, emotional, and motor domains, an impaired sense of self can contribute to many of these. Research investigating self-processing in healthy subjects has facilitated identification of changes in specific neural circuits which may cause altered self-processing in psychiatric disorders. While there is evidence for altered self-processing in many psychiatric disorders, here we will focus on four of the most studied ones, schizophrenia, autism spectrum disorder (ASD), major depression, and borderline personality disorder (BPD). We review evidence for dysfunction in two different neural systems implicated in self-processing, namely the cortical midline system (CMS) and the mirror neuron system (MNS), as well as contributions from altered inter-hemispheric connectivity (IHC). We conclude that while abnormalities in frontal-parietal activity and/or connectivity in the CMS are common to all four disorders there is more disruption of integration between frontal and parietal regions resulting in a shift toward parietal control in schizophrenia and ASD which may contribute to the greater severity and delusional aspects of their symptoms. Abnormalities in the MNS and in IHC are also particularly evident in schizophrenia and ASD and may lead to disturbances in sense of agency and the physical self in these two disorders. A better future understanding of how changes in the neural systems sub-serving self-processing contribute to different aspects of symptom abnormality in psychiatric disorders will require that more studies carry out detailed individual assessments of altered self-processing in conjunction with measurements of neural functioning.

  4. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition.

    Directory of Open Access Journals (Sweden)

    Johannes Bill

    Full Text Available During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.

  5. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition

    Science.gov (United States)

    Bill, Johannes; Buesing, Lars; Habenschuss, Stefan; Nessler, Bernhard; Maass, Wolfgang; Legenstein, Robert

    2015-01-01

    During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input. PMID:26284370

  6. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems

    Energy Technology Data Exchange (ETDEWEB)

    Talathi, S. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-06-05

    Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world population. Epileptic patients suffer from chronic unprovoked seizures, which can result in a broad spectrum of debilitating medical and social consequences. Since seizures, in general, occur infrequently and are unpredictable, automated seizure detection systems are recommended to screen for seizures during long-term electroencephalogram (EEG) recordings. In addition, systems for early seizure detection can lead to the development of new types of intervention systems that are designed to control or shorten the duration of seizure events. In this article, we investigate the utility of recurrent neural networks (RNNs) in designing seizure detection and early seizure detection systems. We propose a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for seizure detection. We use publicly available data in order to evaluate our method and demonstrate very promising evaluation results with overall accuracy close to 100%. We also systematically investigate the application of our method for early seizure warning systems. Our method can detect about 98% of seizure events within the first 5 seconds of the overall epileptic seizure duration.
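    The GRU building block referred to above can be sketched in a few lines. This is a generic GRU step, not the paper's full detection pipeline; the dimensions (8 EEG channels, 16 hidden units) and the omitted biases and classifier head are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step (bias terms omitted for brevity)."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d, n = 8, 16   # e.g. 8 EEG channels, 16 hidden units
params = [rng.normal(0, 0.3, (n, d)) if i % 2 == 0 else rng.normal(0, 0.3, (n, n))
          for i in range(6)]   # [Wz, Uz, Wr, Ur, Wh, Uh]
h = np.zeros(n)
for x in rng.normal(size=(25, d)):   # a short window of EEG samples
    h = gru_step(x, h, *params)
# `h` now summarizes the window; a classifier head would map it to
# seizure / non-seizure.
```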

  7. Self-organizing plasmas

    International Nuclear Information System (INIS)

    Hayashi, T.; Sato, T.

    1999-01-01

    The primary purpose of this paper is to extract a grand view of self-organization through an extensive computer simulation of plasmas. The assertion is made that self-organization is governed by three key processes, i.e. the existence of an open complex system, the existence of information (energy) sources and the existence of entropy generation and expulsion processes. We find that self-organization takes place in an intermittent fashion when energy is supplied continuously from outside. In contrast, when the system state is suddenly changed into a non-equilibrium state externally, the system evolves stepwise and reaches a minimum energy state. We also find that the entropy production rate is maximized whenever a new ordered structure is created and that if the entropy generated during the self-organizing process is expelled from the system, then the self-organized structure becomes more prominent and clear. (author)

  8. Cascaded bidirectional recurrent neural networks for protein secondary structure prediction.

    Science.gov (United States)

    Chen, Jinmiao; Chaudhari, Narendra

    2007-01-01

    Protein secondary structure (PSS) prediction is an important topic in bioinformatics. Our study on a large set of non-homologous proteins shows that long-range interactions commonly exist and negatively affect PSS prediction. Besides, we also reveal strong correlations between secondary structure (SS) elements. In order to take into account the long-range interactions and SS-SS correlations, we propose a novel prediction system based on a cascaded bidirectional recurrent neural network (BRNN). We compare the cascaded BRNN against two other BRNN architectures, namely the original BRNN architecture used for speech recognition and Pollastri's BRNN, which was proposed for PSS prediction. Our cascaded BRNN achieves an overall three-state accuracy Q3 of 74.38%, and reaches a high Segment OVerlap (SOV) score of 66.0455. It outperforms the original BRNN and Pollastri's BRNN in both Q3 and SOV. Specifically, it improves the SOV score by 4-6%.

  9. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) calculates the probability of a word sequence, providing the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, the training of an RNN-LM is an ill-posed problem because of the large number of parameters arising from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize overly complex RNN-LMs by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
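    Concretely, a zero-mean Gaussian prior on the parameters turns maximum a posteriori training into cross-entropy plus an L2 penalty. The sketch below uses invented toy dimensions and a fixed prior precision `alpha` (the paper estimates this hyperparameter by maximizing the marginal likelihood); it shows the regularized objective for one predicted word, not the authors' full BRNN-LM.

    ```python
    import numpy as np

    def regularized_xent(logits, target, params, alpha):
        """Cross-entropy error plus a Gaussian-prior (L2) penalty, as in MAP estimation.

        alpha is the prior precision; alpha = 0 recovers plain cross-entropy.
        """
        logits = logits - logits.max()                       # numerically stable softmax
        log_probs = logits - np.log(np.exp(logits).sum())
        xent = -log_probs[target]                            # negative log-likelihood of target word
        l2 = 0.5 * alpha * sum(float(np.sum(p * p)) for p in params)
        return float(xent + l2)

    rng = np.random.default_rng(1)
    vocab, d_h = 10, 5                                       # toy vocabulary and hidden size
    W_out = 0.1 * rng.standard_normal((vocab, d_h))          # hypothetical output weights
    h = rng.standard_normal(d_h)                             # hidden state after reading history
    loss = regularized_xent(W_out @ h, target=3, params=[W_out], alpha=0.01)
    ```

    The penalty term only grows the objective, so the MAP solution trades likelihood against parameter magnitude, which is what yields the sparser model the abstract reports.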

  10. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks, as a major soft-computing technology, have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred the development of training algorithms for other networks, such as radial basis function, recurrent, feedback, and unsupervised Kohonen self-organizing networks. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review of the development history of artificial neural networks is presented, and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks are introduced alongside support vector machines, and limitations of ANNs are identified. The future of artificial neural network development in tandem with support vector machines is discussed in conjunction with further applications to food science and engineering, soil and water relationships for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks are reviewed as well, especially in the fields of agricultural and biological engineering.

  11. Multiplex visibility graphs to investigate recurrent neural network dynamics

    Science.gov (United States)

    Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert

    2017-03-01

    A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
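    The horizontal visibility criterion used above is easy to state: two samples are linked when every sample strictly between them lies below both endpoints (consecutive samples are always linked). A minimal sketch of the graph construction, not the authors' code:

    ```python
    def horizontal_visibility_graph(series):
        """Return edges (i, j), i < j, such that every sample strictly between
        positions i and j is lower than both endpoints (horizontal visibility)."""
        n = len(series)
        edges = set()
        for i in range(n - 1):
            edges.add((i, i + 1))          # neighbours always see each other
            top = series[i + 1]            # running maximum of the samples in between
            for j in range(i + 2, n):
                if series[i] > top and series[j] > top:
                    edges.add((i, j))
                top = max(top, series[j])
        return edges

    e = horizontal_visibility_graph([3.0, 1.0, 2.0, 0.5, 4.0])
    print(sorted(e))
    # → [(0, 1), (0, 2), (0, 4), (1, 2), (2, 3), (2, 4), (3, 4)]
    ```

    In the multiplex construction, this routine would be applied once per neuron's activation series, and each resulting graph becomes one layer whose degree distribution and other topological statistics summarize the ESN dynamics.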

  12. Frequency of recurrent urinary tract infection in patients with pelvic organ prolapse

    Directory of Open Access Journals (Sweden)

    Töz E

    2015-01-01

    Full Text Available Emrah Töz,1 Sefa Kurt,2 Çagdas Sahin,1 Mehmet Tunç Canda3 1Department of Obstetrics and Gynecology, Tepecik Training and Research Hospital, Izmir, Turkey; 2Department of Obstetrics and Gynecology, Izmir Dokuz Eylül University, Izmir, Turkey; 3Department of Obstetrics and Gynecology, Kent Hospital, Izmir, Turkey Purpose: The aim of the study was to investigate the existence of a relationship between pelvic organ prolapse (POP) and recurrent urinary tract infection (UTI). Materials and methods: The hospital database was searched for women diagnosed with pelvic floor disorders, and all medical records were reviewed for recurrent UTI, diagnosed by two or more positive urine cultures taken within 12 months of each other. The control group was created using one-to-one matching for age and menopausal status. The prevalence of recurrent UTI in these patients was compared. Results: The mean age of the 210 participants was 54.64±5.15 years. We found no association between POP and recurrent UTI. In the prolapse group, 22 women (21%) had recurrent UTI compared with 19 women (18%) in the control group (P=0.316). Post-void residual (PVR) volumes >50 mL were associated with an increased prevalence of recurrent UTI. Conclusion: POP is not a risk factor for recurrent UTI, but women with POP are more likely to have high PVR volumes. High PVR volumes increase the risk of recurrent UTI. Clinical examination and ultrasound assessment of PVR should be performed in all women presenting with prolapse and UTI. Elevated PVR is the most significant risk factor linking POP with recurrent UTI. Keywords: recurrent urinary tract infection, pelvic organ prolapse, post-void residual

  13. Recurrent neural network approach to quantum signal: coherent state restoration for continuous-variable quantum key distribution

    Science.gov (United States)

    Lu, Weizhao; Huang, Chunhui; Hou, Kun; Shi, Liting; Zhao, Huihui; Li, Zhengmei; Qiu, Jianfeng

    2018-05-01

    In continuous-variable quantum key distribution (CV-QKD), a weak signal carrying information transmits from Alice to Bob; during this process it is easily influenced by unknown noise, which reduces the signal-to-noise ratio and strongly impacts the reliability and stability of the communication. The recurrent quantum neural network (RQNN) is an artificial neural network model which can perform stochastic filtering without any prior knowledge of the signal and noise. In this paper, a modified RQNN algorithm with an expectation-maximization algorithm is proposed to process the signal in CV-QKD, which follows the basic rules of quantum mechanics. After RQNN, noise power decreases by about 15 dB, the coherent signal recognition rate of RQNN is 96%, and the quantum bit error rate (QBER) drops to 4%, which is 6.9% lower than the original QBER; channel capacity is notably enlarged.

  14. Simulating the dynamics of the neutron flux in a nuclear reactor by locally recurrent neural networks

    International Nuclear Information System (INIS)

    Cadini, F.; Zio, E.; Pedroni, N.

    2007-01-01

    In this paper, a locally recurrent neural network (LRNN) is employed for approximating the temporal evolution of a nonlinear dynamic system model of a simplified nuclear reactor. To this aim, an infinite impulse response multi-layer perceptron (IIR-MLP) is trained according to a recursive back-propagation (RBP) algorithm. The network nodes contain internal feedback paths and their connections are realized by means of IIR synaptic filters, which provide the LRNN with the necessary system state memory

  15. Identification of lithofacies using Kohonen self-organizing maps

    Science.gov (United States)

    Chang, H.-C.; Kopaska-Merkel, D. C.; Chen, H.-C.

    2002-01-01

    Lithofacies identification is a primary task in reservoir characterization. Traditional techniques of lithofacies identification from core data are costly, and it is difficult to extrapolate them to non-cored wells. We present a low-cost automated technique using Kohonen self-organizing maps (SOMs) to systematically and objectively identify lithofacies from well log data. SOMs are unsupervised artificial neural networks that map the input space into clusters in a topological form whose organization is related to trends in the input data. A case study used five wells located in Appleton Field, Escambia County, Alabama (Smackover Formation, limestone and dolomite, Oxfordian, Jurassic). A five-input, one-dimensional output approach is employed, assuming the lithofacies are in ascending/descending order with respect to paleoenvironmental energy levels. To accommodate logfacies not seen in training mode, which may potentially appear in test wells, the maximum number of outputs is set to 20 instead of four, the designated number of lithofacies in the study area. This study found eleven major clusters. The clusters were compared to depositional lithofacies identified by manual core examination. The clusters were ordered by the SOM in a pattern consistent with environmental gradients inferred from core examination: bind/boundstone, grainstone, packstone, and wackestone. This new approach predicted lithofacies identity from well log data with 78.8% accuracy, which is more accurate than using a backpropagation neural network (57.3%). The clusters produced by the SOM are ordered with respect to paleoenvironmental energy levels. This energy-related clustering provides geologists and petroleum engineers with valuable geologic information about the logfacies and their interrelationships. This advantage is not obtained with backpropagation neural networks or adaptive resonance theory neural networks. © 2002 Elsevier Science Ltd. All rights reserved.
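    For intuition, the one-dimensional Kohonen map central to the study above can be sketched in a few lines. Everything below is invented for illustration: the paper's five well-log inputs are reduced to two dimensions, the "facies" are two synthetic Gaussian clusters, and the learning-rate and neighbourhood schedules are generic defaults, not the authors' settings.

    ```python
    import numpy as np

    def train_som_1d(data, n_units, epochs=30, lr0=0.5, radius0=None, seed=0):
        """Train a 1-D Kohonen map: for each sample, find the best-matching unit
        (BMU), then pull it and its neighbours toward the sample with a Gaussian
        neighbourhood whose width shrinks over epochs."""
        rng = np.random.default_rng(seed)
        w = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
        radius0 = radius0 or n_units / 2.0
        for epoch in range(epochs):
            lr = lr0 * (1.0 - epoch / epochs)                     # decaying learning rate
            radius = max(radius0 * (1.0 - epoch / epochs), 0.5)   # shrinking neighbourhood
            for x in rng.permutation(data):
                bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
                d = np.arange(n_units) - bmu                      # distance along the 1-D map
                theta = np.exp(-(d * d) / (2.0 * radius * radius))
                w += lr * theta[:, None] * (x - w)
        return w

    # toy "well-log" vectors drawn around two hypothetical facies centroids
    rng = np.random.default_rng(1)
    a = rng.normal([0.0, 0.0], 0.1, size=(50, 2))
    b = rng.normal([1.0, 1.0], 0.1, size=(50, 2))
    weights = train_som_1d(np.vstack([a, b]), n_units=20)
    ```

    Because the neighbourhood function couples adjacent units, the trained units end up ordered along the map, which is the property the study exploits to read off a paleoenvironmental energy gradient.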

  16. New results on global exponential stability of recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Xu Shengyuan; Chu Yuming; Lu Junwei

    2006-01-01

    This Letter provides new sufficient conditions for the existence, uniqueness and global exponential stability of the equilibrium point of recurrent neural networks with time-varying delays by employing Lyapunov functions and using the Halanay inequality. The time-varying delays are not necessarily differentiable. Both Lipschitz continuous activation functions and monotone nondecreasing activation functions are considered. The derived stability criteria are expressed in terms of linear matrix inequalities (LMIs), which can be checked easily by resorting to recently developed algorithms solving LMIs. Furthermore, the proposed stability results are less conservative than some previous ones in the literature, which is demonstrated via some numerical examples

  17. New results on global exponential stability of recurrent neural networks with time-varying delays

    Energy Technology Data Exchange (ETDEWEB)

    Xu Shengyuan [Department of Automation, Nanjing University of Science and Technology, Nanjing 210094 (China)]. E-mail: syxu02@yahoo.com.cn; Chu Yuming [Department of Mathematics, Huzhou Teacher's College, Huzhou, Zhejiang 313000 (China); Lu Junwei [School of Electrical and Automation Engineering, Nanjing Normal University, 78 Bancang Street, Nanjing, 210042 (China)

    2006-04-03

    This Letter provides new sufficient conditions for the existence, uniqueness and global exponential stability of the equilibrium point of recurrent neural networks with time-varying delays by employing Lyapunov functions and using the Halanay inequality. The time-varying delays are not necessarily differentiable. Both Lipschitz continuous activation functions and monotone nondecreasing activation functions are considered. The derived stability criteria are expressed in terms of linear matrix inequalities (LMIs), which can be checked easily by resorting to recently developed algorithms solving LMIs. Furthermore, the proposed stability results are less conservative than some previous ones in the literature, which is demonstrated via some numerical examples.

  18. Information content of neural networks with self-control and variable activity

    International Nuclear Information System (INIS)

    Bolle, D.; Amari, S.I.; Dominguez Carreta, D.R.C.; Massolo, G.

    2001-01-01

    A self-control mechanism for the dynamics of neural networks with variable activity is discussed using a recursive scheme for the time evolution of the local field. It is based upon the introduction of a self-adapting time-dependent threshold as a function of both the neural and pattern activity in the network. This mechanism leads to an improvement of the information content of the network as well as an increase of the storage capacity and the basins of attraction. Different architectures are considered and the results are compared with numerical simulations

  19. The neural correlates of visual self-recognition.

    Science.gov (United States)

    Devue, Christel; Brédart, Serge

    2011-03-01

    This paper presents a review of studies that were aimed at determining which brain regions are recruited during visual self-recognition, with a particular focus on self-face recognition. A complex bilateral network, involving frontal, parietal and occipital areas, appears to be associated with self-face recognition, with a particularly high implication of the right hemisphere. Results indicate that it remains difficult to determine which specific cognitive operation is reflected by each recruited brain area, in part due to the variability of used control stimuli and experimental tasks. A synthesis of the interpretations provided by previous studies is presented. The relevance of using self-recognition as an indicator of self-awareness is discussed. We argue that a major aim of future research in the field should be to identify more clearly the cognitive operations induced by the perception of the self-face, and search for dissociations between neural correlates and cognitive components. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. Sequence-specific bias correction for RNA-seq data using recurrent neural networks.

    Science.gov (United States)

    Zhang, Yao-Zhong; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru

    2017-01-25

    The recent success of deep learning techniques in machine learning and artificial intelligence has stimulated a great deal of interest among bioinformaticians, who now wish to bring the power of deep learning to bear on a host of bioinformatics problems. Deep learning is ideally suited for biological problems that require automatic or hierarchical feature representation for biological data when prior knowledge is limited. In this work, we address the sequence-specific bias correction problem for RNA-seq data using Recurrent Neural Networks (RNNs) to model nucleotide sequences without pre-determining sequence structures. The sequence-specific bias of a read is then calculated based on the sequence probabilities estimated by the RNNs, and used in the estimation of gene abundance. We explore the application of two popular RNN recurrent units for this task and demonstrate that RNN-based approaches provide a flexible way to model nucleotide sequences without knowledge of predetermined sequence structures. Our experiments show that training an RNN-based nucleotide sequence model is efficient and that RNN-based bias correction methods compare well with the state-of-the-art sequence-specific bias correction method on the commonly used MAQC-III data set. RNNs provide an alternative and flexible way to calculate sequence-specific bias without explicitly pre-determining sequence structures.
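    As a sketch of the underlying idea (not the authors' model), a character-level RNN can assign a probability to each nucleotide given its predecessors, and a read's bias weight can then be taken relative to a uniform background. The vanilla-RNN weights below are untrained and toy-sized, and the uniform-background bias ratio is a simplification of the paper's abundance-estimation pipeline.

    ```python
    import numpy as np

    def sequence_log_prob(seq, Wx, Wh, Wo):
        """Log-probability of a nucleotide string under a tiny vanilla-RNN
        character model: at each step, a softmax over A/C/G/T given the history."""
        alphabet = "ACGT"
        h = np.zeros(Wh.shape[0])
        x = np.zeros(4)                       # start token: all-zeros input
        logp = 0.0
        for ch in seq:
            h = np.tanh(Wx @ x + Wh @ h)
            logits = Wo @ h
            logits -= logits.max()            # stable softmax
            probs = np.exp(logits) / np.exp(logits).sum()
            i = alphabet.index(ch)
            logp += float(np.log(probs[i]))
            x = np.eye(4)[i]                  # feed the observed nucleotide back in
        return logp

    rng = np.random.default_rng(0)
    d_h = 6                                   # hypothetical hidden size
    Wx = 0.3 * rng.standard_normal((d_h, 4))
    Wh = 0.3 * rng.standard_normal((d_h, d_h))
    Wo = 0.3 * rng.standard_normal((4, d_h))
    lp = sequence_log_prob("ACGTAC", Wx, Wh, Wo)
    # bias weight of a read: model probability relative to a uniform background
    bias = float(np.exp(lp - len("ACGTAC") * np.log(0.25)))
    ```

    A trained model of this kind needs no hand-specified sequence motif structure, which is the flexibility the abstract emphasizes; the per-read weight would then rescale read counts during gene-abundance estimation.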

  1. Comparison of brass alloys composition by laser-induced breakdown spectroscopy and self-organizing maps

    Energy Technology Data Exchange (ETDEWEB)

    Pagnotta, Stefano; Grifoni, Emanuela; Legnaioli, Stefano [Applied and Laser Spectroscopy Laboratory, ICCOM-CNR, Research Area of Pisa, Via G. Moruzzi 1, 56124 Pisa (Italy); Lezzerini, Marco [Department of Earth Sciences, University of Pisa, Via S. Maria 53, 56126 Pisa (Italy); Lorenzetti, Giulia [Applied and Laser Spectroscopy Laboratory, ICCOM-CNR, Research Area of Pisa, Via G. Moruzzi 1, 56124 Pisa (Italy); Palleschi, Vincenzo, E-mail: vincenzo.palleschi@cnr.it [Applied and Laser Spectroscopy Laboratory, ICCOM-CNR, Research Area of Pisa, Via G. Moruzzi 1, 56124 Pisa (Italy); Department of Civilizations and Forms of Knowledge, University of Pisa, Via L. Galvani 1, 56126 Pisa (Italy)

    2015-01-01

    In this paper we face the problem of assessing similarities in the composition of different metallic alloys, using the laser-induced breakdown spectroscopy technique. The possibility of determining the degree of similarity through the use of artificial neural networks and self-organizing maps is discussed. As an example, we present a case study involving the comparison of two historical brass samples, very similar in their composition. The results of the paper can be extended to many other situations, not necessarily associated with cultural heritage and archeological studies, where objects with similar composition have to be compared. - Highlights: • A method for assessing the similarity of materials analyzed by LIBS is proposed. • Two very similar fragments of historical brass were analyzed. • Using a simple artificial neural network the composition of the two alloys was determined. • The composition of the two brass alloys was the same within the experimental error. • Using self-organizing maps, the probability of the alloys to have the same composition was assessed.

  2. Comparison of brass alloys composition by laser-induced breakdown spectroscopy and self-organizing maps

    International Nuclear Information System (INIS)

    Pagnotta, Stefano; Grifoni, Emanuela; Legnaioli, Stefano; Lezzerini, Marco; Lorenzetti, Giulia; Palleschi, Vincenzo

    2015-01-01

    In this paper we face the problem of assessing similarities in the composition of different metallic alloys, using the laser-induced breakdown spectroscopy technique. The possibility of determining the degree of similarity through the use of artificial neural networks and self-organizing maps is discussed. As an example, we present a case study involving the comparison of two historical brass samples, very similar in their composition. The results of the paper can be extended to many other situations, not necessarily associated with cultural heritage and archeological studies, where objects with similar composition have to be compared. - Highlights: • A method for assessing the similarity of materials analyzed by LIBS is proposed. • Two very similar fragments of historical brass were analyzed. • Using a simple artificial neural network the composition of the two alloys was determined. • The composition of the two brass alloys was the same within the experimental error. • Using self-organizing maps, the probability of the alloys to have the same composition was assessed

  3. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It therefore naturally makes one ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses fractional-order-stability and fractional-order-sensitivity characteristics.

  4. Study of neural cells on organic semiconductor ultra thin films

    Energy Technology Data Exchange (ETDEWEB)

    Bystrenova, Eva; Tonazzini, Ilaria; Stoliar, Pablo; Greco, Pierpaolo; Lazar, Adina; Dutta, Soumya; Dionigi, Chiara; Cacace, Marcello; Biscarini, Fabio [ISMN-CNR, Bologna (Italy); Jelitai, Marta; Madarasz, Emilia [IEM- HAS, Budapest (Hungary); Huth, Martin; Nickel, Bert [LMU, Munich (Germany); Martini, Claudia [Dept. PNPB, Univ. of Pisa (Italy)

    2008-07-01

    Many technological advances are currently being developed for nano-fabrication, offering the ability to create and control patterns of soft materials. We report the deposition of cells on organic semiconductor ultra-thin films. This is a first step towards the development of active bio/non-bio systems for electrical transduction. Thin films of pentacene, whose thickness was systematically varied, were grown by high-vacuum sublimation. We report adhesion, growth, and differentiation of human astroglial cells and mouse neural stem cells on an organic semiconductor. The viability of astroglial cells over time was measured as a function of the roughness and characteristic morphology of the ultra-thin organic film, as well as the features of the patterned molecules. An optical fluorescence microscope coupled to an atomic force microscope was used to monitor the presence, density and shape of deposited cells. Neural stem cells remain viable, differentiate upon retinoic acid treatment and form dense neuronal networks. We have shown the possibility of integrating living neural cells on organic semiconductor thin films.

  5. Neural activity associated with self-reflection.

    Science.gov (United States)

    Herwig, Uwe; Kaffenberger, Tina; Schell, Caroline; Jäncke, Lutz; Brühl, Annette B

    2012-05-24

    Self-referential cognitions are important for self-monitoring and self-regulation. Previous studies have addressed the neural correlates of self-referential processes in response to, or related to, external stimuli. Here we investigated brain activity associated with a short, exclusively mental process of self-reflection in the absence of external stimuli or behavioural requirements. Healthy subjects reflected either on themselves, a personally known person, or an unknown person during functional magnetic resonance imaging (fMRI). The reflection period was initialized by a cue and followed by photographs of the respective persons (perception of pictures of oneself or the other person). Self-reflection, compared with reflecting on the other persons, and to a large part also compared with perceiving photographs of oneself, was associated with more prominent dorsomedial and lateral prefrontal, insular, and anterior and posterior cingulate activations. Whereas some of these areas showed activity in the "other" conditions as well, self-selective characteristics were revealed in the right dorsolateral prefrontal and posterior cingulate cortex for self-reflection; in the anterior cingulate cortex for self-perception; and in the left inferior parietal lobe for self-reflection and -perception. Altogether, cingulate, medial and lateral prefrontal, insular and inferior parietal regions show relevance for self-related cognitions, in part with self-specificity in terms of comparison with the known-, unknown- and perception-conditions. Notably, the results are obtained here without behavioural response, supporting the reliability of this methodological approach of applying a solely mental intervention. We suggest considering the reported structures when investigating psychopathologically affected self-related processing.

  6. The synaptic properties of cells define the hallmarks of interval timing in a recurrent neural network.

    Science.gov (United States)

    Pérez, Oswaldo; Merchant, Hugo

    2018-04-03

    Extensive research has described two key features of interval timing. The bias property is associated with accuracy and implies that time is overestimated for short intervals and underestimated for long intervals. The scalar property is linked to precision and states that the variability of interval estimates increases as a function of interval duration. The neural mechanisms behind these properties are not well understood. Here we implemented a recurrent neural network that mimics a cortical ensemble and includes cells that show paired-pulse facilitation and slow inhibitory synaptic currents. The network produces interval-selective responses and reproduces both the bias and scalar properties when a Bayesian decoder reads its activity. Notably, the interval selectivity, timing accuracy, and precision of the network showed complex changes as a function of the decay time constants of the modeled synaptic properties and the level of background activity of the cells. These findings suggest that physiological values of the time constants for paired-pulse facilitation and GABAb, as well as the internal state of the network, determine the bias and scalar properties of interval timing. Significance Statement: Timing is a fundamental element of complex behavior, including music and language. Temporal processing in a wide variety of contexts shows two primary features: time estimates exhibit a shift towards the mean (the bias property) and are more variable for longer intervals (the scalar property). We implemented a recurrent neural network that includes long-lasting synaptic currents, which can not only produce interval-selective responses but also follow the bias and scalar properties. Interestingly, only physiological values of the time constants for paired-pulse facilitation and GABAb, as well as intermediate background activity within the network, can reproduce the two key features of interval timing. Copyright © 2018 the authors.

  7. 75 FR 63878 - Self-Regulatory Organizations; Self-Regulatory Organizations; Notice of Filing and Immediate...

    Science.gov (United States)

    2010-10-18

    ...-Regulatory Organizations; Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of...(b)(1). \\2\\ 17 CFR 240.19b-4. I. Self-Regulatory Organization's Statement of the Terms of Substance... Public Reference Room. II. Self-Regulatory Organization's Statement of the Purpose of, and Statutory...

  8. Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks

    Science.gov (United States)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong

    2017-03-01

    Constrained by the physiology, the temporal factors associated with human behavior, irrespective of facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although they may benefit related recognition tasks, it is not easy to accurately detect such temporal segments. An automatic temporal segment detection framework using bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, which synthesizes the local and global temporal-spatial information more efficiently, is presented. The framework is evaluated in detail over the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for solving the problem of temporal segment detection.

  9. MACHINE LEARNING FOR THE SELF-ORGANIZATION OF DISTRIBUTED SYSTEMS IN ECONOMIC APPLICATIONS

    OpenAIRE

    Jerzy Balicki; Waldemar Korłub

    2017-01-01

    In this paper, an application of machine learning to the problem of self-organization of distributed systems has been discussed with regard to economic applications, with particular emphasis on supervised neural network learning to predict stock investments and some ratings of companies. In addition, genetic programming can play an important role in the preparation and testing of several financial information systems. For this reason, machine learning applications have been discussed because ...

  10. Actomyosin-based Self-organization of cell internalization during C. elegans gastrulation

    Directory of Open Access Journals (Sweden)

    Pohl Christian

    2012-11-01

    Full Text Available Abstract Background Gastrulation is a key transition in embryogenesis; it requires self-organized cellular coordination, which has to be both robust, to allow efficient development, and plastic, to provide adaptability. Despite the conservation of gastrulation as a key event in Metazoan embryogenesis, the morphogenetic mechanisms of self-organization (how global order or coordination can arise from local interactions) are poorly understood. Results We report a modular structure of cell internalization in Caenorhabditis elegans gastrulation that reveals mechanisms of self-organization. Cells that internalize during gastrulation show apical contractile flows, which are correlated with centripetal extensions from surrounding cells. These extensions converge to seal over the internalizing cells in the form of rosettes. This process represents a distinct mode of monolayer remodeling, with gradual extrusion of the internalizing cells and simultaneous tissue closure without an actin purse-string. We further report that this self-organizing module can adapt to severe topological alterations, providing evidence of the scalability and plasticity of actomyosin-based patterning. Finally, we show that globally, the surface cell layer undergoes coplanar division to thin out and spread over the internalizing mass, which resembles epiboly. Conclusions The combination of coplanar division-based spreading and recurrent local modules for piecemeal internalization constitutes a system-level solution of gradual volume rearrangement under spatial constraint. Our results suggest that the mode of C. elegans gastrulation can be unified with the general notions of monolayer remodeling and with distinct cellular mechanisms of actomyosin-based morphogenesis.

  11. Neural correlates of the processing of self-referent emotional information in bulimia nervosa.

    Science.gov (United States)

    Pringle, A; Ashworth, F; Harmer, C J; Norbury, R; Cooper, M J

    2011-10-01

    There is increasing interest in understanding the roles of distorted beliefs about the self, ostensibly unrelated to eating, weight and shape, in eating disorders (EDs), but little is known about their neural correlates. We therefore used functional magnetic resonance imaging to investigate the neural correlates of self-referent emotional processing in EDs. During the scan, unmedicated patients with bulimia nervosa (n=11) and healthy controls (n=16) responded to personality words previously found to be related to negative self beliefs in EDs and depression. Rating of the negative personality descriptors resulted in reduced activation in patients compared to controls in parietal, occipital and limbic areas including the amygdala. There was no evidence that reduced activity in patients was secondary to increased cognitive control. Different patterns of neural activation between patients and controls may be the result of either habituation to personally relevant negative self beliefs or of emotional blunting in patients. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. A spiking neural network model of self-organized pattern recognition in the early mammalian olfactory system

    Science.gov (United States)

    Kaplan, Bernhard A.; Lansner, Anders

    2014-01-01

    Olfactory sensory information passes through several processing stages before an odor percept emerges. The question of how the olfactory system learns to create odor representations linking those different levels and how it learns to connect and discriminate between them is largely unresolved. We present a large-scale network model with single and multi-compartmental Hodgkin–Huxley type model neurons representing olfactory receptor neurons (ORNs) in the epithelium, periglomerular cells, mitral/tufted cells and granule cells in the olfactory bulb (OB), and three types of cortical cells in the piriform cortex (PC). Odor patterns are calculated based on affinities between ORNs and odor stimuli derived from physico-chemical descriptors of behaviorally relevant real-world odorants. The properties of ORNs were tuned to show saturated response curves with increasing concentration as seen in experiments. On the level of the OB we explored the possibility of using a fuzzy concentration interval code, which was implemented through dendro-dendritic inhibition leading to winner-take-all like dynamics between mitral/tufted cells belonging to the same glomerulus. The connectivity from mitral/tufted cells to PC neurons was self-organized from a mutual information measure and by using a competitive Hebbian–Bayesian learning algorithm based on the response patterns of mitral/tufted cells to different odors yielding a distributed feed-forward projection to the PC. The PC was implemented as a modular attractor network with a recurrent connectivity that was likewise organized through Hebbian–Bayesian learning. We demonstrate the functionality of the model in a one-sniff-learning and recognition task on a set of 50 odorants. Furthermore, we study its robustness against noise on the receptor level and its ability to perform concentration invariant odor recognition. Moreover, we investigate the pattern completion capabilities of the system and rivalry dynamics for odor mixtures.

  13. An efficient automated parameter tuning framework for spiking neural networks.

    Science.gov (United States)

    Carlson, Kristofor D; Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L

    2014-01-01

    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EA) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation was carried out and showed a speedup of 65× of the GPU implementation over the CPU implementation, or 0.35 h per generation for GPU vs. 23.5 h per generation for CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
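    The tuning loop the abstract describes (evolve a population of parameter sets, score each by simulating the network, keep the best candidates) can be sketched in miniature. This is a hedged illustration only: the fitness function below is a hypothetical stand-in for a full SNN simulation, and none of the names or hyperparameters come from the authors' GPU framework.

```python
import random

def fitness(params, target=10.0):
    # Hypothetical stand-in for "simulate the SNN and score its responses":
    # here we just measure how close a toy readout gets to a target rate.
    a, b = params
    rate = 2.0 * a + b   # placeholder for a full network simulation
    return -abs(rate - target)

def evolve(pop_size=20, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 5.0), rng.uniform(0.0, 5.0)) for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 4]  # selection
        pop = list(elite)
        while len(pop) < pop_size:
            a, b = rng.choice(elite)                  # parent from the elite
            pop.append((a + rng.gauss(0.0, 0.2),      # Gaussian mutation
                        b + rng.gauss(0.0, 0.2)))
    return max(pop, key=fitness)

best = evolve()
```

In the paper's setting the expensive part is the fitness evaluation, which is exactly what the GPU parallelizes across the population.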

  14. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot.

    Science.gov (United States)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles.
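    The hysteresis effect that the sensory-processing network exploits can be illustrated with a toy discrete-time neuron carrying a single excitatory self-connection: the same current input yields different outputs depending on history. The weights and input pulses below are illustrative assumptions, not the values used in the robot controller.

```python
import math

def step(o_prev, x, w_self=2.0, w_in=1.0):
    # Discrete-time neuron with an excitatory self-connection; a strong
    # self-weight makes the unit bistable, i.e. it keeps a short-term
    # memory of past inputs (hysteresis).
    return math.tanh(w_self * o_prev + w_in * x)

def run(inputs, w_self=2.0):
    o, trace = 0.0, []
    for x in inputs:
        o = step(o, x, w_self)
        trace.append(o)
    return trace

# Same zero input at the end, but the output depends on the earlier pulse:
up = run([1.0] + [0.0] * 10)     # stays "on" after a positive pulse
down = run([-1.0] + [0.0] * 10)  # stays "off" after a negative pulse
```

This short-term memory is what lets a brief sensory event bias the robot's turning direction for many control cycles afterwards.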

  15. Using Long-Short-Term-Memory Recurrent Neural Networks to Predict Aviation Engine Vibrations

    Science.gov (United States)

    ElSaid, AbdElRahman Ahmed

    This thesis examines building viable Recurrent Neural Networks (RNN) using Long Short Term Memory (LSTM) neurons to predict aircraft engine vibrations. The different networks are trained on a large database of flight data records obtained from an airline containing flights that suffered from excessive vibration. RNNs can provide a more generalizable and robust method for prediction over analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, and this database contains multiple types of engines. Further, LSTM RNNs provide a "memory" of the contribution of previous time series data which can further improve predictions of future vibration values. LSTM RNNs were used over traditional RNNs, as those suffer from vanishing/exploding gradients when trained with back propagation. The study managed to predict vibration values for 1, 5, 10, and 20 seconds in the future, with 2.84%, 3.3%, 5.51% and 10.19% mean absolute error, respectively. These neural networks provide a promising means for the future development of warning systems so that suitable actions can be taken before the occurrence of excess vibration to avoid unfavorable situations during flight.
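    The "memory" mechanism the abstract credits to LSTMs can be sketched as a minimal single-unit LSTM forward pass: the gated cell state carries information from earlier time steps into later predictions. The weights and the input sequence below are arbitrary placeholders, not a trained vibration model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    # One step of a single-unit LSTM: the cell state c carries a gated
    # memory of earlier inputs, the property that helps predict future
    # values of a time series.
    i = sigmoid(W["wi"] * x + W["ui"] * h + W["bi"])    # input gate
    f = sigmoid(W["wf"] * x + W["uf"] * h + W["bf"])    # forget gate
    o = sigmoid(W["wo"] * x + W["uo"] * h + W["bo"])    # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h + W["bg"])  # candidate value
    c = f * c + i * g
    h = o * math.tanh(c)
    return h, c

# Arbitrary placeholder weights, not a trained vibration model.
W = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h = c = 0.0
for x in [0.1, 0.4, 0.35, 0.8]:   # a toy vibration sequence
    h, c = lstm_step(x, h, c, W)
```

The multiplicative gates are also why gradients through the cell state neither vanish nor explode as badly as in a plain RNN when trained with backpropagation through time.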

  16. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    Directory of Open Access Journals (Sweden)

    Eduard eGrinke

    2015-10-01

    Full Text Available Walking animals, like insects, with little neural computing can effectively perform complex behaviors. They can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations as well as escaping from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.

  17. Sensorless control for permanent magnet synchronous motor using a neural network based adaptive estimator

    Science.gov (United States)

    Kwon, Chung-Jin; Kim, Sung-Joong; Han, Woo-Young; Min, Won-Kyoung

    2005-12-01

    The rotor position and speed estimation of a permanent-magnet synchronous motor (PMSM) is dealt with. By measuring the phase voltages and currents of the PMSM drive, two diagonally recurrent neural network (DRNN) based observers, a neural current observer and a neural velocity observer, were developed. The DRNN, which has self-feedback of the hidden neurons, ensures that the outputs of the DRNN contain the whole past information of the system even if the inputs of the DRNN are only the present states and inputs of the system. Thus the structure of the DRNN may be simpler than that of feedforward and fully recurrent neural networks. If the backpropagation method is used for the training of the DRNN, the problem of slow convergence arises. In order to reduce this problem, a recursive prediction error (RPE) based learning method for the DRNN is presented. The simulation results show that the proposed approach gives a good estimation of rotor speed and position, and that RPE based training requires a shorter computation time compared to backpropagation based training.
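    The structural simplification claimed for DRNNs, self-feedback only in the hidden layer, can be sketched as a forward pass in which the recurrent term is a per-unit scalar rather than a full weight matrix. The layer sizes and weights below are illustrative assumptions, not the observer design from the paper.

```python
import math

def drnn_forward(xs, w_in, w_rec, w_out):
    # Diagonal recurrence: each hidden unit feeds back only onto itself
    # (w_rec is a vector, not a full matrix), so the layer is simpler than
    # a fully recurrent one while still carrying past state.
    n = len(w_rec)
    h = [0.0] * n
    ys = []
    for x in xs:
        h = [math.tanh(w_in[j] * x + w_rec[j] * h[j]) for j in range(n)]
        ys.append(sum(w_out[j] * h[j] for j in range(n)))
    return ys

# An input pulse followed by silence: the output decays gradually instead
# of vanishing immediately, because the self-feedback retains the pulse.
ys = drnn_forward([1.0, 0.0, 0.0],
                  w_in=[1.0, 0.5], w_rec=[0.9, 0.8], w_out=[1.0, 1.0])
```

The decaying response after the input is removed is the "whole past information" property the abstract refers to: present outputs still reflect past inputs.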

  18. Delayed development of neural language organization in very preterm born children.

    Science.gov (United States)

    Mürner-Lavanchy, Ines; Steinlin, Maja; Kiefer, Claus; Weisstanner, Christian; Ritter, Barbara Catherine; Perrig, Walter; Everts, Regula

    2014-01-01

    This study investigates neural language organization in very preterm born children compared to control children and examines the relationship between language organization, age, and language performance. Fifty-six preterms and 38 controls (7-12 y) completed a functional magnetic resonance imaging language task. Lateralization and signal change were computed for language-relevant brain regions. Younger preterms showed a bilateral language network whereas older preterms revealed left-sided language organization. No age-related differences in language organization were observed in controls. Results indicate that preterms maintain atypical bilateral language organization longer than term born controls. This might reflect a delay of neural language organization due to very premature birth.

  19. New Angle on the Parton Distribution Functions: Self-Organizing Maps

    International Nuclear Information System (INIS)

    Honkanen, H.; Liuti, S.

    2009-01-01

    Neural network (NN) algorithms have been recently applied to construct Parton Distribution Function (PDF) parametrizations, providing an alternative to standard global fitting procedures. Here we explore a novel technique using Self-Organizing Maps (SOMs). SOMs are a class of clustering algorithms based on competitive learning among spatially-ordered neurons. We train our SOMs with stochastically generated PDF samples. On every optimization iteration the PDFs are clustered on the SOM according to a user-defined feature and the most promising candidates are used as a seed for the subsequent iteration using the topology of the map to guide the PDF generating process. Our goal is a fitting procedure that, at variance with the standard neural network approaches, will allow for an increased control of the systematic bias by enabling user interaction in the various stages of the process.
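    The competitive, spatially-ordered learning that SOMs provide can be conveyed with a minimal 1-D Kohonen map. This is a generic sketch of the SOM algorithm itself, not the authors' PDF-fitting procedure: the scalar "feature" samples and all hyperparameters below are hypothetical.

```python
import math
import random

def train_som(data, n_units=10, epochs=30, seed=0):
    # 1-D Kohonen map: units compete for each sample; the best-matching
    # unit (BMU) and its map neighbours move toward the sample, so nearby
    # units end up representing similar inputs (spatial ordering).
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_units)]
    for epoch in range(epochs):
        lr = 0.5 * (1.0 - epoch / epochs)                          # decaying learning rate
        radius = max(1.0, (n_units / 2) * (1.0 - epoch / epochs))  # shrinking neighbourhood
        for x in data:
            bmu = min(range(n_units), key=lambda j: abs(w[j] - x))
            for j in range(n_units):
                theta = math.exp(-((j - bmu) ** 2) / (2 * radius ** 2))
                w[j] += lr * theta * (x - w[j])
    return w

# Two well-separated clusters of scalar "feature" samples.
data = [0.1, 0.12, 0.08, 0.9, 0.88, 0.92]
weights = train_som(data)
```

After training, different regions of the map specialize on different clusters; in the PDF-fitting application, the user-defined feature plays the role of the distance used to find the best-matching unit.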

  20. Recurrent subcutaneous emphysema of the face: a challenging clinical problem.

    Science.gov (United States)

    Hojjati, Hossein; Davani, Sam Zeraatian Nejad; Johari, Hamed Ghoddusi

    2007-01-01

    In the neck or face, there are different causes for subcutaneous emphysema such as injury to the sinuses, the hypopharynx, the laryngotracheal complex, the pulmonary parenchyma, the esophagus or the presence of gas-forming organisms. However, factitious subcutaneous emphysema, a rare cause, must be considered in the differential diagnosis. In this clinical report, we discuss a 20-year-old girl who was under follow-up because of recurrent subcutaneous emphysema of the face and periorbital area. After 2 years of work-ups, including a period of close observation in the intensive care unit, self-injection of air by syringe was found to be the cause of recurrent subcutaneous emphysema of the face, and the patient was labeled as having factitious recurrent subcutaneous emphysema. Therefore, when a patient presents with unexplained recurrent subcutaneous emphysema, one should suspect self-infliction and examine for puncture marks.

  1. Discriminative training of self-structuring hidden control neural models

    DEFF Research Database (Denmark)

    Sørensen, Helge Bjarup Dissing; Hartmann, Uwe; Hunnerup, Preben

    1995-01-01

    This paper presents a new training algorithm for self-structuring hidden control neural (SHC) models. The SHC models were trained non-discriminatively for speech recognition applications. Better recognition performance can generally be achieved, if discriminative training is applied instead. Thus...... we developed a discriminative training algorithm for SHC models, where each SHC model for a specific speech pattern is trained with utterances of the pattern to be recognized and with other utterances. The discriminative training of SHC neural models has been tested on the TIDIGITS database...

  2. A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Song, Qiankun; Wang, Zidong

    2007-01-01

    In this Letter, the analysis problem for the existence and stability of periodic solutions is investigated for a class of general discrete-time recurrent neural networks with time-varying delays. For the neural networks under study, a generalized activation function is considered, and the traditional assumptions on the boundedness, monotony and differentiability of the activation functions are removed. By employing the latest free-weighting matrix method, an appropriate Lyapunov-Krasovskii functional is constructed and several sufficient conditions are established to ensure the existence, uniqueness, and globally exponential stability of the periodic solution for the addressed neural network. The conditions are dependent on both the lower bound and upper bound of the time-varying time delays. Furthermore, the conditions are expressed in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Two simulation examples are given to show the effectiveness and reduced conservatism of the proposed criteria.

  3. Macromolecular target prediction by self-organizing feature maps.

    Science.gov (United States)

    Schneider, Gisbert; Schneider, Petra

    2017-03-01

    Rational drug discovery would greatly benefit from a more nuanced appreciation of the activity of pharmacologically active compounds against a diverse panel of macromolecular targets. Already, computational target-prediction models assist medicinal chemists in library screening, de novo molecular design, optimization of active chemical agents, drug re-purposing, in the spotting of potential undesired off-target activities, and in the 'de-orphaning' of phenotypic screening hits. The self-organizing map (SOM) algorithm has been employed successfully for these and other purposes. Areas covered: The authors recapitulate contemporary artificial neural network methods for macromolecular target prediction, and present the basic SOM algorithm at a conceptual level. Specifically, they highlight consensus target-scoring by the employment of multiple SOMs, and discuss the opportunities and limitations of this technique. Expert opinion: Self-organizing feature maps represent a straightforward approach to ligand clustering and classification. Some of the appeal lies in their conceptual simplicity and broad applicability domain. Despite known algorithmic shortcomings, this computational target prediction concept has been proven to work in prospective settings with high success rates. It represents a prototypic technique for future advances in the in silico identification of the modes of action and macromolecular targets of bioactive molecules.

  4. Design of a heart rate controller for treadmill exercise using a recurrent fuzzy neural network.

    Science.gov (United States)

    Lu, Chun-Hao; Wang, Wei-Cheng; Tai, Cheng-Chi; Chen, Tien-Chi

    2016-05-01

    In this study, we developed a computer controlled treadmill system using a recurrent fuzzy neural network heart rate controller (RFNNHRC). Treadmill speeds and inclines were controlled by corresponding control servo motors. The RFNNHRC was used to generate the control signals to automatically control treadmill speed and incline to minimize the user heart rate deviations from a preset profile. The RFNNHRC combines a fuzzy reasoning capability to accommodate uncertain information and an artificial recurrent neural network learning process that corrects for treadmill system nonlinearities and uncertainties. Treadmill speeds and inclines are controlled by the RFNNHRC to achieve minimal heart rate deviation from a pre-set profile using adjustable parameters and an on-line learning algorithm that provides robust performance against parameter variations. The on-line learning algorithm of RFNNHRC was developed and implemented using a dsPIC 30F4011 DSP. Application of the proposed control scheme to heart rate responses of runners resulted in smaller fluctuations than those produced by using proportional-integral control, and treadmill speeds and inclines were smoother. The present experiments demonstrate improved heart rate tracking performance with the proposed control scheme. The RFNNHRC scheme with adjustable parameters and an on-line learning algorithm was applied to a computer controlled treadmill system with heart rate control during treadmill exercise. Novel RFNNHRC structure and controller stability analyses were introduced. The RFNNHRC were tuned using a Lyapunov function to ensure system stability. The superior heart rate control with the proposed RFNNHRC scheme was demonstrated with various pre-set heart rates. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Interpretation of Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Pedersen, Morten With; Larsen, Jan

    1997-01-01

    This paper addresses techniques for interpretation and characterization of trained recurrent nets for time series problems. In particular, we focus on assessment of effective memory and suggest an operational definition of memory. Further we discuss the evaluation of learning curves. Various nume...

  6. Where's the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network.

    Directory of Open Access Journals (Sweden)

    Christoph Hartmann

    2015-12-01

    Full Text Available Even in the absence of sensory stimulation the brain is spontaneously active. This background "noise" seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural
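    The interaction of STDP and homeostatic plasticity in a SORN-like network of threshold units can be sketched with a toy model. This follows the general recipe only (discrete STDP, synaptic normalization, intrinsic plasticity on thresholds); the network size, learning rates, and update details below are simplified assumptions, not the published SORN model.

```python
import random

def sorn_update(prev, cur, w, thr, eta_stdp=0.004, eta_ip=0.01, target=0.1):
    # Three local plasticity rules applied together after each network step:
    # discrete STDP on the recurrent weights, synaptic normalization keeping
    # each unit's incoming weights summing to one, and intrinsic plasticity
    # nudging each threshold so the unit approaches a target firing rate.
    n = len(w)
    for i in range(n):
        for j in range(n):
            if i != j:
                w[i][j] += eta_stdp * (cur[i] * prev[j] - prev[i] * cur[j])
                w[i][j] = max(0.0, w[i][j])
        s = sum(w[i])
        if s > 0:
            w[i] = [wij / s for wij in w[i]]   # synaptic normalization
        thr[i] += eta_ip * (cur[i] - target)   # intrinsic plasticity

def run(n=20, steps=2000, seed=0):
    rng = random.Random(seed)
    w = [[rng.random() if i != j else 0.0 for j in range(n)] for i in range(n)]
    thr = [rng.uniform(0.4, 0.6) for _ in range(n)]
    prev = [1 if rng.random() < 0.5 else 0 for _ in range(n)]
    rates = []
    for _ in range(steps):
        drive = [sum(w[i][j] * prev[j] for j in range(n)) for i in range(n)]
        cur = [1 if drive[i] > thr[i] else 0 for i in range(n)]
        sorn_update(prev, cur, w, thr)
        rates.append(sum(cur) / n)
        prev = cur
    return rates, thr

rates, thr = run()
```

Even though every update above is deterministic given the seed, the resulting activity is irregular, illustrating how structured "noise-like" dynamics can emerge from plasticity alone.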

  7. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    Science.gov (United States)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods are being rapidly incorporated in image-based breast cancer diagnosis and prognosis. However, most of the current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric in conducting that task. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on a large dataset of natural images (ImageNet). The features are obtained from various levels of the network, to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time-point (yielding an AUC = 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).

  8. Development of a Real-Time Thermal Performance Diagnostic Monitoring system Using Self-Organizing Neural Network for Kori-2 Nuclear Power Unit

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Seong, Poong Hyun

    1996-01-01

    In this work, a PC-based thermal performance monitoring system is developed for nuclear power plants. The system performs real-time thermal performance monitoring and diagnosis during plant operation. Specifically, a prototype for the Kori-2 nuclear power unit is developed and examined. Thermal performance diagnosis is very difficult because the system structure is highly complex and the components are very much inter-related. In this study, some major diagnostic performance parameters are selected in order to represent the thermal cycle effectively and to reduce the computing time. The Fuzzy ARTMAP, a self-organizing neural network, is used to recognize the characteristic pattern change of the performance parameters in abnormal situations. By examination, the algorithm is shown to be able to detect abnormality and to identify the fault component or the change of system operation condition successfully. For the convenience of operators, a graphical user interface is also constructed in this work. 5 figs., 3 tabs., 11 refs. (Author)

  9. Application of recurrent neural networks for drought projections in California

    Science.gov (United States)

    Le, J. A.; El-Askary, H. M.; Allali, M.; Struppa, D. C.

    2017-05-01

    We use recurrent neural networks (RNNs) to investigate the complex interactions between the long-term trend in dryness and a projected, short but intense, period of wetness due to the 2015-2016 El Niño. Although it was forecasted that this El Niño season would bring significant rainfall to the region, our long-term projections of the Palmer Z Index (PZI) showed a continuing drought trend, contrasting with the 1998-1999 El Niño event. RNN training considered PZI data during 1896-2006 that was validated against the 2006-2015 period to evaluate the potential of extreme precipitation forecast. We achieved a statistically significant correlation of 0.610 between forecasted and observed PZI on the validation set for a lead time of 1 month. This gives strong confidence to the forecasted precipitation indicator. The 2015-2016 El Niño season proved to be relatively weak as compared with the 1997-1998, with a peak PZI anomaly of 0.242 standard deviations below historical averages, continuing drought conditions.

  10. Applications of self-organizing neural networks in virtual screening and diversity selection.

    Science.gov (United States)

    Selzer, Paul; Ertl, Peter

    2006-01-01

    Artificial neural networks provide a powerful technique for the analysis and modeling of nonlinear relationships between molecular structures and pharmacological activity. Many network types, including Kohonen and counterpropagation, also provide an intuitive method for the visual assessment of correspondence between the input and output data. This work shows how a combination of neural networks and radial distribution function molecular descriptors can be applied in various areas of industrial pharmaceutical research. These applications include the prediction of biological activity, the selection of screening candidates (cherry picking), and the extraction of representative subsets from large compound collections such as combinatorial libraries. The methods described have also been implemented as an easy-to-use Web tool, allowing chemists to perform interactive neural network experiments on the Novartis intranet.

  11. An adaptive PID like controller using mix locally recurrent neural network for robotic manipulator with variable payload.

    Science.gov (United States)

    Sharma, Richa; Kumar, Vikas; Gaur, Prerna; Mittal, A P

    2016-05-01

    Being a complex, non-linear and coupled system, the robotic manipulator cannot be effectively controlled using a classical proportional-integral-derivative (PID) controller. To enhance the effectiveness of the conventional PID controller for nonlinear and uncertain systems, gains of the PID controller should be conservatively tuned and should adapt to the process parameter variations. In this work, a mix locally recurrent neural network (MLRNN) architecture is investigated to mimic a conventional PID controller; it consists of at most three hidden nodes which act as proportional, integral and derivative nodes. The gains of the mix locally recurrent neural network based PID (MLRNNPID) controller scheme are initialized with a newly developed cuckoo search algorithm (CSA) based optimization method rather than being assigned randomly. A sequential learning based least square algorithm is then investigated for the on-line adaptation of the gains of the MLRNNPID controller. The performance of the proposed controller scheme is tested against the plant parameter uncertainties and external disturbances for both links of the two link robotic manipulator with variable payload (TL-RMWVP). The stability of the proposed controller is analyzed using Lyapunov stability criteria. A performance comparison is carried out among the MLRNNPID controller, CSA optimized NNPID (OPTNNPID) controller and CSA optimized conventional PID (OPTPID) controller in order to establish the effectiveness of the MLRNNPID controller. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
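    The core idea of a PID-mimicking network, three nodes carrying the proportional, integral and derivative signals whose output weights act as adaptable gains, can be sketched on a toy plant. The first-order plant, the initial gains, and the gradient-style update below are illustrative assumptions, not the paper's MLRNNPID design or its least-square adaptation.

```python
def neural_pid(setpoint, steps=200, kp=0.5, ki=0.1, kd=0.05, lr=1e-4):
    # Three "nodes" carry the proportional, integral and derivative signals;
    # their output weights play the role of PID gains and are nudged online
    # by a crude gradient-style step on the squared tracking error.
    y, integ, e_prev = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        integ += e
        deriv = e - e_prev
        u = kp * e + ki * integ + kd * deriv   # controller output
        y = 0.9 * y + 0.1 * u                  # toy first-order plant
        kp += lr * e * e                       # online gain adaptation
        ki += lr * e * integ
        kd += lr * e * deriv
        e_prev = e
    return y, (kp, ki, kd)

y_final, gains = neural_pid(1.0)
```

The appeal of the neural formulation is that the gains are no longer fixed design constants: they track plant parameter variations online, which is what the paper's sequential least-square adaptation achieves more rigorously.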

  12. The Neural Correlates Underlying Belief Reasoning for Self and for Others: Evidence from ERPs.

    Science.gov (United States)

    Jiang, Qin; Wang, Qi; Li, Peng; Li, Hong

    2016-01-01

    Belief reasoning is typical mental state reasoning in theory of mind (ToM). Although previous studies have explored the neural bases of belief reasoning, the neural correlates of belief reasoning for self and for others are rarely addressed. The decoupling mechanism of distinguishing the mental state of others from one's own is essential for ToM processing. To address the electrophysiological bases underlying the decoupling mechanism, the present event-related potential study compared the time course of neural activities associated with belief reasoning for self and for others when the belief belonging to self was consistent or inconsistent with others. Results showed that during a 450-600 ms period, belief reasoning for self elicited a larger late positive component (LPC) than for others when beliefs were inconsistent with each other. The LPC divergence is assumed to reflect the categorization of agencies in ToM processes.

  13. Neural Networks for Self-tuning Control Systems

    Directory of Open Access Journals (Sweden)

    A. Noriega Ponce

    2004-01-01

    Full Text Available In this paper, we present a self-tuning control algorithm based on a three-layer perceptron-type neural network. The proposed algorithm is advantageous in that prior training of the net is practically unnecessary, and a few changes in the set-point are generally enough to adjust the learning coefficient. Optionally, it is possible to introduce a self-tuning mechanism for the learning coefficient, although at the moment it is not possible to draw final conclusions about this possibility. The proposed algorithm has the special feature that the regulation error, instead of the net output error, is backpropagated for the modification of the weighting coefficients.

  14. USING STROKE-BASED OR CHARACTER-BASED SELF-ORGANIZING MAPS IN THE RECOGNITION OF ONLINE, CONNECTED CURSIVE SCRIPT

    NARCIS (Netherlands)

    SCHOMAKER, L

    Comparisons are made between a number of stroke-based and character-based recognizers of connected cursive script. In both approaches a Kohonen self-organizing neural network is used as a feature-vector quantizer. It is found that a "best match only" character-based recognizer performs better than

  15. Ferritin nanoparticles for improved self-renewal and differentiation of human neural stem cells.

    Science.gov (United States)

    Lee, Jung Seung; Yang, Kisuk; Cho, Ann-Na; Cho, Seung-Woo

    2018-01-01

    Biomaterials that promote the self-renewal ability and differentiation capacity of neural stem cells (NSCs) are desirable for improving stem cell therapy to treat neurodegenerative diseases. Incorporation of micro- and nanoparticles into stem cell culture has gained great attention for the control of stem cell behaviors, including proliferation and differentiation. In this study, ferritin, an iron-containing natural protein nanoparticle, was applied as a biomaterial to improve the self-renewal and differentiation of NSCs and neural progenitor cells (NPCs). Ferritin nanoparticles were added to NSC or NPC culture during cell growth, allowing for incorporation of ferritin nanoparticles during neurosphere formation. Compared to neurospheres without ferritin treatment, neurospheres with ferritin nanoparticles showed significantly promoted self-renewal and cell-cell interactions. When spontaneous differentiation of neurospheres was induced during culture without mitogenic factors, neuronal differentiation was enhanced in the ferritin-treated neurospheres. In conclusion, we found that natural nanoparticles can be used to improve the self-renewal ability and differentiation potential of NSCs and NPCs, which can be applied in neural tissue engineering and cell therapy for neurodegenerative diseases.

  16. Self-expandable metallic stents for patients with recurrent esophageal carcinoma after failure of primary chemoradiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Muto, Manabu; Ohtsu, Atsushi; Boku, Narikazu; Yoshida, Shigeaki [National Cancer Center, Kashiwa, Chiba (Japan). Hospital East; Miyata, Yoshinori; Shioyama, Yasukazu

    2001-06-01

    Recent advances in chemoradiotherapy for esophageal carcinoma have resulted in improved survival rates. However, there are few options for recurrent dysphagia due to refractory carcinoma after failure of primary chemoradiotherapy. The aim of this study was to evaluate the safety and efficacy of self-expandable metallic stent placement for patients with recurrent esophageal carcinoma where definitive chemoradiotherapy has failed. Thirteen consecutive patients with recurrent squamous cell carcinoma of the esophagus, in whom self-expandable metallic stents were placed after failure of primary chemoradiotherapy, were studied retrospectively. All patients had esophageal obstruction or malignant fistula. The oral alimentation status of nine of 13 patients (69%) improved after successful placement of the stent. Following placement of the stent, fever (>38 deg C) and severe chest pain occurred in 85% (11/13) of the patients. In all patients examined, C-reactive protein was elevated within 1 week of the operation. Esophageal perforation occurred in three patients. Stent-related mediastinitis and pneumonia developed in six (46%) and three (23%) patients, respectively. Seven of the 13 patients (54%) died of stent-related pulmonary complications. Although the placement of a self-expandable metallic stent for patients with recurrent esophageal carcinoma after failure of chemoradiotherapy improved their oral alimentation status, we found that this treatment increases the risk of life-threatening pulmonary complications. (author)

  17. Self-expandable metallic stents for patients with recurrent esophageal carcinoma after failure of primary chemoradiotherapy

    International Nuclear Information System (INIS)

    Muto, Manabu; Ohtsu, Atsushi; Boku, Narikazu; Yoshida, Shigeaki; Miyata, Yoshinori; Shioyama, Yasukazu

    2001-01-01

    Recent advances in chemoradiotherapy for esophageal carcinoma have resulted in improved survival rates. However, there are few options for recurrent dysphagia due to refractory carcinoma after failure of primary chemoradiotherapy. The aim of this study was to evaluate the safety and efficacy of self-expandable metallic stent placement for patients with recurrent esophageal carcinoma where definitive chemoradiotherapy has failed. Thirteen consecutive patients with recurrent squamous cell carcinoma of the esophagus, in whom self-expandable metallic stents were placed after failure of primary chemoradiotherapy, were studied retrospectively. All patients had esophageal obstruction or malignant fistula. The oral alimentation status of nine of 13 patients (69%) improved after successful placement of the stent. Following placement of the stent, fever (>38 deg C) and severe chest pain occurred in 85% (11/13) of the patients. In all patients examined, C-reactive protein was elevated within 1 week of the operation. Esophageal perforation occurred in three patients. Stent-related mediastinitis and pneumonia developed in six (46%) and three (23%) patients, respectively. Seven of the 13 patients (54%) died of stent-related pulmonary complications. Although the placement of a self-expandable metallic stent for patients with recurrent esophageal carcinoma after failure of chemoradiotherapy improved their oral alimentation status, we found that this treatment increases the risk of life-threatening pulmonary complications. (author)

  18. Wind Turbine Driving a PM Synchronous Generator Using Novel Recurrent Chebyshev Neural Network Control with the Ideal Learning Rate

    Directory of Open Access Journals (Sweden)

    Chih-Hong Lin

    2016-06-01

    Full Text Available A permanent magnet (PM) synchronous generator system driven by a wind turbine (WT), connected with a smart grid via an AC-DC converter and a DC-AC converter, is controlled by a novel recurrent Chebyshev neural network (NN) and amended particle swarm optimization (PSO) to regulate the output power and output voltage of the two power converters in this study. Because a PM synchronous generator system driven by a WT is an unknown nonlinear and time-varying dynamic system, an on-line trained novel recurrent Chebyshev NN control system is developed to regulate the DC voltage of the AC-DC converter and the AC voltage of the DC-AC converter connected with the smart grid. Furthermore, the variable learning rate of the novel recurrent Chebyshev NN is regulated according to a discrete-type Lyapunov function to improve the control performance and enhance convergence speed. Finally, some experimental results are shown to verify the effectiveness of the proposed control method for a WT driving a PM synchronous generator system in a smart grid.

  19. Germ layers, the neural crest and emergent organization in development and evolution.

    Science.gov (United States)

    Hall, Brian K

    2018-04-10

    Discovered in chick embryos by Wilhelm His in 1868 and named the neural crest by Arthur Milnes Marshall in 1879, the neural crest cells that arise from the neural folds have since been shown to differentiate into almost two dozen vertebrate cell types and to have played major roles in the evolution of such vertebrate features as bone, jaws, teeth, visceral (pharyngeal) arches, and sense organs. I discuss the discovery that ectodermal neural crest gave rise to mesenchyme and the controversy generated by that finding; the germ layer theory maintained that only mesoderm could give rise to mesenchyme. A second topic of discussion is germ layers (including the neural crest) as emergent levels of organization in animal development and evolution that facilitated major developmental and evolutionary change. The third topic is gene networks, gene co-option, and the evolution of gene-signaling pathways as key to developmental and evolutionary transitions associated with the origin and evolution of the neural crest and neural crest cells. © 2018 Wiley Periodicals, Inc.

  20. LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution.

    Science.gov (United States)

    Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu

    2018-09-01

    The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.

  1. Use of Recurrent Neural Networks for Strategic Data Mining of Sales

    OpenAIRE

    Vadhavkar, Sanjeev; Shanmugasundaram, Jayavel; Gupta, Amar; Prasad, M.V. Nagendra

    2002-01-01

    An increasing number of organizations are involved in the development of strategic information systems for effective linkages with their suppliers, customers, and other channel partners involved in transportation, distribution, warehousing and maintenance activities. An efficient inter-organizational inventory management system based on data mining techniques is a significant step in this direction. This paper discusses the use of neural network based data mining and knowledge discovery techn...

  2. Neural network recognition of mammographic lesions

    International Nuclear Information System (INIS)

    Oldham, W.J.B.; Downes, P.T.; Hunter, V.

    1987-01-01

    A method for the recognition of mammographic lesions through the use of neural networks is presented. Neural networks have exhibited the ability to learn the shape and internal structure of patterns. Digitized mammograms containing circumscribed and stellate lesions were used to train a feedforward synchronous neural network that self-organizes to stable attractor states. Encoding of data for submission to the network was accomplished by performing a fractal analysis of the digitized image. This results in a scale-invariant representation of the lesions. Results are discussed.

  3. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks based on gradient descent has the significant drawback of slow convergence. A Gauss-Newton-based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper.
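An RLS-type parameter update of the kind mentioned above can be sketched for a generic linear-in-parameters model; the function name, forgetting factor, and toy regression below are assumptions, not the paper's dynamic-error-backpropagation algorithm:

```python
import numpy as np

def rls_update(w, P, x, y, lam=0.99):
    """One recursive least squares step for weight vector w, covariance P,
    regressor x, and target y; lam is the forgetting factor.
    (Illustrative sketch of an RLS-type update, not the paper's algorithm.)"""
    Px = P @ x
    k = Px / (lam + x @ Px)           # Kalman-style gain vector
    e = y - w @ x                     # a priori prediction error
    w = w + k * e                     # weight update
    P = (P - np.outer(k, Px)) / lam   # covariance update
    return w, P

# recover y = 2*x0 - 1*x1 from streaming noiseless samples
rng = np.random.default_rng(0)
w, P = np.zeros(2), np.eye(2) * 1e3
for _ in range(200):
    x = rng.normal(size=2)
    y = 2.0 * x[0] - 1.0 * x[1]
    w, P = rls_update(w, P, x, y)
```

For a recurrent network, x would be replaced by the gradient of the network output with respect to the weights, which is where the dynamic error backpropagation enters.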

  4. Spatial Clockwork Recurrent Neural Network for Muscle Perimysium Segmentation.

    Science.gov (United States)

    Xie, Yuanpu; Zhang, Zizhao; Sapkota, Manish; Yang, Lin

    2016-10-01

    Accurate segmentation of the perimysium plays an important role in the early diagnosis of many muscle diseases, because many of these diseases involve inflammation of the perimysium. However, it remains a challenging task due to the complex appearance of the perimysium morphology and its ambiguity with respect to the background area. The muscle perimysium also exhibits strong structure spanning the entire tissue, which makes it difficult for current local patch-based methods to capture this long-range context information. In this paper, we propose a novel spatial clockwork recurrent neural network (spatial CW-RNN) to address those issues. Specifically, we split the entire image into a set of non-overlapping image patches, and the semantic dependencies among them are modeled by the proposed spatial CW-RNN. Our method directly takes the 2D structure of the image into consideration and is capable of encoding the context information of the entire image into the local representation of each patch. Meanwhile, we leverage structured regression to assign a prediction mask rather than a single class label to each local patch, which enables both efficient training and testing. We extensively test our method for perimysium segmentation using digitized muscle microscopy images. Experimental results demonstrate the superiority of the novel spatial CW-RNN over existing state-of-the-art methods.
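The first step the abstract describes, splitting the image into a grid of non-overlapping patches before the spatial CW-RNN models their dependencies, might look like the following sketch (the function name and the edge-cropping rule are assumptions):

```python
import numpy as np

def split_patches(img, ph, pw):
    """Split a 2-D image into a grid of non-overlapping ph x pw patches,
    as done before feeding a spatial clockwork RNN. Edges that do not fit
    a whole patch are cropped (an assumed convention)."""
    H, W = img.shape
    gh, gw = H // ph, W // pw                     # grid dimensions
    img = img[: gh * ph, : gw * pw]               # crop ragged edges
    patches = img.reshape(gh, ph, gw, pw).swapaxes(1, 2)
    return patches                                # shape (gh, gw, ph, pw)

img = np.arange(36).reshape(6, 6)
grid = split_patches(img, 3, 3)                   # 2x2 grid of 3x3 patches
```

The CW-RNN would then sweep over `grid` so that each patch's hidden state depends on its spatial predecessors, giving every local representation access to image-wide context.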

  5. A review of organic and inorganic biomaterials for neural interfaces.

    Science.gov (United States)

    Fattahi, Pouria; Yang, Guang; Kim, Gloria; Abidian, Mohammad Reza

    2014-03-26

    Recent advances in nanotechnology have generated wide interest in applying nanomaterials to neural prostheses. An ideal neural interface should integrate seamlessly into the nervous system and perform reliably for long periods of time. As a result, many nanoscale materials not originally developed for neural interfaces have become attractive candidates to detect neural signals and stimulate neurons. In this comprehensive review, an overview of state-of-the-art microelectrode technologies is provided first, with a focus on the material properties of these microdevices. The advancements in electroactive nanomaterials are then reviewed, including conducting polymers, carbon nanotubes, graphene, silicon nanowires, and hybrid organic-inorganic nanomaterials, for neural recording, stimulation, and growth. Finally, technical and scientific challenges faced by these nanomaterials are discussed regarding biocompatibility, mechanical mismatch, and electrical properties for the development of long-lasting functional neural interfaces.

  6. Orphan nuclear receptor TLX activates Wnt/β-catenin signalling to stimulate neural stem cell proliferation and self-renewal

    Science.gov (United States)

    Qu, Qiuhao; Sun, Guoqiang; Li, Wenwu; Yang, Su; Ye, Peng; Zhao, Chunnian; Yu, Ruth T.; Gage, Fred H.; Evans, Ronald M.; Shi, Yanhong

    2010-01-01

    The nuclear receptor TLX (also known as NR2E1) is essential for adult neural stem cell self-renewal; however, the molecular mechanisms involved remain elusive. Here we show that TLX activates the canonical Wnt/β-catenin pathway in adult mouse neural stem cells. Furthermore, we demonstrate that Wnt/β-catenin signalling is important in the proliferation and self-renewal of adult neural stem cells in the presence of epidermal growth factor and fibroblast growth factor. Wnt7a and active β-catenin promote neural stem cell self-renewal, whereas the deletion of Wnt7a or the lentiviral transduction of axin, a β-catenin inhibitor, led to decreased cell proliferation in adult neurogenic areas. Lentiviral transduction of active β-catenin led to increased numbers of type B neural stem cells in the subventricular zone of adult brains, whereas deletion of Wnt7a or TLX resulted in decreased numbers of neural stem cells retaining bromodeoxyuridine label in the adult brain. Both Wnt7a and active β-catenin significantly rescued a TLX (also known as Nr2e1) short interfering RNA-induced deficiency in neural stem cell proliferation. Lentiviral transduction of an active β-catenin increased cell proliferation in neurogenic areas of TLX-null adult brains markedly. These results strongly support the hypothesis that TLX acts through the Wnt/β-catenin pathway to regulate neural stem cell proliferation and self-renewal. Moreover, this study suggests that neural stem cells can promote their own self-renewal by secreting signalling molecules that act in an autocrine/paracrine mode. PMID:20010817

  7. Orphan nuclear receptor TLX activates Wnt/beta-catenin signalling to stimulate neural stem cell proliferation and self-renewal.

    Science.gov (United States)

    Qu, Qiuhao; Sun, Guoqiang; Li, Wenwu; Yang, Su; Ye, Peng; Zhao, Chunnian; Yu, Ruth T; Gage, Fred H; Evans, Ronald M; Shi, Yanhong

    2010-01-01

    The nuclear receptor TLX (also known as NR2E1) is essential for adult neural stem cell self-renewal; however, the molecular mechanisms involved remain elusive. Here we show that TLX activates the canonical Wnt/beta-catenin pathway in adult mouse neural stem cells. Furthermore, we demonstrate that Wnt/beta-catenin signalling is important in the proliferation and self-renewal of adult neural stem cells in the presence of epidermal growth factor and fibroblast growth factor. Wnt7a and active beta-catenin promote neural stem cell self-renewal, whereas the deletion of Wnt7a or the lentiviral transduction of axin, a beta-catenin inhibitor, led to decreased cell proliferation in adult neurogenic areas. Lentiviral transduction of active beta-catenin led to increased numbers of type B neural stem cells in the subventricular zone of adult brains, whereas deletion of Wnt7a or TLX resulted in decreased numbers of neural stem cells retaining bromodeoxyuridine label in the adult brain. Both Wnt7a and active beta-catenin significantly rescued a TLX (also known as Nr2e1) short interfering RNA-induced deficiency in neural stem cell proliferation. Lentiviral transduction of an active beta-catenin increased cell proliferation in neurogenic areas of TLX-null adult brains markedly. These results strongly support the hypothesis that TLX acts through the Wnt/beta-catenin pathway to regulate neural stem cell proliferation and self-renewal. Moreover, this study suggests that neural stem cells can promote their own self-renewal by secreting signalling molecules that act in an autocrine/paracrine mode.

  8. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks.

    Science.gov (United States)

    Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made those former results hardly reproducible. Further, we extend those previous experiments by modeling unseen languages (out-of-set, OOS, modeling), which is crucial in real applications. Results show that an LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s), showing that with as little as 0.5s an accuracy of over 50% can be achieved.

  9. Brain Basis of Self: Self-Organization and Lessons from Dreaming

    Directory of Open Access Journals (Sweden)

    David eKahn

    2013-07-01

    Full Text Available Through dreaming, a different facet of the self is created as a result of a self-organizing process in the brain. Self-organization in biological systems often happens as an answer to an environmental change with which the existing system cannot cope; self-organization creates a system that can cope in the newly changed environment. In dreaming, self-organization serves the function of organizing disparate memories into a dream, since the dreamer herself is not able to control how individual memories become woven into a dream. The self-organized dream thereby provides a wide repertoire of experiences; this expanded repertoire results in an expansion of the self beyond that obtainable when awake. Since expression of the self is associated with activity in specific areas of the brain, the article also discusses the brain basis of the self by reviewing studies of brain-injured patients and by discussing brain imaging studies of normal brain functioning when focused, when daydreaming, and when asleep and dreaming.

  10. Novel delay-distribution-dependent stability analysis for continuous-time recurrent neural networks with stochastic delay

    International Nuclear Information System (INIS)

    Wang Shen-Quan; Feng Jian; Zhao Qing

    2012-01-01

    In this paper, the problem of delay-distribution-dependent stability is investigated for continuous-time recurrent neural networks (CRNNs) with stochastic delay. Different from the common assumptions on time delays, it is assumed that the probability distribution of the delay taking values in certain intervals is known a priori. By making full use of the information concerning the probability distribution of the delay and by using a tighter bounding technique (the reciprocally convex combination method), less conservative sufficient conditions for asymptotic mean-square stability are derived in terms of linear matrix inequalities (LMIs). Two numerical examples show that our results are better than the existing ones. (general)

  11. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.

    Directory of Open Access Journals (Sweden)

    H Francis Song

    2016-02-01

    Full Text Available The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, "trained" networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale's principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation.
Our results demonstrate the wide range of neural
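The Dale's-principle constraint highlighted above, in which each unit is purely excitatory or purely inhibitory, is commonly enforced by factoring the recurrent weight matrix into a nonnegative magnitude matrix times a diagonal sign matrix. A sketch under that assumption (not necessarily this framework's exact parameterization):

```python
import numpy as np

def dale_weights(n_exc, n_inh, seed=0):
    """Build a recurrent weight matrix obeying Dale's principle: every
    outgoing weight of an excitatory unit is >= 0, every outgoing weight
    of an inhibitory unit is <= 0. Parameterized as W = |W_raw| @ D,
    with columns indexing presynaptic units (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = n_exc + n_inh
    w_raw = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
    D = np.diag([1.0] * n_exc + [-1.0] * n_inh)  # fixed unit signs
    return np.abs(w_raw) @ D

W = dale_weights(n_exc=40, n_inh=10)  # standard ~80/20 E/I split
```

Because the signs live in the fixed diagonal matrix, gradient descent on `w_raw` can never flip a unit's type, which is what keeps the trained network biologically interpretable.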

  12. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework

    Science.gov (United States)

    Wang, Xiao-Jing

    2016-01-01

    The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, “trained” networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale’s principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural activity

  13. SHORT-TERM ELECTRICITY CONSUMPTION FORECASTING WITH DOUBLE SEASONAL ARIMA AND ELMAN-RECURRENT NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Suhartono Suhartono

    2009-07-01

    Full Text Available Neural networks (NN) are one of many methods used to predict hourly electricity consumption in many countries. The NN method used in many previous studies is the Feed-Forward Neural Network (FFNN) or Autoregressive Neural Network (AR-NN). The AR-NN model is not able to capture and explain the effect of a moving average (MA) order on a time series. This research was conducted with the purpose of reviewing the application of another type of NN, the Elman-Recurrent Neural Network (Elman-RNN), which can explain the MA order effect, and comparing its prediction accuracy with multiple seasonal ARIMA (Autoregressive Integrated Moving Average) models. As a case study, we used hourly electricity consumption data from Mengare, Gresik. The analysis showed that the best double seasonal ARIMA model for short-term forecasting of the case study data is ARIMA([1,2,3,4,6,7,9,10,14,21,33],1,8)(0,1,1)^24(1,1,0)^168. This model produces white-noise residuals, but they do not follow a normal distribution due to suspected outliers. Iterative outlier detection produced 14 innovation outliers. Four Elman-RNN input configurations were examined and tested for forecasting the data: the inputs corresponding to the ARIMA lags; the ARIMA lags plus 14 outlier dummies; the lags at multiples of 24 up to lag 480; and lag 1 plus the lags at multiples of 24 plus 1. All four networks use one hidden layer with a tangent sigmoid activation function and one output with a linear function. The comparison of forecast accuracy through out-sample MAPE values showed that the fourth network, Elman-RNN(22, 3, 1), is the best model for short-term forecasting of hourly electricity consumption in Mengare, Gresik.
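An Elman network of the kind the study settles on, with tanh ("tangent sigmoid") hidden units fed by the input and by their own previous state (the context units) and a linear output, has the following generic forward pass. The weights below are random placeholders matching the (22, 3, 1) shape, not the fitted model:

```python
import numpy as np

def elman_forward(x_seq, W_in, W_rec, W_out, b_h, b_o):
    """Forward pass of an Elman recurrent network: a tanh hidden layer
    driven by the current input and the previous hidden state (context
    units), followed by a linear output (illustrative sketch)."""
    h = np.zeros(W_rec.shape[0])          # context units start at zero
    outputs = []
    for x in x_seq:
        h = np.tanh(W_in @ x + W_rec @ h + b_h)  # context = previous h
        outputs.append(W_out @ h + b_o)          # linear output unit
    return np.array(outputs)

rng = np.random.default_rng(1)
W_in = rng.normal(size=(3, 22))   # 22 lag inputs -> 3 hidden units
W_rec = rng.normal(size=(3, 3))   # hidden -> hidden recurrence
W_out = rng.normal(size=(1, 3))   # 3 hidden units -> 1 linear output
b_h, b_o = np.zeros(3), np.zeros(1)
y = elman_forward(rng.normal(size=(5, 22)), W_in, W_rec, W_out, b_h, b_o)
```

The recurrence through `h` is what lets the network represent MA-order effects that a purely feed-forward AR-NN cannot.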

  14. Sociocultural patterning of neural activity during self-reflection.

    Science.gov (United States)

    Ma, Yina; Bang, Dan; Wang, Chenbo; Allen, Micah; Frith, Chris; Roepstorff, Andreas; Han, Shihui

    2014-01-01

    Western cultures encourage self-construals independent of social contexts, whereas East Asian cultures foster interdependent self-construals that rely on how others perceive the self. How are culturally specific self-construals mediated by the human brain? Using functional magnetic resonance imaging, we monitored neural responses from adults in East Asian (Chinese) and Western (Danish) cultural contexts during judgments of social, mental and physical attributes of themselves and public figures to assess cultural influences on self-referential processing of personal attributes in different dimensions. We found that judgments of self vs a public figure elicited greater activation in the medial prefrontal cortex (mPFC) in Danish than in Chinese participants regardless of attribute dimensions for judgments. However, self-judgments of social attributes induced greater activity in the temporoparietal junction (TPJ) in Chinese than in Danish participants. Moreover, the group difference in TPJ activity was mediated by a measure of a cultural value (i.e. interdependence of self-construal). Our findings suggest that individuals in different sociocultural contexts may learn and/or adopt distinct strategies for self-reflection by changing the weight of the mPFC and TPJ in the social brain network.

  15. Self-Tuning Vibration Control of a Rotational Flexible Timoshenko Arm Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Minoru Sasaki

    2012-01-01

    Full Text Available A self-tuning vibration control of a rotational flexible arm using neural networks is presented. The control scheme consists of gain-tuning neural networks and a variable-gain feedback controller. The neural networks are trained so as to make the root moment zero; in the process, they learn the optimal gain of the feedback controller. The feedback controller is designed based on Lyapunov's direct method. The feedback control of the vibration of the flexible system is derived by considering the time rate of change of the total energy of the system. This approach has the advantage over conventional methods that it allows one to deal directly with the system's partial differential equations without resorting to approximations. Numerical and experimental results for the vibration control of a rotational flexible arm are discussed. They verify that the proposed control system is effective at controlling flexible dynamical systems.

  16. Tracking Control Based on Recurrent Neural Networks for Nonlinear Systems with Multiple Inputs and Unknown Deadzone

    Directory of Open Access Journals (Sweden)

    J. Humberto Pérez-Cruz

    2012-01-01

    Full Text Available This paper deals with the problem of trajectory tracking for a broad class of uncertain nonlinear systems with multiple inputs each one subject to an unknown symmetric deadzone. On the basis of a model of the deadzone as a combination of a linear term and a disturbance-like term, a continuous-time recurrent neural network is directly employed in order to identify the uncertain dynamics. By using a Lyapunov analysis, the exponential convergence of the identification error to a bounded zone is demonstrated. Subsequently, by a proper control law, the state of the neural network is compelled to follow a bounded reference trajectory. This control law is designed in such a way that the singularity problem is conveniently avoided and the exponential convergence to a bounded zone of the difference between the state of the neural identifier and the reference trajectory can be proven. Thus, the exponential convergence of the tracking error to a bounded zone and the boundedness of all closed-loop signals can be guaranteed. One of the main advantages of the proposed strategy is that the controller can work satisfactorily without any specific knowledge of an upper bound for the unmodeled dynamics and/or the disturbance term.
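
    The modelling step described above, a symmetric deadzone rewritten as a linear term plus a bounded disturbance-like term, can be checked numerically. The slope and break-point values below are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical symmetric deadzone with slope m and break-point b, split,
# as in the paper's modelling step, into a linear term m*u plus a
# disturbance-like term d(u) whose magnitude never exceeds m*b.
def deadzone(u, m=1.0, b=0.5):
    return np.where(np.abs(u) > b, m * (u - np.sign(u) * b), 0.0)

def disturbance(u, m=1.0, b=0.5):
    # d(u) = D(u) - m*u; |d(u)| <= m*b for all u
    return deadzone(u, m, b) - m * u

u = np.linspace(-2.0, 2.0, 401)
d = disturbance(u)   # peaks at m*b = 0.5 in magnitude
```

    The boundedness of d(u) is what lets the identifier treat the deadzone's nonlinearity as a disturbance with a known bound.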

  17. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    International Nuclear Information System (INIS)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-01-01

    In this paper, the generation of multi-clustered structure in a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure is much shorter for the burst-based self-organized neural network (BSON) than for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from bursting dynamics has high efficiency in information processing.
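
    The two small-world measures used in the comparison above can be written in a few lines of numpy. The ring lattice below is an illustrative test graph (the classic high-clustering reference), not one of the paper's self-organized networks.

```python
import numpy as np

# Numpy-only versions of the two small-world measures (mean clustering
# coefficient and average shortest path length), checked on a ring lattice
# where each node links to its 2 neighbours on each side.
def clustering_coefficient(A):
    deg = A.sum(axis=1)
    triangles = np.diag(A @ A @ A) / 2.0          # triangles through each node
    possible = deg * (deg - 1) / 2.0
    with np.errstate(invalid="ignore", divide="ignore"):
        c = np.where(possible > 0, triangles / possible, 0.0)
    return float(c.mean())

def average_shortest_path(A):
    n = len(A)
    dist = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                            # Floyd-Warshall
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    off = dist[np.isfinite(dist) & (dist > 0)]
    return float(off.mean())

n = 20
A = np.zeros((n, n), dtype=float)
for i in range(n):
    for d in (1, 2):                              # each node links to 4 neighbours
        A[i, (i + d) % n] = A[(i + d) % n, i] = 1.0
```

    For this lattice the mean clustering coefficient is exactly 0.5 and the average path length is 55/19; a degree-matched random graph would have far lower clustering, which is the contrast behind small-world comparisons such as BSON vs SSON.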

  18. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin, E-mail: xmli@cqu.edu.cn [Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044 (China); College of Automation, Chongqing University, Chongqing 400044 (China)

    2015-11-15

    In this paper, the generation of multi-clustered structure in a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure is much shorter for the burst-based self-organized neural network (BSON) than for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from bursting dynamics has high efficiency in information processing.

  19. Cognitive reactivity, self-depressed associations, and the recurrence of depression.

    Science.gov (United States)

    Elgersma, Hermien J; de Jong, Peter J; van Rijsbergen, Gerard D; Kok, Gemma D; Burger, Huibert; van der Does, Willem; Penninx, Brenda W J H; Bockting, Claudi L H

    2015-09-01

    Mixed evidence exists regarding the role of cognitive reactivity (CR; cognitive responsivity to a negative mood) as a risk factor for recurrences of depression. One explanation for the mixed evidence may lie in the number of previous depressive episodes. Heightened CR may be especially relevant as a risk factor for the development of multiple depressive episodes and less so for a single depressive episode. In addition, it is theoretically plausible but not yet tested that the relationship between CR and number of episodes is moderated by the strength of automatic depression-related self-associations. To investigate (i) the strength of CR in remitted depressed individuals with a history of a single vs. multiple episodes, and (ii) the potentially moderating role of automatic negative self-associations in the relationship between the number of episodes and CR. Cross-sectional analysis of data obtained in a cohort study (Study 1) and during baseline assessments in two clinical trials (Study 2). Study 1 used data from the Netherlands Study of Depression and Anxiety (NESDA) and compared never-depressed participants (n=901) with remitted participants with either a single (n=336) or at least 2 previous episodes (n=273). Study 2 included only remitted participants with at least two previous episodes (n=273). The Leiden Index of Depression Sensitivity Revised (LEIDS-R) was used to index CR and an Implicit Association Test (IAT) to measure implicit self-associations. In Study 1, remitted depressed participants with multiple episodes had significantly higher CR than those with a single or no previous episode. The remitted individuals with multiple episodes of Study 2 had even higher CR scores than those of Study 1. Within the group of individuals with multiple episodes, CR was not heightened as a function of the number of episodes, even if individual differences in automatic negative self-associations were taken into account. 
The study employed a cross-sectional design.

  20. Neural Mechanism for Mirrored Self-face Recognition.

    Science.gov (United States)

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-09-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a "virtual mirror" system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants. © The Author 2014. Published by Oxford University Press.

  1. The impact of cultural differences in self-representation on the neural substrates of posttraumatic stress disorder

    Directory of Open Access Journals (Sweden)

    Belinda J. Liddell

    2016-06-01

    Full Text Available A significant body of literature documents the neural mechanisms involved in the development and maintenance of posttraumatic stress disorder (PTSD). However, there is very little empirical work considering the influence of culture on these underlying mechanisms. Accumulating cultural neuroscience research clearly indicates that cultural differences in self-representation modulate many of the same neural processes proposed to be aberrant in PTSD. The objective of this review paper is to consider how culture may impact on the neural mechanisms underlying PTSD. We first outline five key affective and cognitive functions and their underlying neural correlates that have been identified as being disrupted in PTSD: (1) fear dysregulation; (2) attentional biases to threat; (3) emotion and autobiographical memory; (4) self-referential processing; and (5) attachment and interpersonal processing. Second, we consider prominent cultural theories and review the empirical research that has demonstrated the influence of cultural variations in self-representation on the neural substrates of these same five affective and cognitive functions. Finally, we propose a conceptual model that suggests that these five processes have major relevance to considering how culture may influence the neural processes underpinning PTSD.

  2. The impact of cultural differences in self-representation on the neural substrates of posttraumatic stress disorder.

    Science.gov (United States)

    Liddell, Belinda J; Jobson, Laura

    2016-01-01

    A significant body of literature documents the neural mechanisms involved in the development and maintenance of posttraumatic stress disorder (PTSD). However, there is very little empirical work considering the influence of culture on these underlying mechanisms. Accumulating cultural neuroscience research clearly indicates that cultural differences in self-representation modulate many of the same neural processes proposed to be aberrant in PTSD. The objective of this review paper is to consider how culture may impact on the neural mechanisms underlying PTSD. We first outline five key affective and cognitive functions and their underlying neural correlates that have been identified as being disrupted in PTSD: (1) fear dysregulation; (2) attentional biases to threat; (3) emotion and autobiographical memory; (4) self-referential processing; and (5) attachment and interpersonal processing. Second, we consider prominent cultural theories and review the empirical research that has demonstrated the influence of cultural variations in self-representation on the neural substrates of these same five affective and cognitive functions. Finally, we propose a conceptual model that suggests that these five processes have major relevance to considering how culture may influence the neural processes underpinning PTSD.

  3. Self-generation of controller of an underwater robot with neural network

    International Nuclear Information System (INIS)

    Suto, T.; Ura, T.

    1994-01-01

    A self-organizing controller system is constructed based on artificial neural networks and applied to constant-altitude swimming of the autonomous underwater robot PTEROA 150. The system consists of a controller and a forward model which calculates the values for evaluation as a result of control. Some methods are introduced for quick and appropriate adjustment of the controller network. Modification of the controller network is executed based on the error back-propagation method utilizing the forward model network. The forward model is divided into three sub-networks which represent the dynamics of the vehicle, estimation of the position relative to the seabed, and calculation of the altitude. The proposed adaptive system is demonstrated in computer simulations where the objective of the vehicle is to keep a constant altitude above a seabed composed of triangular ridges.

  4. Atmospheric Convective Organization: Self-Organized Criticality or Homeostasis?

    Science.gov (United States)

    Yano, Jun-Ichi

    2015-04-01

    Atmospheric convection tends to organize on a hierarchy of scales ranging from the mesoscale to planetary scales, the latter manifested most clearly by the Madden-Julian oscillation. The present talk examines two major possible mechanisms of self-organization identified in the wider literature from a phenomenological thermodynamic point of view by analysing a planetary-scale cloud-resolving model simulation. The first mechanism is self-organized criticality. A saturation tendency of the precipitation rate with increasing column-integrated water, reminiscent of critical phenomena, indicates self-organized criticality. The second is a self-regulation mechanism that is known as homeostasis in biology. A thermodynamic argument suggests that such self-regulation maintains the column-integrated water below a threshold by increasing the precipitation rate. Previous analyses of both observational data and cloud-resolving model (CRM) experiments give mixed results. A satellite data analysis suggests self-organized criticality. Some observational data as well as CRM experiments support homeostasis. Other analyses point to a combination of these two interpretations. In this study, a CRM experiment over a planetary-scale domain with a constant sea-surface temperature is analyzed. This analysis shows that the relation between the column-integrated total water and precipitation suggests self-organized criticality, whereas the one between the column-integrated water vapor and precipitation suggests homeostasis. The concurrent presence of these two mechanisms is further elaborated by detailed statistical and budget analyses. These statistics are scale invariant, reflecting a spatial scaling of precipitation processes. These self-organization mechanisms are most likely best understood theoretically through the energy cycle of the convective systems, consisting of the kinetic energy and the cloud-work function.

  5. Neural differences in self-perception during illness and after weight-recovery in anorexia nervosa.

    Science.gov (United States)

    McAdams, Carrie J; Jeon-Slaughter, Haekyung; Evans, Siobahn; Lohrenz, Terry; Montague, P Read; Krawczyk, Daniel C

    2016-11-01

    Anorexia nervosa (AN) is a severe mental illness characterized by problems with self-perception. Whole-brain neural activations in healthy women, women with AN and women in long-term weight recovery following AN were compared using two functional magnetic resonance imaging tasks probing different aspects of self-perception. The Social Identity-V2 task involved consideration about oneself and others using socially descriptive adjectives. Both the ill and weight-recovered women with AN engaged medial prefrontal cortex less than healthy women for self-relevant cognitions, a potential biological trait difference. Weight-recovered women also activated the inferior frontal gyri and dorsal anterior cingulate more for direct self-evaluations than for reflected self-evaluations, unlike both other groups, suggesting that recovery may include compensatory neural changes related to social perspectives. The Faces task compared viewing oneself to a stranger. Participants with AN showed elevated activity in the bilateral fusiform gyri for self-images, unlike the weight-recovered and healthy women, suggesting cognitive distortions about physical appearance are a state rather than trait problem in this disease. Because both ill and recovered women showed neural differences related to social self-perception, but only recovered women differed when considering social perspectives, these neurocognitive targets may be particularly important for treatment. © The Author (2016). Published by Oxford University Press.

  6. The morphological classification of normal and abnormal red blood cell using Self Organizing Map

    Science.gov (United States)

    Rahmat, R. F.; Wulandari, F. S.; Faza, S.; Muchtar, M. A.; Siregar, I.

    2018-02-01

    Blood is an essential component of living creatures in the vascular space. Possible diseases can be identified through a blood test, for instance from the form of the red blood cells. The normal and abnormal morphology of a patient's red blood cells is very helpful to doctors in detecting a disease. Advances in digital image processing technology make it possible to identify normal and abnormal blood cells automatically. This research used the self-organizing map method to classify the normal and abnormal forms of red blood cells in digital images. The self-organizing map neural network classified the normal and abnormal forms of red blood cells in the input images with 93.78% testing accuracy.
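
    A minimal sketch of the self-organizing map step is below. The two feature dimensions (think circularity and relative size of a segmented cell) and the two clusters standing in for normal vs abnormal cells are hypothetical; only the SOM update rule itself is standard.

```python
import numpy as np

# Minimal 1-D self-organizing map in the spirit of the paper's classifier.
# Feature vectors are pulled toward a small chain of prototype units; the
# trained units can then be labelled normal/abnormal from training data.
rng = np.random.default_rng(0)

def train_som(data, n_units=6, epochs=50, lr0=0.5, sigma0=2.0):
    w = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)               # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / epochs), 0.5)
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best matching unit
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)          # pull BMU neighbourhood toward x
    return w

normal = rng.normal([0.9, 1.0], 0.05, size=(50, 2))     # hypothetical normal cells
abnormal = rng.normal([0.4, 1.6], 0.05, size=(50, 2))   # hypothetical abnormal cells
w = train_som(np.vstack([normal, abnormal]))
```

    After training, prototypes settle on the two clusters, so labelling each unit by its nearest labelled training points yields a simple normal/abnormal classifier for new cells.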

  7. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    Science.gov (United States)

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales, for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
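
    The Jacobian point of view can be sketched on a learning-free random network. For the map x_{t+1} = tanh(g W x_t), the Jacobian at step t is diag(1 - x_{t+1}^2) g W, and the largest Lyapunov exponent follows from the renormalized growth of a tangent vector. The gain values and network size below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Estimate the largest Lyapunov exponent of x_{t+1} = tanh(g * W x_t)
# by iterating a tangent vector through the step-by-step Jacobians and
# accumulating log growth after a burn-in.
rng = np.random.default_rng(1)

def largest_lyapunov(g, n=100, steps=2000, burn=200):
    W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    x = rng.normal(0.0, 1.0, n)
    v = rng.normal(0.0, 1.0, n)
    acc = 0.0
    for t in range(steps):
        x = np.tanh(g * W @ x)
        J = (1.0 - x ** 2)[:, None] * (g * W)   # Jacobian of the update
        v = J @ v
        norm = np.linalg.norm(v)
        v /= norm                               # renormalize the tangent vector
        if t >= burn:
            acc += np.log(norm)
    return acc / (steps - burn)

lam_sub = largest_lyapunov(0.5)   # below the classical transition: contracting
lam_sup = largest_lyapunov(2.0)   # well above it: chaotic
```

    The sign change of the exponent across the gain transition is the quantity the paper tracks as Hebbian learning drives the network from chaos toward a steady state; here the gain g is varied by hand instead of by learning.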

  8. Self-construal differences in neural responses to negative social cues.

    Science.gov (United States)

    Liddell, Belinda J; Felmingham, Kim L; Das, Pritha; Whitford, Thomas J; Malhi, Gin S; Battaglini, Eva; Bryant, Richard A

    2017-10-01

    Cultures differ substantially in representations of the self. Whereas individualistic cultural groups emphasize an independent self, reflected in processing biases towards centralized salient objects, collectivistic cultures are oriented towards an interdependent self, attending to contextual associations between visual cues. It is unknown how these perceptual biases may affect brain activity in response to negative social cues. Moreover, while some studies have shown that individual differences in self-construal moderate cultural group comparisons, few have examined self-construal differences separately from culture. To investigate these issues, a final sample of healthy participants high in trait levels of collectivistic self-construal (n=16) or individualistic self-construal (n=19), regardless of cultural background, completed a negative social cue evaluation task designed to engage face/object vs context-specific neural processes whilst undergoing fMRI scanning. Between-group analyses revealed that the collectivistic group exclusively engaged the parahippocampal gyrus (parahippocampal place area), a region critical to contextual integration, during negative face processing, suggesting compensatory activation when contextual information was missing. The collectivist group also displayed enhanced negative context-dependent brain activity involving the left superior occipital gyrus/cuneus and right anterior insula. By contrast, the individualistic group did not engage object or localized face processing regions as predicted, but rather demonstrated heightened appraisal and self-referential activations in medial prefrontal and temporoparietal regions to negative contexts, again suggesting compensatory processes when focal cues were absent. Individualists also appeared more sensitive to negative faces in the scenes, activating the right middle cingulate gyrus and dorsal prefrontal and parietal regions.

  9. Psychopathic traits linked to alterations in neural activity during personality judgments of self and others.

    Science.gov (United States)

    Deming, Philip; Philippi, Carissa L; Wolf, Richard C; Dargis, Monika; Kiehl, Kent A; Koenigs, Michael

    2018-01-01

    Psychopathic individuals are notorious for their grandiose sense of self-worth and disregard for the welfare of others. One potential psychological mechanism underlying these traits is the relative consideration of "self" versus "others". Here we used task-based functional magnetic resonance imaging (fMRI) to identify neural responses during personality trait judgments about oneself and a familiar other in a sample of adult male incarcerated offenders ( n  = 57). Neural activity was regressed on two clusters of psychopathic traits: Factor 1 (e.g., egocentricity and lack of empathy) and Factor 2 (e.g., impulsivity and irresponsibility). Contrary to our hypotheses, Factor 1 scores were not significantly related to neural activity during self- or other-judgments. However, Factor 2 traits were associated with diminished activation to self-judgments, in relation to other-judgments, in bilateral posterior cingulate cortex and right temporoparietal junction. These findings highlight cortical regions associated with a dimension of social-affective cognition that may underlie psychopathic individuals' impulsive traits.

  10. Fast convergence of spike sequences to periodic patterns in recurrent networks

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.

    2002-01-01

    Dynamical attractors are thought to underlie many biological functions of recurrent neural networks. Here we show that stable periodic spike sequences with precise timings are the attractors of the spiking dynamics of recurrent neural networks with global inhibition. Almost all spike sequences converge within a finite number of transient spikes to these attractors. The convergence is fast, especially when the global inhibition is strong. These results support the possibility that precise spatiotemporal sequences of spikes are useful for information encoding and processing in biological neural networks.
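
    A toy version of the phenomenon can be simulated. The equations below are illustrative assumptions, not Jin's model: at each step the unit with the highest potential spikes (the winner-take-all effect of strong global inhibition), excites the others through W, and a uniform inhibitory term is subtracted from everyone. A strong ring planted in W makes the attracting periodic spike sequence easy to see.

```python
import numpy as np

# Winner-take-all caricature of a recurrent network with global inhibition:
# the spike sequence settles onto a periodic attractor after a short transient.
rng = np.random.default_rng(2)

def spike_sequence(n=8, steps=400):
    W = 0.2 * rng.uniform(size=(n, n))       # weak random excitation
    for i in range(n):
        W[(i + 1) % n, i] = 2.0              # strong ring connection i -> i+1
    np.fill_diagonal(W, 0.0)
    p = rng.uniform(0.0, 1.0, n)             # membrane potentials
    seq = []
    for _ in range(steps):
        w = int(np.argmax(p))                # winner under global inhibition
        seq.append(w)
        p = 0.8 * p + W[:, w] - 0.1          # leak, excitation, inhibition
        p[w] = 0.0                           # reset the spiking unit
    return seq

def final_period(seq, max_period=50):
    tail = seq[-100:]
    for T in range(1, max_period + 1):
        if all(tail[i] == tail[i - T] for i in range(T, len(tail))):
            return T
    return None

seq = spike_sequence()   # after a short transient the sequence cycles the ring
```

    With the planted ring the attractor is the period-8 sequence visiting every unit in order; the random part of W only shapes the transient, mirroring the fast convergence the abstract describes.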

  11. From self-organization to self-assembly: a new materialism?

    Science.gov (United States)

    Vincent, Bernadette Bensaude

    2016-09-01

    While self-organization has been an integral part of academic discussions about the distinctive features of living organisms, at least since Immanuel Kant's Critique of Judgement, the term 'self-assembly' has only been used for a few decades as it became a hot research topic with the emergence of nanotechnology. Could it be considered as an attempt at reducing vital organization to a sort of assembly line of molecules? Considering the context of research on self-assembly I argue that the shift of attention from self-organization to self-assembly does not really challenge the boundary between chemistry and biology. Self-assembly was first and foremost investigated in an engineering context as a strategy for manufacturing without human intervention and did not raise new perspectives on the emergence of vital organization itself. However self-assembly implies metaphysical assumptions that this paper tries to disentangle. It first describes the emergence of self-assembly as a research field in the context of materials science and nanotechnology. The second section outlines the metaphysical implications and will emphasize a sharp contrast between the ontology underlying two practices of self-assembly developed under the umbrella of synthetic biology. And unexpectedly, we shall see that chemists are less on the reductionist side than most synthetic biologists. Finally, the third section ventures some reflections on the kind of design involved in self-assembly practices.

  12. Emergence of unstable itinerant orbits in a recurrent neural network model

    International Nuclear Information System (INIS)

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2005-01-01

    A recurrent neural network model with time delay is investigated by numerical methods. The model functions as conventional associative memory and also enables us to embed a new kind of memory attractor that cannot be realized in models without time delay, for example chain-ring attractors. This is attributed to the fact that the time delay extends the available state-space dimension. The difference between the basin structures of chain-ring attractors and of isolated cycle attractors is investigated with respect to two attractor pattern sets: random memory patterns and designed memory patterns with intended structures. Compared to isolated attractors with random memory patterns, the basins of chain-ring attractors are reduced considerably. Computer experiments confirm that the basin volume of each embedded chain-ring attractor shrinks, and the emergence of unstable itinerant orbits in the state space outside the memory attractor basins is discovered. The instability of such itinerant orbits is investigated: results show that a 1-bit difference in initial conditions does not grow beyond 10% of the total dimension within 100 updating steps.
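
    The perturbation measure used here (normalized Hamming distance between trajectories after a 1-bit flip) is easy to reproduce on a delay-free baseline. The Hebbian associative memory below is an illustrative stand-in; the chain-ring/delay machinery of the paper is not reproduced.

```python
import numpy as np

# Flip one bit of the initial state of a Hebbian associative memory and
# track the Hamming distance between perturbed and unperturbed trajectories
# as a fraction of the total dimension.
rng = np.random.default_rng(3)

def divergence_after_flip(n=100, n_patterns=3, steps=100):
    pats = rng.choice([-1, 1], size=(n_patterns, n))
    W = (pats.T @ pats) / n                 # Hebbian outer-product weights
    np.fill_diagonal(W, 0.0)
    x = pats[0].copy()
    y = x.copy()
    y[0] = -y[0]                            # 1-bit perturbation
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
        y = np.where(W @ y >= 0, 1, -1)
    return float(np.mean(x != y))           # fraction of differing bits

frac = divergence_after_flip()  # here the flip is absorbed by the attractor
```

    At this low memory load the flip is corrected and the divergence collapses to zero; the interesting regime in the paper is the state space outside the basins, where perturbations instead grow, but stay below 10% of the dimension.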

  13. Equivalence of Equilibrium Propagation and Recurrent Backpropagation

    OpenAIRE

    Scellier, Benjamin; Bengio, Yoshua

    2017-01-01

    Recurrent Backpropagation and Equilibrium Propagation are algorithms for fixed point recurrent neural networks which differ in their second phase. In the first phase, both algorithms converge to a fixed point which corresponds to the configuration where the prediction is made. In the second phase, Recurrent Backpropagation computes error derivatives whereas Equilibrium Propagation relaxes to another nearby fixed point. In this work we establish a close connection between these two algorithms....
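
    The shared first phase can be sketched directly: relax the state to a fixed point h* = tanh(W h* + U x). Scaling W below unit spectral norm makes the map a contraction, so the fixed point is unique and the relaxation converges geometrically. Sizes, gains, and the input x below are illustrative choices.

```python
import numpy as np

# First-phase relaxation common to Recurrent Backpropagation and
# Equilibrium Propagation: iterate the update map to its fixed point.
rng = np.random.default_rng(4)
n, m = 30, 5
W = rng.normal(size=(n, n))
W *= 0.8 / np.linalg.norm(W, 2)        # spectral norm 0.8 => contraction
U = 0.1 * rng.normal(size=(n, m))
x = rng.normal(size=m)

h = np.zeros(n)
for _ in range(300):
    h_new = np.tanh(W @ h + U @ x)
    done = np.linalg.norm(h_new - h) < 1e-12
    h = h_new
    if done:
        break
residual = np.linalg.norm(np.tanh(W @ h + U @ x) - h)  # ~0 at the fixed point
```

    The two algorithms differ only in what happens next: Recurrent Backpropagation propagates error derivatives through this fixed point, while Equilibrium Propagation relaxes again under a small output nudge and reads gradients from the difference between the two equilibria.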

  14. Dynamical system with plastic self-organized velocity field as an alternative conceptual model of a cognitive system.

    Science.gov (United States)

    Janson, Natalia B; Marsden, Christopher J

    2017-12-05

    It is well known that architecturally the brain is a neural network, i.e. a collection of many relatively simple units coupled flexibly. However, it has been unclear how the possession of this architecture enables higher-level cognitive functions, which are unique to the brain. Here, we consider the brain from the viewpoint of dynamical systems theory and hypothesize that the unique feature of the brain, the self-organized plasticity of its architecture, could represent the means of enabling the self-organized plasticity of its velocity vector field. We propose that, conceptually, the principle of cognition could amount to the existence of appropriate rules governing self-organization of the velocity field of a dynamical system with an appropriate account of stimuli. To support this hypothesis, we propose a simple non-neuromorphic mathematical model with a plastic self-organized velocity field, which has no prototype in the physical world. This system is shown to be capable of basic cognition, which is illustrated numerically and with musical data. Our conceptual model could provide an additional insight into the working principles of the brain. Moreover, hardware implementations of plastic velocity fields self-organizing according to various rules could pave the way to creating artificial intelligence of a novel type.

  15. Application of Recurrent Neural Networks on El Nino Impact on California Climate

    Science.gov (United States)

    Le, J.; El-Askary, H. M.; Allai, M.

    2017-12-01

    Following our earlier work on the El Niño season of 2015-2016 over Southern California, we use recurrent neural networks (RNNs) to investigate the complex interactions between the long-term trend in dryness and a projected, short but intense, period of wetness due to the 2015-2016 El Niño. Although it was forecasted that this El Niño season would bring significant rainfall to the region, our long-term projections of the Palmer Z Index (PZI) showed a continuing drought trend. We achieved a statistically significant correlation of 0.610 between forecasted and observed PZI on the validation set for a lead time of 1 month. This gives strong confidence to the forecasted precipitation indicator, and these predictions were borne out in the resulting data. This paper details the expansion of our system to the climate of California as a whole, dealing with inter-relationships and spatial variations within the state.
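
    The headline number is a Pearson correlation between forecast and observed PZI on a validation set. The two series below are synthetic stand-ins (the real PZI data are not reproduced); only the evaluation step is shown.

```python
import numpy as np

# Pearson correlation between an observed series and a noisy forecast of it,
# the validation-skill measure quoted in the abstract.
def pearson(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(5)
observed = rng.normal(size=120)                           # e.g. monthly PZI values
forecast = 0.6 * observed + 0.8 * rng.normal(size=120)    # a noisy 1-month forecast
r = pearson(observed, forecast)   # correlation skill of the toy forecast
```

    With 120 validation points, a correlation near 0.6, like the paper's 0.610, is far outside what chance alignment of independent series would produce.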

  16. Discrete-time recurrent neural networks with time-varying delays: Exponential stability analysis

    International Nuclear Information System (INIS)

    Liu, Yurong; Wang, Zidong; Serrano, Alan; Liu, Xiaohui

    2007-01-01

    This Letter is concerned with the analysis problem of exponential stability for a class of discrete-time recurrent neural networks (DRNNs) with time delays. The delay is of the time-varying nature, and the activation functions are assumed to be neither differentiable nor strict monotonic. Furthermore, the description of the activation functions is more general than the recently commonly used Lipschitz conditions. Under such mild conditions, we first prove the existence of the equilibrium point. Then, by employing a Lyapunov-Krasovskii functional, a unified linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the DRNNs to be globally exponentially stable. It is shown that the delayed DRNNs are globally exponentially stable if a certain LMI is solvable, where the feasibility of such an LMI can be easily checked by using the numerically efficient Matlab LMI Toolbox. A simulation example is presented to show the usefulness of the derived LMI-based stability condition.
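
    The delayed LMI condition itself needs a semidefinite solver (the paper uses the Matlab LMI Toolbox), but for the delay-free linear(ized) case the analogous certificate reduces to the discrete Lyapunov equation A^T P A - P = -Q: if A is Schur stable, the solution P is positive definite and V(x) = x^T P x decays exponentially along trajectories. The random stable A below is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Exponential-stability certificate for a delay-free linear system via the
# discrete Lyapunov equation A^T P A - P = -I.
rng = np.random.default_rng(6)
n = 4
A = rng.normal(size=(n, n))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius 0.9 (Schur stable)
P = solve_discrete_lyapunov(A.T, np.eye(n))       # solves A^T P A - P = -I
eigs = np.linalg.eigvalsh((P + P.T) / 2)          # all positive => valid certificate
```

    The delay and the non-Lipschitz activations are what force the paper's more general Lyapunov-Krasovskii functional and the full LMI formulation in place of this simple equation.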

  17. Laparoscopic sacrocolpopexy for recurrent pelvic organ prolapse after failed transvaginal polypropylene mesh surgery.

    Science.gov (United States)

    Schmid, Corina; O'Rourke, Peter; Maher, Christopher

    2013-05-01

    A prospective case series to assess the safety and efficacy of laparoscopic sacrocolpopexy for the surgical management of recurrent pelvic organ prolapse (POP) after transvaginal polypropylene mesh prolapse surgery. Between January and December 2010, women with post-hysterectomy recurrent prolapse (≥ stage 2 POP-Q) after transvaginal polypropylene mesh prolapse surgery were included. Perioperative morbidity and short-term complications were recorded and evaluated. Surgical outcomes were objectively assessed utilising the Pelvic Organ Prolapse Quantification system (POP-Q), the validated, condition-specific Australian Pelvic Floor Questionnaire (APFQ) and the Patient Global Impression of Improvement (PGI-I) at 12 months. All 16 women in this study had undergone surgery with trocar-guided transvaginal polypropylene mesh kits. In 75% the recurrent prolapse affected the compartment of prior mesh surgery, with anterior (81%) and apical (75%) compartment prolapse predominating. At a mean follow-up of 12 months, all women had resolution of awareness of prolapse. Laparoscopic sacrocolpopexy for recurrent prolapse after failed transvaginal mesh surgery is feasible and safe, although further widespread evaluation is required.

  18. Innovative Mechanism of Rural Organization Based on Self-Organization

    OpenAIRE

    Wang, Xing jin; Gao, Bing

    2011-01-01

    The paper analyzes the basic situation for the formation of innovative rural organizations in the form of self-organization; reveals the features of self-organization, including four aspects: the openness of rural organization, the fact that rural organization innovation is far from equilibrium, the non-linear response mechanism of rural organization innovation, and the random rise and fall of rural organization innovation. The evolution mechanism of rural organization innovation is revealed accor...

  19. Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity

    Directory of Open Access Journals (Sweden)

    Benjamin eDummer

    2014-09-01

    A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies, as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells are in most cases far from Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study self-consistent statistics of the input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme, in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation, and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, J. Comp. Neurosci. 2000) and show that in the asynchronous regime, close to a state of balanced synaptic input from the network, our iterative schemes provide excellent approximations to the autocorrelation of spike trains in the recurrent network.
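
    A stripped-down version of the renewal iteration scheme can be sketched as follows: a single leaky integrate-and-fire neuron is simulated over a few generations, and in each generation its input is a superposition of renewal trains whose interspike intervals (ISIs) are resampled from the previous generation's output. All parameters (rates, weight, time step) are arbitrary toy choices and the input is not balanced, so this only illustrates the mechanics of the iteration, not the paper's convergence result.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps, tau, n_in, w = 1e-4, 50_000, 0.02, 50, 0.05

def lif(drive):
    """Leaky integrate-and-fire neuron driven by a binned input train;
    returns spike times in units of time steps."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += -v * dt / tau + w * drive[t]
        if v >= 1.0:
            v = 0.0
            spikes.append(t)
    return np.array(spikes)

def renewal_superposition(isis):
    """Surrogate input: superposition of n_in renewal trains whose ISIs
    are resampled from the previous generation's output ISIs."""
    drive = np.zeros(steps)
    for _ in range(n_in):
        t = rng.choice(isis) * rng.random()      # random initial phase
        while t < steps:
            drive[int(t)] += 1
            t += rng.choice(isis)
    return drive

# Generation 0: Poissonian approximation of the background (20 Hz per input).
drive = rng.poisson(n_in * 20.0 * dt, steps).astype(float)
for gen in range(3):
    spikes = lif(drive)
    rate = spikes.size / (steps * dt)
    print(f"generation {gen}: output rate {rate:.1f} Hz")
    isis = np.diff(spikes)
    if isis.size == 0:                           # guard for toy parameters
        break
    drive = renewal_superposition(isis)
```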

  20. Profile of the biodiesel B100 commercialized in the region of Londrina: application of artificial neural networks of the type self organizing maps

    Directory of Open Access Journals (Sweden)

    Vilson Machado de Campos Filho

    2015-10-01

    The 97 samples were grouped according to the year of analysis: letters from A to D were assigned to the years 2010 to 2013, giving A (33), B (25), C (24) and D (15). The compliance parameters analyzed are those established by the National Agency of Petroleum, Natural Gas and Biofuels (ANP) through resolution ANP 07/2008: density, flash point, peroxide value and acid value. The observed values were presented to an Artificial Neural Network (ANN) of the Self-Organizing Map (SOM) type in order to classify each sample by its physico-chemical properties and year of production. The ANN was trained on different days, with the samples randomly divided into two groups, a training and a test set. It was found that the SOM network differentiated the samples by year and by compliance parameters, identifying density and flash point as the most significant compliance parameters and thus the most useful for the distinction and classification of these samples.
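
    A minimal NumPy SOM can be sketched as below, trained on a synthetic two-cluster stand-in for the standardized physico-chemical parameters. The grid size, learning-rate schedule, and cluster values are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(6, 6), epochs=100, lr0=0.5, sigma0=2.0):
    """Minimal Self-Organizing Map with a Gaussian neighborhood."""
    n, d = data.shape
    weights = rng.normal(0.0, 0.1, (grid[0], grid[1], d))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)            # decaying learning rate
        sigma = sigma0 * (1.0 - e / epochs) + 0.5
        for x in data[rng.permutation(n)]:
            dist = ((weights - x) ** 2).sum(axis=2)
            by, bx = np.unravel_index(dist.argmin(), dist.shape)
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

def bmu(weights, x):
    """Grid coordinates of the best-matching unit for sample x."""
    d = ((weights - x) ** 2).sum(axis=2)
    return np.unravel_index(d.argmin(), d.shape)

# Synthetic stand-in for the 97 samples: two clusters in (density, flash point).
a = rng.normal([0.88, 120.0], [0.01, 5.0], (50, 2))   # hypothetical units
b = rng.normal([0.92, 150.0], [0.01, 5.0], (47, 2))
data = np.vstack([a, b])
data = (data - data.mean(0)) / data.std(0)            # standardize

som = train_som(data)
print(bmu(som, data[0]), bmu(som, data[-1]))
```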

  1. Innovative Mechanism of Rural Organization Based on Self-Organization

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The paper analyzes the basic situation of the formation of innovative rural organizations with the form of self-organization; reveals the features of self-organization, including the four aspects of openness of rural organization, innovation of rural organization far away from equilibrium, the non-linear response mechanism of rural organization innovation and the random rise and fall of rural organization innovation. The evolution mechanism of rural organization innovation is revealed according to the growth stage, the ideal stage, the decline and the fall stage. The paper probes into the basic restriction mechanism of the self-organization evaluation of rural organization from three aspects, including target recognition, path dependence and knowledge sharing. The basic measures on cultivating the innovative mechanism of rural organization are put forward. Firstly, constructing the dissipative structure of rural organization innovation; secondly, cultivating the dynamic study capability of rural organization innovation system; thirdly, selecting the step-by-step evolution strategy of rural organization innovation system.

  2. Protein-Protein Interaction Article Classification Using a Convolutional Recurrent Neural Network with Pre-trained Word Embeddings.

    Science.gov (United States)

    Matos, Sérgio; Antunes, Rui

    2017-12-13

    Curation of protein interactions from scientific articles is an important task, since interaction networks are essential for the understanding of biological processes associated with disease or pharmacological action, for example. However, the increase in the number of publications that potentially contain relevant information turns this into a very challenging and expensive task. In this work we used a convolutional recurrent neural network for identifying relevant articles for extracting information regarding protein interactions. Using the BioCreative III Article Classification Task dataset, we achieved an area under the precision-recall curve of 0.715 and a Matthews correlation coefficient of 0.600, which represents an improvement over previous works.
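
    The Matthews correlation coefficient reported above is computed from the binary confusion matrix; a small self-contained sketch with hypothetical counts (not the BioCreative results):

```python
def matthews_cc(tp, fp, fn, tn):
    """Matthews correlation coefficient from a binary confusion matrix."""
    num = tp * tn - fp * fn
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return num / den if den else 0.0

# Hypothetical confusion counts for a relevant/irrelevant article classifier.
print(round(matthews_cc(tp=80, fp=20, fn=20, tn=380), 3))  # → 0.75
```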

  3. Complex-valued neural networks advances and applications

    CERN Document Server

    Hirose, Akira

    2013-01-01

    Presents the latest advances in complex-valued neural networks by demonstrating the theory in a wide range of applications. Complex-valued neural networks constitute a rapidly developing neural network framework that utilizes complex arithmetic, exhibiting specific characteristics in its learning, self-organizing, and processing dynamics. They are highly suitable for processing complex amplitude, composed of amplitude and phase, which is one of the core concepts in physical systems dealing with electromagnetic, light, sonic/ultrasonic waves as well as quantum waves, namely, electron and

  4. Self-Organization and the Self-Assembling Process in Tissue Engineering

    Science.gov (United States)

    Eswaramoorthy, Rajalakshmanan; Hadidi, Pasha; Hu, Jerry C.

    2015-01-01

    In recent years, the tissue engineering paradigm has shifted to include a new and growing subfield of scaffoldless techniques which generate self-organizing and self-assembling tissues. This review aims to provide a cogent description of this relatively new research area, with special emphasis on applications toward clinical use and research models. Particular emphasis is placed on providing clear definitions of self-organization and the self-assembling process, as delineated from other scaffoldless techniques in tissue engineering and regenerative medicine. Significantly, during formation, self-organizing and self-assembling tissues display biological processes similar to those that occur in vivo. These help lead to the recapitulation of native tissue morphological structure and organization. Notably, functional properties of these tissues also approach native tissue values; some of these engineered tissues are already in clinical trials. This review aims to provide a cohesive summary of work in this field, and to highlight the potential of self-organization and the self-assembling process to provide cogent solutions to current intractable problems in tissue engineering. PMID:23701238

  5. Self-organized Learning Environments

    DEFF Research Database (Denmark)

    Dalsgaard, Christian; Mathiasen, Helle

    2007-01-01

    The purpose of the paper is to discuss the potentials of using a conference system in support of a project based university course. We use the concept of a self-organized learning environment to describe the shape of the course. In the paper we argue that educational technology, such as conference systems, has a potential to support students’ development of self-organized learning environments and facilitate self-governed activities in higher education. The paper is based on an empirical study of two project groups’ use of a conference system. The study showed that the students used the conference system actively. The two groups used the system in their own way to support their specific activities and ways of working. The paper concludes that self-organized learning environments can strengthen the development of students’ academic as well as social qualifications. Further, the paper identifies...

  6. Identification of Jets Containing $b$-Hadrons with Recurrent Neural Networks at the ATLAS Experiment

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    A novel $b$-jet identification algorithm is constructed with a Recurrent Neural Network (RNN) at the ATLAS experiment at the CERN Large Hadron Collider. The RNN based $b$-tagging algorithm processes charged particle tracks associated to jets without reliance on secondary vertex finding, and can augment existing secondary-vertex based taggers. In contrast to traditional impact-parameter-based $b$-tagging algorithms which assume that tracks associated to jets are independent from each other, the RNN based $b$-tagging algorithm can exploit the spatial and kinematic correlations between tracks which are initiated from the same $b$-hadrons. This new approach also accommodates an extended set of input variables. This note presents the expected performance of the RNN based $b$-tagging algorithm in simulated $t \\bar t$ events at $\\sqrt{s}=13$ TeV.

  7. H∞ state estimation for discrete-time memristive recurrent neural networks with stochastic time-delays

    Science.gov (United States)

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2016-07-01

    This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design the robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.

  8. Robust stability analysis of Takagi—Sugeno uncertain stochastic fuzzy recurrent neural networks with mixed time-varying delays

    International Nuclear Information System (INIS)

    Ali, M. Syed

    2011-01-01

    In this paper, the global stability of Takagi—Sugeno (TS) uncertain stochastic fuzzy recurrent neural networks with discrete and distributed time-varying delays (TSUSFRNNs) is considered. A novel LMI-based stability criterion is obtained by using Lyapunov functional theory to guarantee the asymptotic stability of TSUSFRNNs. The proposed stability conditions are demonstrated through numerical examples. Furthermore, the supplementary requirement that the time derivative of time-varying delays must be smaller than one is removed. Comparison results are demonstrated to show that the proposed method guarantees a wider stability region than the other methods available in the existing literature. (general)

  9. Indirect intelligent sliding mode control of a shape memory alloy actuated flexible beam using hysteretic recurrent neural networks

    International Nuclear Information System (INIS)

    Hannen, Jennifer C; Buckner, Gregory D; Crews, John H

    2012-01-01

    This paper introduces an indirect intelligent sliding mode controller (IISMC) for shape memory alloy (SMA) actuators, specifically a flexible beam deflected by a single offset SMA tendon. The controller manipulates applied voltage, which alters SMA tendon temperature to track reference bending angles. A hysteretic recurrent neural network (HRNN) captures the nonlinear, hysteretic relationship between SMA temperature and bending angle. The variable structure control strategy provides robustness to model uncertainties and parameter variations, while effectively compensating for system nonlinearities, achieving superior tracking compared to an optimized PI controller. (paper)

  10. PREFACE: Self-organized nanostructures

    Science.gov (United States)

    Rousset, Sylvie; Ortega, Enrique

    2006-04-01

    In order to fabricate ordered arrays of nanostructures, two different strategies might be considered. The `top-down' approach consists of pushing the limit of lithography techniques down to the nanometre scale. However, beyond 10 nm lithography techniques will inevitably face major intrinsic limitations. An alternative method for elaborating ultimate-size nanostructures is based on the reverse `bottom-up' approach, i.e. building up nanostructures (and eventually assemble them to form functional circuits) from individual atoms or molecules. Scanning probe microscopies, including scanning tunnelling microscopy (STM) invented in 1982, have made it possible to create (and visualize) individual structures atom by atom. However, such individual atomic manipulation is not suitable for industrial applications. Self-assembly or self-organization of nanostructures on solid surfaces is a bottom-up approach that allows one to fabricate and assemble nanostructure arrays in a one-step process. For applications, such as high density magnetic storage, self-assembly appears to be the simplest alternative to lithography for massive, parallel fabrication of nanostructure arrays with regular sizes and spacings. These are also necessary for investigating the physical properties of individual nanostructures by means of averaging techniques, i.e. all those using light or particle beams. The state-of-the-art and the current developments in the field of self-organization and physical properties of assembled nanostructures are reviewed in this issue of Journal of Physics: Condensed Matter. The papers have been selected from among the invited and oral presentations of the recent summer workshop held in Cargese (Corsica, France, 17-23 July 2005). All authors are world-renowned in the field. 
The workshop has been funded by the Marie Curie Actions: Marie Curie Conferences and Training Courses series named `NanosciencesTech' supported by the VI Framework Programme of the European Community, by

  11. Psychopathic traits linked to alterations in neural activity during personality judgments of self and others

    Directory of Open Access Journals (Sweden)

    Philip Deming

    Psychopathic individuals are notorious for their grandiose sense of self-worth and disregard for the welfare of others. One potential psychological mechanism underlying these traits is the relative consideration of “self” versus “others”. Here we used task-based functional magnetic resonance imaging (fMRI) to identify neural responses during personality trait judgments about oneself and a familiar other in a sample of adult male incarcerated offenders (n = 57). Neural activity was regressed on two clusters of psychopathic traits: Factor 1 (e.g., egocentricity and lack of empathy) and Factor 2 (e.g., impulsivity and irresponsibility). Contrary to our hypotheses, Factor 1 scores were not significantly related to neural activity during self- or other-judgments. However, Factor 2 traits were associated with diminished activation to self-judgments, in relation to other-judgments, in bilateral posterior cingulate cortex and right temporoparietal junction. These findings highlight cortical regions associated with a dimension of social-affective cognition that may underlie psychopathic individuals' impulsive traits. Keywords: Psychopathy, fMRI, Social cognition, Self-referential processing, Emotion, Psychopathology

  12. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.

    Directory of Open Access Journals (Sweden)

    Gabriel Koch Ocker

    2015-08-01

    The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
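
    The additive, Hebbian STDP referred to above can be sketched as a pair-based update: pre-before-post spike pairs potentiate the synapse and post-before-pre pairs depress it. Time constants and amplitudes below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def stdp_weight_change(pre_times, post_times, a_plus=0.01, a_minus=0.012,
                       tau=20.0):
    """Additive pair-based STDP: potentiate when pre precedes post,
    depress when post precedes pre (spike times in ms)."""
    dw = 0.0
    for tp in pre_times:
        for tq in post_times:
            dt = tq - tp
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau)
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau)
    return dw

# Causal pairing (pre leads post by 5 ms) strengthens the synapse...
causal = stdp_weight_change([0.0, 50.0], [5.0, 55.0])
# ...while the reversed timing weakens it.
acausal = stdp_weight_change([5.0, 55.0], [0.0, 50.0])
print(causal > 0 > acausal)  # → True
```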

  13. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.

    Science.gov (United States)

    Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent

    2015-08-01

    The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.

  14. Self-Organization in Embedded Real-Time Systems

    CERN Document Server

    Brinkschulte, Uwe; Rettberg, Achim

    2013-01-01

    This book describes the emerging field of self-organizing, multicore, distributed and real-time embedded systems. Self-organization of both hardware and software can be a key technique for handling the growing complexity of modern computing systems. Distributed systems running hundreds of tasks on dozens of processors, each equipped with multiple cores, require self-organization principles to ensure efficient and reliable operation. This book addresses various so-called Self-X features such as self-configuration, self-optimization, self-adaptation, self-healing and self-protection. Presents open components for embedded real-time adaptive and self-organizing applications; describes innovative techniques in scheduling, memory management, quality of service, and communications supporting organic real-time applications; covers multi-/many-core embedded systems supporting real-time adaptive systems and power-aware, adaptive hardware and software systems; includes case studies of open embedded real-time self-organizi...

  15. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral neural systems depends in part on the emergence of the correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement to build a fully functional circuit. However, a number of experimental results obtained recently show that neuronal electrical activity plays an important role in the establishment of initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks, and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios

  17. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    Directory of Open Access Journals (Sweden)

    Daniel Durstewitz

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover
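
    A PLRNN's latent dynamics can be sketched directly: the update below uses a ReLU nonlinearity, so each orthant of state space evolves under its own linear map. The parameters are small illustrative matrices, not estimates from the EM procedure described above.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_plrnn(A, W, h, steps=500, noise=0.01):
    """Piecewise-linear RNN latent dynamics:
    z[t+1] = A z[t] + W relu(z[t]) + h + noise,
    so each orthant of state space has its own linear map."""
    d = A.shape[0]
    z = np.zeros(d)
    traj = np.empty((steps, d))
    for t in range(steps):
        z = A @ z + W @ np.maximum(z, 0.0) + h + noise * rng.normal(size=d)
        traj[t] = z
    return traj

d = 3
A = np.diag([0.5, 0.45, 0.4])        # contractive linear (leak) part
W = 0.05 * rng.normal(size=(d, d))   # weak piecewise-linear coupling
h = np.array([0.1, -0.05, 0.02])     # constant drive
traj = simulate_plrnn(A, W, h)
print(traj.shape)  # → (500, 3)
```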

  18. Synchronization of chaotic systems and identification of nonlinear systems by using recurrent hierarchical type-2 fuzzy neural networks.

    Science.gov (United States)

    Mohammadzadeh, Ardashir; Ghaemi, Sehraneh

    2015-09-01

    This paper proposes a novel approach for training of proposed recurrent hierarchical interval type-2 fuzzy neural networks (RHT2FNN) based on the square-root cubature Kalman filters (SCKF). The SCKF algorithm is used to adjust the premise part of the type-2 FNN and the weights of defuzzification and the feedback weights. The recurrence property in the proposed network is the output feeding of each membership function to itself. The proposed RHT2FNN is employed in the sliding mode control scheme for the synchronization of chaotic systems. Unknown functions in the sliding mode control approach are estimated by RHT2FNN. Another application of the proposed RHT2FNN is the identification of dynamic nonlinear systems. The effectiveness of the proposed network and its learning algorithm is verified by several simulation examples. Furthermore, the universal approximation of RHT2FNNs is also shown. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Recurrent neural network based hybrid model for reconstructing gene regulatory network.

    Science.gov (United States)

    Raza, Khalid; Alam, Mansaf

    2016-10-01

    One of the exciting problems in systems biology research is to decipher how the genome controls the development of complex biological systems. Gene regulatory networks (GRNs) help in the identification of regulatory interactions between genes and offer fruitful information related to the functional role of individual genes in a cellular system. Discovering GRNs leads to a wide range of applications, including the identification of disease-related pathways providing novel tentative drug targets; it also helps to predict disease response and assists in diagnosing various diseases, including cancer. Reconstruction of GRNs from available biological data is still an open problem. This paper proposes a recurrent neural network (RNN) based model of GRNs, hybridized with a generalized extended Kalman filter for weight update in the backpropagation-through-time training algorithm. The RNN is a complex neural network that gives a better settlement between biological closeness and mathematical flexibility to model GRNs, and is also able to capture complex, non-linear and dynamic relationships among variables. Gene expression data are inherently noisy, and the Kalman filter performs well for estimation problems even with noisy data. Hence, we applied a non-linear version of the Kalman filter, known as the generalized extended Kalman filter, for weight update during RNN training. The developed model has been tested on four benchmark networks: the DNA SOS repair network, the IRMA network, and two synthetic networks from the DREAM Challenge. We performed a comparison of our results with other state-of-the-art techniques, which shows the superiority of our proposed model. Further, 5% Gaussian noise was induced in the dataset, and the results of the proposed model show a negligible effect of noise, demonstrating the noise tolerance capability of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
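
    RNN models of GRNs of this kind are typically continuous-time sigmoidal networks. A minimal Euler-integrated simulation of a hypothetical three-gene repression chain is sketched below; the weights, biases, and time constants are invented for illustration, and the Kalman-filter training is not included.

```python
import numpy as np

def simulate_grn(W, beta, tau, x0, dt=0.1, steps=300):
    """Continuous-time RNN model of a GRN (Euler-integrated):
    tau_i dx_i/dt = sigmoid(sum_j w_ij x_j + beta_i) - x_i."""
    x = np.array(x0, float)
    traj = [x.copy()]
    for _ in range(steps):
        act = 1.0 / (1.0 + np.exp(-(W @ x + beta)))
        x = x + dt * (act - x) / tau
        traj.append(x.copy())
    return np.array(traj)

# Hypothetical 3-gene chain: gene 0 represses gene 1, gene 1 represses gene 2.
W = np.array([[ 0.0,  0.0, 0.0],
              [-4.0,  0.0, 0.0],
              [ 0.0, -4.0, 0.0]])
beta = np.array([2.0, 2.0, 2.0])
tau = np.array([1.0, 1.0, 1.0])
traj = simulate_grn(W, beta, tau, x0=[0.9, 0.1, 0.5])
print(traj[-1].round(2))             # steady-state expression levels
```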

  20. Neural networks for perception human and machine perception

    CERN Document Server

    Wechsler, Harry

    1991-01-01

    Neural Networks for Perception, Volume 1: Human and Machine Perception focuses on models for understanding human perception in terms of distributed computation and examples of PDP models for machine perception. This book addresses both theoretical and practical issues related to the feasibility of both explaining human perception and implementing machine perception in terms of neural network models. The book is organized into two parts. The first part focuses on human perception. Topics on network model of object recognition in human vision, the self-organization of functional architecture in t

  1. A Model of Self-Organizing Head-Centered Visual Responses in Primate Parietal Areas

    Science.gov (United States)

    Mender, Bedeho M. W.; Stringer, Simon M.

    2013-01-01

    We present a hypothesis for how head-centered visual representations in primate parietal areas could self-organize through visually guided learning, and test this hypothesis using a neural network model. The model consists of a competitive output layer of neurons that receives afferent synaptic connections from a population of input neurons with eye-position gain-modulated retinal receptive fields. The synaptic connections in the model are trained with an associative trace learning rule, which has the effect of encouraging output neurons to learn to respond to subsets of input patterns that tend to occur close together in time. This network architecture and synaptic learning rule are hypothesized to promote the development of head-centered output neurons during periods of time when the head remains fixed while the eyes move. This hypothesis is demonstrated to be feasible, and each of the core model components described is tested and found to be individually necessary for successful self-organization. PMID:24349064
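
    The associative trace learning rule mentioned above can be sketched in a generic form: an exponentially decaying postsynaptic trace multiplied by the presynaptic input. The learning rate, trace constant and toy patterns below are chosen purely for illustration, not taken from the model:

```python
def trace_learning_step(w, x, y, y_trace, alpha=0.1, eta=0.8):
    """One update of an associative trace learning rule: the
    postsynaptic trace mixes current and past activity, so inputs
    occurring close together in time strengthen onto the same
    output neuron."""
    y_trace = (1.0 - eta) * y + eta * y_trace            # temporal trace
    w = [w_i + alpha * y_trace * x_i for w_i, x_i in zip(w, x)]
    return w, y_trace

w, y_trace = [0.0, 0.0], 0.0
# Two input patterns presented in close temporal succession while
# the output neuron is active:
for x in ([1.0, 0.0], [0.0, 1.0]):
    w, y_trace = trace_learning_step(w, x, y=1.0, y_trace=y_trace)
```

    Because the trace persists across the two presentations, both input patterns end up strengthened onto the same output unit, which is the mechanism hypothesized to bind eye positions belonging to one head-centered location.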

  2. Using LSTM recurrent neural networks for monitoring the LHC superconducting magnets

    Science.gov (United States)

    Wielgosz, Maciej; Skoczeń, Andrzej; Mertik, Matej

    2017-09-01

    The superconducting LHC magnets are coupled with an electronic monitoring system which records and analyzes voltage time series reflecting their performance. The currently used system is based on a range of preprogrammed triggers which launch protection procedures when misbehavior of the magnets is detected. All the procedures used in the protection equipment were designed and implemented according to known working scenarios of the system and are updated and monitored by human operators. This paper proposes a novel approach to the monitoring and fault protection of the Large Hadron Collider (LHC) superconducting magnets which employs state-of-the-art deep learning algorithms. Consequently, the authors of the paper decided to examine the performance of LSTM recurrent neural networks for modeling the voltage time series of the magnets. In order to address this challenging task, different network architectures and hyper-parameters were used to achieve the best possible performance of the solution. The regression results were measured in terms of RMSE for different numbers of future steps and history lengths taken into account for the prediction. The best result of RMSE = 0.00104 was obtained for a network of 128 LSTM cells in the internal layer and a 16-step history buffer.
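
    The data preparation behind such a setup is straightforward: the voltage series is sliced into fixed-length history windows paired with future targets, and RMSE scores predictions. The sketch below uses a synthetic sine signal as a stand-in for magnet voltages and a naive persistence baseline instead of an actual LSTM:

```python
import math

def make_windows(series, history, horizon):
    """Slice a time series into (history, future) training pairs,
    as in the paper's 16-step history buffer."""
    return [(series[t:t + history],
             series[t + history:t + history + horizon])
            for t in range(len(series) - history - horizon + 1)]

def rmse(pred, true):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(pred, true)) / len(true))

# Synthetic stand-in for a magnet voltage signal:
series = [math.sin(0.1 * t) for t in range(100)]
pairs = make_windows(series, history=16, horizon=1)

# Persistence baseline: predict the last observed value.
baseline = rmse([h[-1] for h, f in pairs], [f[0] for h, f in pairs])
```

    A trained sequence model would be evaluated with the same `rmse` over the same window pairs, making baselines and architectures directly comparable.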

  3. A recurrent neural network for classification of unevenly sampled variable stars

    Science.gov (United States)

    Naul, Brett; Bloom, Joshua S.; Pérez, Fernando; van der Walt, Stéfan

    2018-02-01

    Astronomical surveys of celestial sources produce streams of noisy time series measuring flux versus time (`light curves'). Unlike in many other physical domains, however, large (and source-specific) temporal gaps in data arise naturally due to intranight cadence choices as well as diurnal and seasonal constraints [1-5]. With nightly observations of millions of variable stars and transients from upcoming surveys [4,6], efficient and accurate discovery and classification techniques on noisy, irregularly sampled data must be employed with minimal human-in-the-loop involvement. Machine learning for inference tasks on such data traditionally requires the laborious hand-coding of domain-specific numerical summaries of raw data (`features') [7]. Here, we present a novel unsupervised autoencoding recurrent neural network [8] that makes explicit use of sampling times and known heteroskedastic noise properties. When trained on optical variable star catalogues, this network produces supervised classification models that rival other best-in-class approaches. We find that autoencoded features learned in one time-domain survey perform nearly as well when applied to another survey. These networks can continue to learn from new unlabelled observations and may be used in other unsupervised tasks, such as forecasting and anomaly detection.

  4. Self-reported empathy and neural activity during action imitation and observation in schizophrenia

    OpenAIRE

    Horan, William P.; Iacoboni, Marco; Cross, Katy A.; Korb, Alex; Lee, Junghee; Nori, Poorang; Quintana, Javier; Wynn, Jonathan K.; Green, Michael F.

    2014-01-01

    Introduction: Although social cognitive impairments are key determinants of functional outcome in schizophrenia, their neural bases are poorly understood. This study investigated neural activity during imitation and observation of finger movements and facial expressions in schizophrenia, and its correlation with self-reported empathy. Methods: 23 schizophrenia outpatients and 23 healthy controls were studied with functional magnetic resonance imaging (fMRI) while they imitated, executed, o...

  5. Different neural processes accompany self-recognition in photographs across the lifespan: an ERP study using dizygotic twins.

    Science.gov (United States)

    Butler, David L; Mattingley, Jason B; Cunnington, Ross; Suddendorf, Thomas

    2013-01-01

    Our appearance changes over time, yet we can recognize ourselves in photographs from across the lifespan. Researchers have extensively studied self-recognition in photographs and have proposed that specific neural correlates are involved, but few studies have examined self-recognition using images from different periods of life. Here we compared ERP responses to photographs of participants when they were 5-15, 16-25, and 26-45 years old. We found marked differences between the responses to photographs from these time periods in terms of the neural markers generally assumed to reflect (i) the configural processing of faces (i.e., the N170), (ii) the matching of the currently perceived face to a representation already stored in memory (i.e., the P250), and (iii) the retrieval of information about the person being recognized (i.e., the N400). There was no uniform neural signature of visual self-recognition. To test whether there was anything specific to self-recognition in these brain responses, we also asked participants to identify photographs of their dizygotic twins taken from the same time periods. Critically, this allowed us to minimize the confounding effects of exposure, for it is likely that participants have been similarly exposed to each other's faces over the lifespan. The same pattern of neural response emerged with only one exception: the neural marker reflecting the retrieval of mnemonic information (N400) differed across the lifespan for self but not for twin. These results, as well as our novel approach using twins and photographs from across the lifespan, have wide-ranging consequences for the study of self-recognition and the nature of our personal identity through time.

  6. Different neural processes accompany self-recognition in photographs across the lifespan: an ERP study using dizygotic twins.

    Directory of Open Access Journals (Sweden)

    David L Butler

    Full Text Available Our appearance changes over time, yet we can recognize ourselves in photographs from across the lifespan. Researchers have extensively studied self-recognition in photographs and have proposed that specific neural correlates are involved, but few studies have examined self-recognition using images from different periods of life. Here we compared ERP responses to photographs of participants when they were 5-15, 16-25, and 26-45 years old. We found marked differences between the responses to photographs from these time periods in terms of the neural markers generally assumed to reflect (i) the configural processing of faces (i.e., the N170), (ii) the matching of the currently perceived face to a representation already stored in memory (i.e., the P250), and (iii) the retrieval of information about the person being recognized (i.e., the N400). There was no uniform neural signature of visual self-recognition. To test whether there was anything specific to self-recognition in these brain responses, we also asked participants to identify photographs of their dizygotic twins taken from the same time periods. Critically, this allowed us to minimize the confounding effects of exposure, for it is likely that participants have been similarly exposed to each other's faces over the lifespan. The same pattern of neural response emerged with only one exception: the neural marker reflecting the retrieval of mnemonic information (N400) differed across the lifespan for self but not for twin. These results, as well as our novel approach using twins and photographs from across the lifespan, have wide-ranging consequences for the study of self-recognition and the nature of our personal identity through time.

  7. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    Science.gov (United States)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition and handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. We explore not only low-level combination (feature space combination) but also high-level combination (decoding combination) and mid-level combination (internal system representation combination). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.
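
    The winning "low-level" strategy amounts to concatenating per-frame feature vectors from the different extractors before the sequence model sees them. A minimal sketch, with invented feature streams standing in for the paper's handwriting descriptors:

```python
def combine_low_level(feature_streams):
    """Low-level (feature-space) combination: concatenate the
    per-frame feature vectors of several extractors before feeding
    a single sequence model."""
    return [sum(frames, []) for frames in zip(*feature_streams)]

pixels    = [[0.1, 0.2], [0.3, 0.4]]   # 2 frames x 2 dims
gradients = [[0.5], [0.6]]             # 2 frames x 1 dim
combined  = combine_low_level([pixels, gradients])
```

    High-level combination would instead run one BLSTM-CTC per stream and merge their decodings; mid-level combination merges the networks' internal representations.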

  8. Distributed representations of action sequences in anterior cingulate cortex: A recurrent neural network approach.

    Science.gov (United States)

    Shahnazian, Danesh; Holroyd, Clay B

    2018-02-01

    Anterior cingulate cortex (ACC) has been the subject of intense debate over the past 2 decades, but its specific computational function remains controversial. Here we present a simple computational model of ACC that incorporates distributed representations across a network of interconnected processing units. Based on the proposal that ACC is concerned with the execution of extended, goal-directed action sequences, we trained a recurrent neural network to predict each successive step of several sequences associated with multiple tasks. In keeping with neurophysiological observations from nonhuman animals, the network yields distributed patterns of activity across ACC neurons that track the progression of each sequence, and in keeping with human neuroimaging data, the network produces discrepancy signals when any step of the sequence deviates from the predicted step. These simulations illustrate a novel approach for investigating ACC function.
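
    The discrepancy signal described above is, at its simplest, a prediction error between the network's expected next step and the step actually observed. A toy sketch with invented one-hot step encodings:

```python
def discrepancy(predicted, actual):
    """Squared-error 'surprise' between the network's predicted
    next step and the step actually observed."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual))

# One-hot steps of a practiced sequence A -> B -> C (toy encoding):
predicted_next = [0.0, 1.0, 0.0]   # the model expects step B
observed_b     = [0.0, 1.0, 0.0]   # the sequence proceeds as predicted
observed_c     = [0.0, 0.0, 1.0]   # a deviation from the sequence
```

    When the observed step matches the prediction the signal is zero; a deviation from the practiced sequence produces a positive discrepancy, mirroring the neuroimaging finding.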

  9. Neural model of gene regulatory network: a survey on supportive meta-heuristics.

    Science.gov (United States)

    Biswas, Surama; Acharyya, Sriyankar

    2016-06-01

    Gene regulatory networks (GRNs) are produced as a result of regulatory interactions between different genes through their coded proteins in a cellular context. Having immense importance in disease detection and drug discovery, GRNs have been modelled through various mathematical and computational schemes and reported in survey articles. Neural and neuro-fuzzy models have been the focus of attraction in bioinformatics, and the predominant use of meta-heuristic algorithms in training neural models has proved its excellence. Considering these facts, this paper is organized to survey neural modelling schemes for GRNs and the efficacy of meta-heuristic algorithms for parameter learning (i.e., connection weights) within the model. This survey renders two different structure-related approaches to inferring GRNs: the global-structure approach and the substructure approach. It also describes two neural modelling schemes, namely artificial neural network/recurrent neural network based modelling and neuro-fuzzy modelling. The meta-heuristic algorithms applied so far to learn the structure and parameters of neurally modelled GRNs are reviewed here.

  10. Challenging emotional prejudice by changing self-concept: priming independent self-construal reduces racial in-group bias in neural responses to other's pain.

    Science.gov (United States)

    Wang, Chenbo; Wu, Bing; Liu, Yi; Wu, Xinhuai; Han, Shihui

    2015-09-01

    Humans show stronger empathy for in-group compared with out-group members' suffering and help in-group members more than out-group members. Moreover, the in-group bias in empathy and parochial altruism tend to be more salient in collectivistic than individualistic cultures. This work tested the hypothesis that modifying self-construals, which differentiate between collectivistic and individualistic cultural orientations, affects in-group bias in empathy for perceived own-race vs other-race pain. By scanning adults using functional magnetic resonance imaging, we found stronger neural activities in the mid-cingulate, left insula and supplementary motor area (SMA) in response to racial in-group compared with out-group members' pain after participants had been primed with interdependent self-construals. However, the racial in-group bias in neural responses to others' pain in the left SMA, mid-cingulate cortex and insula was significantly reduced by priming independent self-construals. Our findings suggest that shifting an individual's self-construal leads to changes of his/her racial in-group bias in neural responses to others' suffering. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  11. Stability switches, oscillatory multistability, and spatio-temporal patterns of nonlinear oscillations in recurrently delay coupled neural networks.

    Science.gov (United States)

    Song, Yongli; Makarov, Valeri A; Velarde, Manuel G

    2009-08-01

    A model of time-delay recurrently coupled, spatially segregated neural assemblies is proposed here. We show that it operates like some of the hierarchical architectures of the brain. Each assembly is a neural network with no delay in the local couplings between the units. The delay appears in the long-range feedforward and feedback inter-assembly communications. Bifurcation analysis of a simple four-unit system in the autonomous case shows the richness of the dynamical behaviors in a biophysically plausible parameter region. We find oscillatory multistability, hysteresis, and stability switches of the rest state provoked by the time delay. We then investigate the spatio-temporal patterns of bifurcating periodic solutions by using the symmetric local Hopf bifurcation theory of delay differential equations, and derive the equation describing the flow on the center manifold, which enables us to determine the direction of Hopf bifurcations and the stability of the bifurcating periodic orbits. We also discuss computational properties of the system due to the delay when an external drive of the network mimics external sensory input.

  12. [Neural basis of self-face recognition: social aspects].

    Science.gov (United States)

    Sugiura, Motoaki

    2012-07-01

    Considering the importance of the face in social survival and evidence from evolutionary psychology of visual self-recognition, it is reasonable to expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies has, however, not provided an encouraging finding in this respect. Self-face-specific activation has typically been reported in the areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential processes, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation, and in a separate experiment showed a response to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices typically showing self-face-specific activation have responded also to the face of one's close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection for physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive processes, and that, technically, the manipulation of social context is key to its success.

  13. Classification and source determination of medium petroleum distillates by chemometric and artificial neural networks: a self organizing feature approach.

    Science.gov (United States)

    Mat-Desa, Wan N S; Ismail, Dzulkiflee; NicDaeid, Niamh

    2011-10-15

    Three different medium petroleum distillate (MPD) products (white spirit, paint brush cleaner, and lamp oil) were purchased from commercial stores in Glasgow, Scotland. Samples of 10, 25, 50, 75, 90, and 95% evaporated product were prepared, resulting in 56 samples in total, which were analyzed using gas chromatography-mass spectrometry. Data sets from the chromatographic patterns were examined and preprocessed for unsupervised multivariate analyses using principal component analysis (PCA), hierarchical cluster analysis (HCA), and a self-organizing feature map (SOFM) artificial neural network. It was revealed that data sets comprising higher-boiling-point hydrocarbon compounds provided a good means for the classification of the samples and successfully linked highly weathered samples back to their unevaporated counterparts in every case. The classification abilities of SOFM were further tested and validated for their predictive power: in each case, one set of weathered data was withdrawn from the sample set and used as a test set for the retrained network. This revealed SOFM to be an outstanding mechanism for sample discrimination and linkage compared with the more conventional PCA and HCA methods often suggested for such data analysis. SOFM also has the advantage of providing additional information through the evaluation of component planes, facilitating the investigation of the underlying variables that account for the classification. © 2011 American Chemical Society
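
    A self-organizing feature map like the one used here can be implemented in a few lines: each map unit carries a weight vector, and the best-matching unit plus its neighbours are pulled toward every sample. The sketch below is a generic 1-D SOFM with invented two-dimensional "profiles" standing in for the chromatographic data, not the study's actual configuration:

```python
import math
import random

def winner(units, x):
    """Index of the unit whose weight vector is closest to x."""
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))

def train_sofm(data, n_units, dim, epochs=50, lr=0.3, radius=1.0, seed=0):
    """Minimal 1-D self-organizing feature map: the best-matching
    unit and its map neighbours are pulled toward each sample."""
    rnd = random.Random(seed)
    units = [[rnd.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            bmu = winner(units, x)
            for i in range(n_units):
                # Gaussian neighbourhood on the 1-D map lattice
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                units[i] = [u + lr * h * (v - u) for u, v in zip(units[i], x)]
    return units

# Two well-separated toy "chromatographic profiles":
data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]]
units = train_sofm(data, n_units=4, dim=2)
```

    After training, samples from the two groups map to different winning units, which is the mechanism behind the discrimination and linkage reported above; component planes are simply the per-dimension slices of the trained `units`.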

  14. Artificial neural network and falls in community-dwellers: a new approach to identify the risk of recurrent falling?

    Science.gov (United States)

    Kabeshova, Anastasiia; Launay, Cyrille P; Gromov, Vasilii A; Annweiler, Cédric; Fantino, Bruno; Beauchet, Olivier

    2015-04-01

    Identification of the risk of recurrent falls is complex in older adults. The aim of this study was to examine the efficiency of 3 artificial neural networks (ANNs: multilayer perceptron [MLP], modified MLP, and neuroevolution of augmenting topologies [NEAT]) for the classification of recurrent fallers and nonrecurrent fallers using a set of clinical characteristics corresponding to risk factors for falls measured among community-dwelling older adults. Based on a cross-sectional design, 3289 community-dwelling volunteers aged 65 and older were recruited. Age, gender, body mass index (BMI), number of drugs taken daily, use of psychoactive drugs, diphosphonate, calcium, vitamin D supplements and walking aid, fear of falling, distance vision score, Timed Up and Go (TUG) score, lower-limb proprioception, handgrip strength, depressive symptoms, cognitive disorders, and history of falls were recorded. Participants were separated into 2 groups based on the number of falls that occurred over the past year: 0 or 1 fall, and 2 or more falls. In addition, the total population was separated into training and testing subgroups for ANN analysis. Among the 3289 participants, 18.9% (n = 622) were recurrent fallers. NEAT, using 15 clinical characteristics (ie, use of walking aid, fear of falling, use of calcium, depression, use of vitamin D supplements, female, cognitive disorders, BMI 4, vision score 9 seconds, handgrip strength score ≤29 (N), and age ≥75 years), showed the best efficiency for the identification of recurrent fallers, with sensitivity (80.42%), specificity (92.54%), positive predictive value (84.38), negative predictive value (90.34), accuracy (88.39), and Cohen κ (0.74), compared with MLP and modified MLP. NEAT, using a set of 15 clinical characteristics, was an efficient ANN for the identification of recurrent fallers among older community-dwellers. Copyright © 2015 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
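
    The performance measures used to compare the three ANNs all derive from one confusion matrix. A minimal sketch; the counts below are invented for illustration and are not the study's data:

```python
def screening_metrics(tp, fp, fn, tn):
    """The diagnostic measures used to compare the ANNs, computed
    from a single confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a recurrent-faller classifier:
m = screening_metrics(tp=80, fp=15, fn=20, tn=185)
```

    With recurrent fallers making up only 18.9% of the cohort, PPV and NPV are especially informative, since they depend on this prevalence while sensitivity and specificity do not.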

  15. Energy Complexity of Recurrent Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří

    2014-01-01

    Roč. 26, č. 5 (2014), s. 953-973 ISSN 0899-7667 R&D Projects: GA ČR GAP202/10/1333 Institutional support: RVO:67985807 Keywords : neural network * finite automaton * energy complexity * optimal size Subject RIV: IN - Informatics, Computer Science Impact factor: 2.207, year: 2014

  16. Self-organizing map classifier for stressed speech recognition

    Science.gov (United States)

    Partila, Pavol; Tovarek, Jaromir; Voznak, Miroslav

    2016-05-01

    This paper presents a method for detecting speech under stress using self-organizing maps. Most people who are exposed to stressful situations cannot respond adequately to stimuli. The army, police, and fire departments account for the largest part of the occupations typified by an increased number of stressful situations. Personnel in action are directed by a control center, and control commands should be adapted to the psychological state of the person in action. It is known that psychological changes in the human body are also reflected physiologically, which consequently means that stress affects speech. Therefore, it is clear that a system for recognizing stress in speech is required in the security forces. One possible classifier, popular for its flexibility, is the self-organizing map, a type of artificial neural network. Flexibility here means the classifier's independence of the character of the input data, a feature well suited to speech processing. Human stress can be seen as a kind of emotional state. Mel-frequency cepstral coefficients, LPC coefficients, and prosody features were selected as input data because of their sensitivity to emotional changes. The parameters were calculated on speech recordings, which can be divided into two classes, namely stressed-state recordings and normal-state recordings. The contribution of the experiment is a method using a SOM classifier for stressed speech detection. The results showed the advantage of this method, namely its input data flexibility.

  17. A canonical neural mechanism for behavioral variability

    Science.gov (United States)

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-05-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5-6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these `universal' statistics.

  18. Self-organization phenomena in plasma physics

    International Nuclear Information System (INIS)

    Sanduloviciu, M.; Popescu, S.

    2001-01-01

    The self-assembly, in nature and in the laboratory, of structures in systems away from thermodynamic equilibrium is one of the problems that most fascinates scientists working in all branches of science. In this context, substantial progress has been obtained by investigating the appearance of spatial and spatiotemporal patterns in plasma. These experiments revealed the presence of a scenario of self-organization able to suggest an answer to the central problem of the 'science of complexity': why does matter transit spontaneously from a disordered into an ordered state? Based on this scenario of self-organization, we present arguments proving the possibility of explaining the challenging problems of nonequilibrium physics in general. These problems refer to: (i) the genuine origin of phase transitions observed in gaseous conductors and semiconductors; (ii) the elucidation of the role played by self-organization in the simulation of oscillations; (iii) the physical basis of anomalous transport of matter and energy, with special reference to the possibilities of improving the economic performance of fusion devices; (iv) the possibility of using self-confined gaseous space-charge configurations as an alternative to the magnetically confined plasma used at present in fusion devices. In other branches of science, as for instance in biology, the self-organization scenario reveals new insight into a mechanism able to explain the appearance of the simplest possible space-charge configuration able to evolve, under suitable conditions, into prebiotic structures. Referring to phenomena observed in nature, the same self-organization scenario suggests plausible answers to the appearance of ball lightning and also to the origin of the flickering phenomena observed in the light emission of the Sun and stars. For theory, the described self-organization scenario offers a new physical basis for many problems of nonlinear science not yet solved, and also a new model for the so-called 'self

  19. Neural activity in the reward-related brain regions predicts implicit self-esteem: A novel validity test of psychological measures using neuroimaging.

    Science.gov (United States)

    Izuma, Keise; Kennedy, Kate; Fitzjohn, Alexander; Sedikides, Constantine; Shibata, Kazuhisa

    2018-03-01

    Self-esteem, arguably the most important attitude an individual possesses, has been a premier research topic in psychology for more than a century. Following a surge of interest in implicit attitude measures in the 90s, researchers have tried to assess self-esteem implicitly to circumvent the influence of biases inherent in explicit measures. However, the validity of implicit self-esteem measures remains elusive. Critical tests are often inconclusive, as the validity of such measures is examined against the backdrop of imperfect behavioral measures. To overcome this serious limitation, we tested the neural validity of the most widely used implicit self-esteem measure, the implicit association test (IAT). Given the conceptualization of self-esteem as attitude toward the self, and neuroscience findings that the reward-related brain regions represent an individual's attitude toward or preference for an object when viewing its image, individual differences in implicit self-esteem should be associated with neural signals in the reward-related regions during passive viewing of the self-face (the most obvious representation of the self). Using multi-voxel pattern analysis (MVPA) on functional MRI (fMRI) data, we demonstrate that the neural signals in the reward-related regions were robustly associated with implicit (but not explicit) self-esteem, thus providing unique evidence for the neural validity of the self-esteem IAT. In addition, both implicit and explicit self-esteem were related, although differently, to neural signals in regions involved in self-processing. Our finding highlights the utility of neuroscience methods in addressing fundamental psychological questions and providing unique insights into important psychological constructs. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. Cognitive-Behavioral Guided Self-Help for the Treatment of Recurrent Binge Eating

    Science.gov (United States)

    Striegel-Moore, Ruth H.; Wilson, G. Terence; DeBar, Lynn; Perrin, Nancy; Lynch, Frances; Rosselli, Francine; Kraemer, Helena C.

    2010-01-01

    Objective Despite the proven efficacy of cognitive-behavioral therapy (CBT) for treating eating disorders with binge eating as the core symptom, few patients receive CBT in clinical practice. Our blended efficacy-effectiveness study sought to evaluate whether a manual-based guided self-help form of CBT (CBT-GSH), delivered in 8 sessions in a Health Maintenance Organization setting over a 12-week period by masters-level interventionists, is more effective than treatment as usual (TAU). Method In all, 123 individuals (mean age = 37.2, 91.9% female, 96.7% non-Hispanic White) were randomized, including 10.6% with bulimia nervosa (BN), 48% with binge eating disorder (BED), and 41.4% with recurrent binge eating in the absence of BN or BED. Baseline, post-treatment, and 6- and 12-month follow-up data were used in intent-to-treat analyses. At 12-month follow-up, CBT-GSH resulted in greater abstinence from binge eating (64.2%) than TAU (44.6%; Number Needed to Treat = 5), as measured by the Eating Disorder Examination (EDE, Fairburn & Cooper, 1993). Secondary outcomes reflected greater improvements in the CBT-GSH group in dietary restraint (d = .30); eating, shape, and weight concern (d's = .54, 1.01, .49; measured by the EDE-Questionnaire, Fairburn & Beglin, 2008); depression (d = .56; Beck Depression Inventory, Beck, Steer, & Garbin, 1988); and social adjustment (d = .58; Work and Social Adjustment Scale, Mundt, Marks, Shear, & Greist, 2002), but not weight change. Conclusions CBT-GSH is a viable first-line treatment option for the majority of patients with recurrent binge eating who do not meet diagnostic criteria for BN or anorexia nervosa. PMID:20515207
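
    The reported Number Needed to Treat follows directly from the two abstinence rates; a quick arithmetic check, assuming NNT is the reciprocal of the absolute risk difference (the standard definition):

```python
def number_needed_to_treat(rate_treatment, rate_control):
    """NNT = 1 / absolute risk difference: how many patients must
    receive the treatment for one additional good outcome."""
    return 1.0 / (rate_treatment - rate_control)

# Abstinence rates from the abstract: 64.2% (CBT-GSH) vs 44.6% (TAU)
nnt = number_needed_to_treat(0.642, 0.446)   # ~5.1, reported as NNT = 5
```
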

  1. A statistical framework for evaluating neural networks to predict recurrent events in breast cancer

    Science.gov (United States)

    Gorunescu, Florin; Gorunescu, Marina; El-Darzi, Elia; Gorunescu, Smaranda

    2010-07-01

    Breast cancer is the second leading cause of cancer deaths in women today. Sometimes, breast cancer can return after primary treatment. A medical diagnosis of recurrent cancer is often a more challenging task than the initial one. In this paper, we investigate the potential contribution of neural networks (NNs) to support health professionals in diagnosing such events. The NN algorithms are tested and applied to two different datasets. An extensive statistical analysis has been performed to verify our experiments. The results show that a simple network structure for both the multi-layer perceptron and the radial basis function can produce equally good results, that not all attributes are needed to train these algorithms, and, finally, that the classification performances of all algorithms are statistically robust. Moreover, we have shown that the best-performing algorithm strongly depends on the features of the datasets, and hence there is not necessarily a single best classifier.

  2. Non-Taylor magnetohydrodynamic self-organization

    International Nuclear Information System (INIS)

    Zhu, Shao-ping; Horiuchi, Ritoku; Sato, Tetsuya.

    1994-10-01

    A self-organization process in a plasma with a finite pressure is investigated by means of a three-dimensional magnetohydrodynamic simulation. It is demonstrated that a non-Taylor finite β self-organized state is realized in which a perpendicular component of the electric current is generated and the force-free (parallel) current decreases until they reach almost the same level. The self-organized state is described by an MHD force-balance relation, namely, j⊥ = (B × ∇p)/B² and j∥ = μB, where μ is not a constant, and the pressure structure resembles the structure of the toroidal magnetic field intensity. Unless an anomalous perpendicular thermal conduction arises, the plasma does not relax to a Taylor state but to a non-Taylor (non-force-free) self-organized state. This state becomes more prominent for a weaker resistivity condition. The non-Taylor state has a rather universal property, for example, independence of the initial β value. Another remarkable finding is that Taylor's conjecture of helicity conservation is, in a strict sense, not valid. Helicity dissipation occurs and its rate slows down critically in accordance with the stepwise relaxation of the magnetic energy. It is confirmed that the driven magnetic reconnection caused by the nonlinearly excited plasma kink flows plays the leading role in all of these key features of the non-Taylor self-organization. (author)

  3. De-identification of clinical notes via recurrent neural network and conditional random field.

    Science.gov (United States)

    Liu, Zengjian; Tang, Buzhou; Wang, Xiaolong; Chen, Qingcai

    2017-11-01

    De-identification, the removal of identifying information such as protected health information (PHI) from clinical data, is a critical step to enable data to be shared or published. The 2016 Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-scale and RDOC Individualized Domains (N-GRID) clinical natural language processing (NLP) challenge includes a de-identification track (track 1) focused on de-identifying electronic medical records (EMRs). The challenge organizers provide 1000 annotated mental health records for this track, 600 of which are used as a training set and 400 as a test set. We develop a hybrid system for the de-identification task on the training set. Firstly, four individual subsystems - a subsystem based on a bidirectional LSTM (long short-term memory, a variant of recurrent neural network), a subsystem based on a bidirectional LSTM with features, a subsystem based on conditional random fields (CRF), and a rule-based subsystem - are used to identify PHI instances. Then, an ensemble learning-based classifier is deployed to combine all PHI instances predicted by the three machine learning-based subsystems. Finally, the results of the ensemble learning-based classifier and the rule-based subsystem are merged together. Experiments conducted on the official test set show that our system achieves the highest micro F1-scores of 93.07%, 91.43% and 95.23% under the "token", "strict" and "binary token" criteria respectively, ranking first in the 2016 CEGS N-GRID NLP challenge. In addition, on the dataset of the 2014 i2b2 NLP challenge, our system achieves the highest micro F1-scores of 96.98%, 95.11% and 98.28% under the "token", "strict" and "binary token" criteria respectively, outperforming other state-of-the-art systems. All these experiments prove the effectiveness of our proposed method. Copyright © 2017. Published by Elsevier Inc.
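
    The merging step described above can be sketched as a simple majority vote over predicted PHI spans. This is an illustrative reconstruction, not the authors' code; the span encoding and the `merge_predictions` helper are assumptions:

```python
from collections import Counter

def merge_predictions(subsystem_outputs, min_votes=2):
    """Majority-vote merge of PHI spans predicted by several subsystems.

    Each subsystem output is a set of (start, end, phi_type) tuples.
    A span is kept if at least `min_votes` subsystems agree on it.
    """
    votes = Counter()
    for spans in subsystem_outputs:
        votes.update(spans)
    return {span for span, n in votes.items() if n >= min_votes}

# Hypothetical predictions from three subsystems (LSTM, LSTM+features, CRF)
lstm = {(0, 4, "NAME"), (10, 14, "DATE")}
lstm_feat = {(0, 4, "NAME"), (20, 25, "CITY")}
crf = {(0, 4, "NAME"), (10, 14, "DATE")}

merged = merge_predictions([lstm, lstm_feat, crf])
# (0, 4, "NAME") has 3 votes, (10, 14, "DATE") has 2, (20, 25, "CITY") has 1
```

    In the paper's pipeline, the output of such an ensemble step would then be unioned with the rule-based subsystem's predictions.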

  4. Organization of Anti-Phase Synchronization Pattern in Neural Networks: What are the Key Factors?

    Science.gov (United States)

    Li, Dong; Zhou, Changsong

    2011-01-01

    Anti-phase oscillation has been widely observed in cortical neural networks. Elucidating the mechanism underlying the organization of anti-phase patterns is of significance for better understanding more complicated pattern formation in brain networks. In dynamical systems theory, the organization of an anti-phase oscillation pattern has usually been considered to relate to time delay in coupling. This is consistent with conduction delays in real neural networks in the brain due to the finite propagation velocity of action potentials. However, other structural factors in cortical neural networks, such as modular organization (connection density) and the coupling type (excitatory or inhibitory), could also play an important role. In this work, we investigate the anti-phase oscillation pattern organized on a two-module network of either a neuronal cell model or a neural mass model, and analyze the impact of the conduction delay times, the connection densities, and the coupling types. Our results show that delay times and coupling types can play key roles in this organization. The connection densities may influence the stability of an anti-phase pattern that exists due to the other factors. Furthermore, we show that anti-phase synchronization of slow oscillations can be achieved with small delay times if there is interaction between slow and fast oscillations. These results are significant for further understanding more realistic spatiotemporal dynamics of cortico-cortical communications. PMID:22232576
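
    The claimed key role of coupling type can be illustrated with two phase oscillators: with inhibitory (negative) coupling and no delay at all, the anti-phase state is the stable one. A minimal sketch using a generic Kuramoto-style model, not the neuronal or neural mass models used in the study:

```python
import math

# Two identical phase oscillators with inhibitory (negative) coupling.
# The phase difference phi = theta2 - theta1 obeys dphi/dt = -2*K*sin(phi),
# so for K < 0 the anti-phase state phi = pi is the stable fixed point.
omega, K, dt = 1.0, -0.5, 0.01
theta1, theta2 = 0.0, 0.5          # start close to in-phase

for _ in range(20000):
    d1 = omega + K * math.sin(theta2 - theta1)
    d2 = omega + K * math.sin(theta1 - theta2)
    theta1 += dt * d1
    theta2 += dt * d2

phase_diff = (theta2 - theta1) % (2 * math.pi)
# phase_diff settles near pi: the oscillators lock in anti-phase
```

    Flipping the sign of K makes the in-phase state stable instead, which is the simplest possible demonstration that coupling type alone can select the pattern.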

  5. The attractor recurrent neural network based on fuzzy functions: An effective model for the classification of lung abnormalities.

    Science.gov (United States)

    Khodabakhshi, Mohammad Bagher; Moradi, Mohammad Hassan

    2017-05-01

    The dynamics of the respiratory system are of high significance for the detection of lung abnormalities, which highlights the importance of a reliable model for them. In this paper, we introduce a novel dynamic modelling method for the characterization of lung sounds (LS), based on the attractor recurrent neural network (ARNN). The ARNN structure allows the development of an effective LS model. Additionally, it has the capability to reproduce the distinctive features of the lung sounds using its formed attractors. Furthermore, a novel ARNN topology based on fuzzy functions (FFs-ARNN) is developed. Given the utility of recurrence quantification analysis (RQA) as a tool to assess the nature of complex systems, it was used to evaluate the performance of both the ARNN and the FFs-ARNN models. The experimental results demonstrate the effectiveness of the proposed approaches for multichannel LS analysis. In particular, a classification accuracy of 91% was achieved using FFs-ARNN with sequences of RQA features. Copyright © 2017 Elsevier Ltd. All rights reserved.
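
    As a rough illustration of the RQA features used for classification, the simplest such measure, the recurrence rate, counts how often a signal revisits a neighbourhood of an earlier value. A minimal sketch for a 1-D series, with an arbitrarily chosen threshold `eps`:

```python
import numpy as np

def recurrence_rate(x, eps):
    """Fraction of point pairs of a 1-D series that are closer than eps:
    the simplest recurrence quantification analysis (RQA) measure."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix
    R = dist < eps                           # recurrence plot (boolean matrix)
    return R.mean()

t = np.linspace(0, 8 * np.pi, 400)
rr_periodic = recurrence_rate(np.sin(t), eps=0.1)
rng = np.random.default_rng(0)
rr_noise = recurrence_rate(rng.uniform(-1, 1, 400), eps=0.1)
# the periodic signal revisits the same values far more often than noise
```

    Full RQA additionally quantifies the diagonal and vertical line structures of the matrix R (determinism, laminarity, and so on); the sketch only computes the density of recurrent points.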

  6. Behavior control in the sensorimotor loop with short-term synaptic dynamics induced by self-regulating neurons

    Directory of Open Access Journals (Sweden)

    Hazem eToutounji

    2014-05-01

    Full Text Available The behavior and skills of living systems depend on the distributed control provided by specialized and highly recurrent neural networks. Learning and memory in these systems is mediated by a set of adaptation mechanisms, known collectively as neuronal plasticity. Translating principles of recurrent neural control and plasticity to artificial agents has seen major strides, but is usually hampered by the complex interactions between the agent's body and its environment. One of the important standing issues is for the agent to support multiple stable states of behavior, so that its behavioral repertoire matches the requirements imposed by these interactions. The agent also must have the capacity to switch between these states in time scales that are comparable to those by which sensory stimulation varies. Achieving this requires a mechanism of short-term memory that allows the neurocontroller to keep track of the recent history of its input, which finds its biological counterpart in short-term synaptic plasticity. This issue is approached here by deriving synaptic dynamics in recurrent neural networks. Neurons are introduced as self-regulating units with a rich repertoire of dynamics. They exhibit homeostatic properties for certain parameter domains, which result in a set of stable states and the required short-term memory. They can also operate as oscillators, which allow them to surpass the level of activity imposed by their homeostatic operation conditions. Neural systems endowed with the derived synaptic dynamics can be utilized for the neural behavior control of autonomous mobile agents. The resulting behavior depends also on the underlying network structure, which is either engineered, or developed by evolutionary techniques. The effectiveness of these self-regulating units is demonstrated by controlling locomotion of a hexapod with eighteen degrees of freedom, and obstacle-avoidance of a wheel-driven robot.

  7. Neural and computational processes underlying dynamic changes in self-esteem

    Science.gov (United States)

    Rutledge, Robb B; Moutoussis, Michael; Dolan, Raymond J

    2017-01-01

    Self-esteem is shaped by the appraisals we receive from others. Here, we characterize neural and computational mechanisms underlying this form of social influence. We introduce a computational model that captures fluctuations in self-esteem engendered by prediction errors that quantify the difference between expected and received social feedback. Using functional MRI, we show these social prediction errors correlate with activity in ventral striatum/subgenual anterior cingulate cortex, while updates in self-esteem resulting from these errors co-varied with activity in ventromedial prefrontal cortex (vmPFC). We linked computational parameters to psychiatric symptoms using canonical correlation analysis to identify an ‘interpersonal vulnerability’ dimension. Vulnerability modulated the expression of prediction error responses in anterior insula and insula-vmPFC connectivity during self-esteem updates. Our findings indicate that updating of self-evaluative beliefs relies on learning mechanisms akin to those used in learning about others. Enhanced insula-vmPFC connectivity during updating of those beliefs may represent a marker for psychiatric vulnerability. PMID:29061228

  8. Neural and computational processes underlying dynamic changes in self-esteem.

    Science.gov (United States)

    Will, Geert-Jan; Rutledge, Robb B; Moutoussis, Michael; Dolan, Raymond J

    2017-10-24

    Self-esteem is shaped by the appraisals we receive from others. Here, we characterize neural and computational mechanisms underlying this form of social influence. We introduce a computational model that captures fluctuations in self-esteem engendered by prediction errors that quantify the difference between expected and received social feedback. Using functional MRI, we show these social prediction errors correlate with activity in ventral striatum/subgenual anterior cingulate cortex, while updates in self-esteem resulting from these errors co-varied with activity in ventromedial prefrontal cortex (vmPFC). We linked computational parameters to psychiatric symptoms using canonical correlation analysis to identify an 'interpersonal vulnerability' dimension. Vulnerability modulated the expression of prediction error responses in anterior insula and insula-vmPFC connectivity during self-esteem updates. Our findings indicate that updating of self-evaluative beliefs relies on learning mechanisms akin to those used in learning about others. Enhanced insula-vmPFC connectivity during updating of those beliefs may represent a marker for psychiatric vulnerability.
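
    The flavour of such a model can be conveyed with a delta-rule sketch: expectations about social feedback are learned from prediction errors, and momentary self-esteem moves with the same errors. The 0/1 feedback coding and the learning rates below are illustrative assumptions, not the published model's parameters:

```python
# Delta-rule sketch of the kind of model described above: self-esteem is
# nudged by social prediction errors (received minus expected feedback).

def run_trials(feedback, lr_expect=0.3, lr_esteem=0.2):
    expectation, esteem = 0.5, 0.5
    for f in feedback:                 # f = 1 approval, f = 0 disapproval
        pe = f - expectation           # social prediction error
        expectation += lr_expect * pe  # learn what feedback to expect
        esteem += lr_esteem * pe       # momentary self-esteem tracks the PE
    return expectation, esteem

exp_pos, est_pos = run_trials([1, 1, 1, 1, 1, 1])
exp_neg, est_neg = run_trials([0, 0, 0, 0, 0, 0])
# sustained approval raises both expectation and self-esteem;
# sustained rejection lowers them
```

    Note how repeated approval shrinks the prediction error, so self-esteem stabilizes once feedback becomes fully expected, which is the core behavioral signature of a prediction-error account.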

  9. The concept of self-organizing systems. Why bother?

    Science.gov (United States)

    Elverfeldt, Kirsten v.; Embleton-Hamann, Christine; Slaymaker, Olav

    2016-04-01

    Complexity theory and the concept of self-organizing systems provide a rather challenging conceptual framework for explaining earth systems change. Self-organization - understood as the aggregate processes internal to an environmental system that lead to a distinctive spatial or temporal organization - reduces the possibility of implicating a specific process as being causal, and it poses some restrictions on the idea that external drivers cause a system to change. The concept of self-organizing systems suggests that many phenomena result from an orchestration of different mechanisms, so that no causal role can be assigned to an individual factor or process. The idea that system change can be due to system-internal processes of self-organization thus proves a huge challenge to earth system research, especially in the context of global environmental change. In order to understand the concept's implications for the Earth Sciences, we need to know the characteristics of self-organizing systems and how to discern self-organizing systems. Within the talk, we aim firstly at characterizing self-organizing systems, and secondly at highlighting the advantages and difficulties of the concept within earth system sciences. The presentation concludes that: - The concept of self-organizing systems proves especially fruitful for small-scale earth surface systems. Beach cusps and patterned ground are only two of several other prime examples of self-organizing earth surface systems. They display characteristics of self-organization like (i) system-wide order from local interactions, (ii) symmetry breaking, (iii) distributed control, (iv) robustness and resilience, (v) nonlinearity and feedbacks, (vi) organizational closure, (vii) adaptation, and (viii) variation and selection. 
- It is comparatively easy to discern self-organization in small-scale systems, but to adapt the concept to larger scale systems relevant to global environmental change research is more difficult: Self-organizing

  10. Nonlinear dynamic systems identification using recurrent interval type-2 TSK fuzzy neural network - A novel structure.

    Science.gov (United States)

    El-Nagar, Ahmad M

    2018-01-01

    In this study, a novel structure of a recurrent interval type-2 Takagi-Sugeno-Kang (TSK) fuzzy neural network (FNN) is introduced for the identification of nonlinear dynamic and time-varying systems. It combines type-2 fuzzy sets (T2FSs) and a recurrent FNN to handle data uncertainties. The fuzzy firing strengths in the proposed structure are fed back to the network input as internal variables. Interval type-2 fuzzy sets (IT2FSs) are used to describe the antecedent part of each rule, while the consequent part is TSK-type: a linear function of the internal variables and the external inputs with interval weights. All type-2 fuzzy rules of the proposed RIT2TSKFNN are learned on-line through structure and parameter learning, which are performed using type-2 fuzzy clustering. The antecedent and consequent parameters of the proposed RIT2TSKFNN are updated based on a Lyapunov function to achieve network stability. The obtained results indicate that our proposed network achieves a small root mean square error (RMSE) and a small integral of square error (ISE) with a small number of rules and a small computation time compared with other type-2 FNNs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Modeling Belt-Servomechanism by Chebyshev Functional Recurrent Neuro-Fuzzy Network

    Science.gov (United States)

    Huang, Yuan-Ruey; Kang, Yuan; Chu, Ming-Hui; Chang, Yeon-Pun

    A novel Chebyshev functional recurrent neuro-fuzzy (CFRNF) network is developed from a combination of the Takagi-Sugeno-Kang (TSK) fuzzy model and the Chebyshev recurrent neural network (CRNN). The CFRNF network can emulate the nonlinear dynamics of a servomechanism system. The system nonlinearity is addressed by enhancing the input dimensions of the consequent parts in the fuzzy rules through functional expansion with Chebyshev polynomials. The backpropagation algorithm is used to adjust the parameters of the antecedent membership functions as well as those of the consequent functions. To verify the performance of the proposed CFRNF, an experiment on a belt servomechanism is presented in this paper. Identification methods based on the adaptive neuro-fuzzy inference system (ANFIS) and a recurrent neural network (RNN) are also studied for modeling the belt servomechanism. The analysis and comparison results indicate that the CFRNF makes identification of complex nonlinear dynamic systems easier, and the identification results for the belt servomechanism verify that the accuracy and convergence of the CFRNF are superior to those of ANFIS and RNN.
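
    The functional expansion step can be illustrated directly: inputs in [-1, 1] are mapped onto Chebyshev polynomial features via the standard three-term recurrence. A minimal sketch; the expansion order and feature layout are illustrative choices, not the paper's configuration:

```python
import numpy as np

def chebyshev_expand(x, order):
    """Expand inputs x (in [-1, 1]) into Chebyshev polynomial features
    T_0..T_order using the recurrence T_n = 2x T_{n-1} - T_{n-2}."""
    x = np.asarray(x, dtype=float)
    feats = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        feats.append(2 * x * feats[-1] - feats[-2])
    return np.stack(feats[: order + 1], axis=-1)

phi = chebyshev_expand(np.array([0.5]), order=3)
# T_0(0.5) = 1, T_1(0.5) = 0.5, T_2(0.5) = -0.5, T_3(0.5) = -1.0
```

    Feeding these expanded features to the consequent parts is what "enhancing the input dimensions" amounts to: a rule's linear consequent acts on the polynomial features rather than on the raw input.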

  12. Artificial neural network study on organ-targeting peptides

    Science.gov (United States)

    Jung, Eunkyoung; Kim, Junhyoung; Choi, Seung-Hoon; Kim, Minkyoung; Rhee, Hokyoung; Shin, Jae-Min; Choi, Kihang; Kang, Sang-Kee; Lee, Nam Kyung; Choi, Yun-Jaie; Jung, Dong Hyun

    2010-01-01

    We report a new approach to studying organ targeting of peptides on the basis of peptide sequence information. The positive control data sets consist of organ-targeting peptide sequences identified by the peroral phage-display technique for four organs, and the negative control data are prepared from random sequences. The capacity of our models to make appropriate predictions is validated by statistical indicators including sensitivity, specificity, enrichment curve, and the area under the receiver operating characteristic (ROC) curve (the ROC score). The VHSE descriptor produces statistically significant training models, and the models with simple neural network architectures show slightly greater predictive power than those with complex ones. The training and test set statistics indicate that our models could discriminate between organ-targeting and random sequences. We anticipate that our models will be applicable to the selection of organ-targeting peptides for generating peptide drugs or peptidomimetics.

  13. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    Science.gov (United States)

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method for detecting chronic nervous system diseases such as multiple sclerosis (MS). In this paper, multifractal methods based on Brownian motion Hölder regularity functions (polynomial, periodic (sine), and exponential) for 2D images were applied to MR brain images, aiming to easily identify distressed regions in MS patients. Based on these regions, we have proposed an MS classification built on the multifractal method using the Self-Organizing Map (SOM) algorithm. Thus, we obtained a cluster analysis by identifying pixels from distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.
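
    A minimal Self-Organizing Map of the kind used for the cluster analysis can be sketched in a few lines; the 1-D map, grid size, and annealing schedules below are illustrative choices, not the study's settings:

```python
import numpy as np

def train_som(data, grid=8, epochs=200, seed=0):
    """Minimal 1-D Self-Organizing Map: each of `grid` units has a weight
    vector; the best-matching unit (BMU) and its neighbours are pulled
    toward each sample, with shrinking radius and learning rate."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(grid, data.shape[1]))
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)
        radius = max(grid / 2 * (1 - t / epochs), 0.5)
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))
            dist = np.abs(np.arange(grid) - bmu)
            h = np.exp(-(dist ** 2) / (2 * radius ** 2))
            w += lr * h[:, None] * (x - w)
    return w

# two well-separated clusters end up mapped to different units
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
w = train_som(data)
bmu_a = np.argmin(((w - data[0]) ** 2).sum(axis=1))
bmu_b = np.argmin(((w - data[-1]) ** 2).sum(axis=1))
```

    In the study's setting, each sample would be a vector of multifractal (Hölder regularity) features per pixel or region rather than raw 2-D points.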

  14. Using Elman recurrent neural networks with conjugate gradient algorithm in determining the amount of anesthetic medicine to be applied.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm have been used in order to determine the depth of anesthesia in the continuation stage of anesthesia and to estimate the amount of medicine to be applied at that moment. Feed-forward neural networks are also used for comparison. The conjugate gradient algorithm is compared with back propagation (BP) for training the neural networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data have been recorded with a Nihon Kohden 9200 brand 22-channel EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) has been used in assembling the recording electrodes. EEG data have been recorded by being sampled once every 2 milliseconds. The artificial neural network has been designed so as to have 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The input features are the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, and the ratio of the total PSD power of the current EEG segment in that range to the total PSD power of an EEG segment taken prior to anesthesia.
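
    The defining feature of an Elman network, context units that copy back the previous hidden state, can be sketched as follows. The layer sizes match the 60-30-1 design described above, but the random weights and single forward pass are purely illustrative (the study trains the weights with the conjugate gradient algorithm):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ElmanNet:
    """Minimal Elman recurrent network: the hidden layer feeds back into
    itself through context units that store the previous hidden state."""

    def __init__(self, n_in=60, n_hid=30, n_out=1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hid, n_in))
        self.W_ctx = rng.normal(0, 0.1, (n_hid, n_hid))  # context weights
        self.W_out = rng.normal(0, 0.1, (n_out, n_hid))
        self.context = np.zeros(n_hid)

    def step(self, x):
        h = sigmoid(self.W_in @ x + self.W_ctx @ self.context)
        self.context = h          # context units copy the hidden state
        return sigmoid(self.W_out @ h)

net = ElmanNet()
rng = np.random.default_rng(1)
outputs = [net.step(rng.normal(size=60)) for _ in range(5)]
# each output is a single value in (0, 1), e.g. a normalized estimate
```

    The context feedback is what lets the network condition its estimate on the recent EEG history rather than on the current segment alone.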

  15. Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.

    Science.gov (United States)

    Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam

    2016-01-01

    We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data. We have used established information and the fundamental mathematical theory for this purpose. We have employed the Recurrent Neural Network formalism to accurately extract the underlying dynamics present in the time series expression data. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology has first been applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we have implemented our proposed framework on experimental (in vivo) datasets. Finally, we have investigated two medium-sized genetic networks (in silico) extracted from GeneNetWeaver, to understand how the proposed algorithm scales up with network size. Additionally, we have implemented our proposed algorithm with half the number of time points. The results indicate that a 50% reduction in the number of time points does not significantly affect the accuracy of the proposed methodology, with a maximum of just over 15% deterioration in the worst case.
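
    The swarm-intelligence training step can be illustrated with a minimal particle swarm optimizer applied to a toy objective standing in for the model-fitting error; the inertia and acceleration constants are common textbook values, not the paper's hybrid framework:

```python
import numpy as np

def pso(loss, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer: particles move under the pull of
    their personal best and the global best positions found so far."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# toy objective standing in for the RNN parameter-fitting error
best, best_val = pso(lambda p: float(((p - 1.0) ** 2).sum()), dim=4)
# the swarm converges close to the minimum at (1, 1, 1, 1)
```

    In the methodology above, `loss` would instead measure the mismatch between the RNN-generated expression trajectories and the observed time series, with one dimension per network weight.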

  16. Applying self-organizing map and modified radial based neural network for clustering and routing optimal path in wireless network

    Science.gov (United States)

    Hoomod, Haider K.; Kareem Jebur, Tuka

    2018-05-01

    Mobile ad hoc networks (MANETs) play a critical role in today's wireless ad hoc network research and consist of active nodes that can move freely. To address routing, a key problem in these networks, we propose a method based on a modified radial basis function network (RBFN) and a Self-Organizing Map (SOM). Because congestion affects the whole network, performance is improved by splitting the network into clusters using the SOM; clustering performance is further improved through cluster head selection and the choice of the number of clusters. The modified radial basis neural network is a simple, adaptable, and efficient method for increasing node lifetime, packet delivery ratio, and network throughput, since the selected optimal path has the best parameters among candidate paths, including the best bitrate and the longest-lived links with minimum delay. The proposed routing algorithm selects the path between two points in the wireless network based on a group of factors and parameters. The average SOM clustering time is 1-10 msec for static nodes and 8-75 msec for mobile nodes, while the routing time ranges from 92 to 510 msec. The proposed system is faster than Dijkstra's algorithm by 150-300%, and faster than the unmodified RBFNN by 145-180%.
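
    For reference, the Dijkstra baseline that the proposed router is compared against can be sketched with a few lines of standard shortest-path code (the toy graph below is hypothetical):

```python
import heapq

def dijkstra(graph, src, dst):
    """Classical lowest-cost path search. `graph` maps each node to a
    list of (neighbour, cost) pairs."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

net = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
cost, path = dijkstra(net, "A", "D")
# → cost 4 via A-B-C-D
```

    The proposed scheme differs in that edge "cost" is not a single weight but a combination of factors (bitrate, link lifetime, delay), and candidate paths are restricted by the SOM-derived clustering.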

  17. Psychosis-proneness and neural correlates of self-inhibition in theory of mind.

    Directory of Open Access Journals (Sweden)

    Lisette van der Meer

    Full Text Available Impaired Theory of Mind (ToM) has been repeatedly reported as a feature of psychotic disorders. ToM is crucial in social interactions and for the development of social behavior. It has been suggested that reasoning about the beliefs of others requires inhibition of the self-perspective. We investigated the neural correlates of self-inhibition in nineteen low psychosis prone (PP) and eighteen high PP subjects presenting with subclinical features. High PP subjects have a more than tenfold increased risk of developing a schizophrenia-spectrum disorder. Brain activation was measured with functional Magnetic Resonance Imaging during a ToM task differentiating between self-perspective inhibition and belief reasoning. Furthermore, to test underlying inhibitory mechanisms, we included a stop-signal task. We predicted worse behavioral performance for high compared to low PP subjects on both tasks. Moreover, based on previous neuroimaging results, different activation patterns were expected in the inferior frontal gyrus (IFG) in high versus low PP subjects in self-perspective inhibition and simple response inhibition. Results showed increased activation in left IFG during self-perspective inhibition, but not during simple response inhibition, for high PP subjects as compared to low PP subjects. High and low PP subjects showed equal behavioral performance. The results suggest that at a neural level, high PP subjects need more resources for inhibiting the self-perspective, but not for simple motor response inhibition, to equal the performance of low PP subjects. This may reflect a compensatory mechanism, which may no longer be available for patients with schizophrenia-spectrum disorders, resulting in ToM impairments.

  18. Psychosis-proneness and neural correlates of self-inhibition in theory of mind.

    Science.gov (United States)

    van der Meer, Lisette; Groenewold, Nynke A; Pijnenborg, Marieke; Aleman, André

    2013-01-01

    Impaired Theory of Mind (ToM) has been repeatedly reported as a feature of psychotic disorders. ToM is crucial in social interactions and for the development of social behavior. It has been suggested that reasoning about the beliefs of others requires inhibition of the self-perspective. We investigated the neural correlates of self-inhibition in nineteen low psychosis prone (PP) and eighteen high PP subjects presenting with subclinical features. High PP subjects have a more than tenfold increased risk of developing a schizophrenia-spectrum disorder. Brain activation was measured with functional Magnetic Resonance Imaging during a ToM task differentiating between self-perspective inhibition and belief reasoning. Furthermore, to test underlying inhibitory mechanisms, we included a stop-signal task. We predicted worse behavioral performance for high compared to low PP subjects on both tasks. Moreover, based on previous neuroimaging results, different activation patterns were expected in the inferior frontal gyrus (IFG) in high versus low PP subjects in self-perspective inhibition and simple response inhibition. Results showed increased activation in left IFG during self-perspective inhibition, but not during simple response inhibition, for high PP subjects as compared to low PP subjects. High and low PP subjects showed equal behavioral performance. The results suggest that at a neural level, high PP subjects need more resources for inhibiting the self-perspective, but not for simple motor response inhibition, to equal the performance of low PP subjects. This may reflect a compensatory mechanism, which may no longer be available for patients with schizophrenia-spectrum disorders, resulting in ToM impairments.

  19. Mindful attention reduces neural and self-reported cue-induced craving in smokers

    Science.gov (United States)

    Creswell, John David; Tabibnia, Golnaz; Julson, Erica; Kober, Hedy; Tindle, Hilary A.

    2013-01-01

    An emerging body of research suggests that mindfulness-based interventions may be beneficial for smoking cessation and the treatment of other addictive disorders. One way that mindfulness may facilitate smoking cessation is through the reduction of craving to smoking cues. The present work considers whether mindful attention can reduce self-reported and neural markers of cue-induced craving in treatment seeking smokers. Forty-seven (n = 47) meditation-naïve treatment-seeking smokers (12-h abstinent from smoking) viewed and made ratings of smoking and neutral images while undergoing functional magnetic resonance imaging (fMRI). Participants were trained and instructed to view these images passively or with mindful attention. Results indicated that mindful attention reduced self-reported craving to smoking images, and reduced neural activity in a craving-related region of subgenual anterior cingulate cortex (sgACC). Moreover, a psychophysiological interaction analysis revealed that mindful attention reduced functional connectivity between sgACC and other craving-related regions compared to passively viewing smoking images, suggesting that mindfulness may decouple craving neurocircuitry when viewing smoking cues. These results provide an initial indication that mindful attention may reflect a 'bottom-up' attention to one's present-moment experience in ways that can help reduce subjective and neural reactivity to smoking cues in smokers. PMID:22114078

  20. Self-teaching neural network learns difficult reactor control problem

    International Nuclear Information System (INIS)

    Jouse, W.C.

    1989-01-01

    A self-teaching neural network used as an adaptive controller quickly learns to control an unstable reactor configuration. The network models the behavior of a human operator. It is trained by allowing it to operate the reactivity control impulsively. It is punished whenever either the power or the fuel temperature strays outside technical limits. Using a simple paradigm, the network constructs an internal representation of the punishment and of the reactor system. The reactor is constrained to small power orbits.

  1. A Recurrent Neural Network Approach to Rear Vehicle Detection Which Considered State Dependency

    Directory of Open Access Journals (Sweden)

    Kayichirou Inagaki

    2003-08-01

    Full Text Available Vision-based detection often fails when the acquired image quality is degraded by changing optical environments. In addition, the shape of vehicles in images taken from vision sensors changes as the target vehicle approaches. Vehicle detection methods are required to perform successfully under these conditions. However, conventional methods do not cope well with rapidly varying brightness conditions in particular. We suggest a new method for monocular vision-based vehicle detection that compensates for those conditions. The suggested method employs a Recurrent Neural Network (RNN), which has been applied to spatiotemporal processing. The RNN is able to respond to consecutive scenes involving the target vehicle and can track the movements of the target through the effect of past network states. The suggested method is particularly beneficial in environments with sudden, extreme variations such as bright sunlight and shade. Finally, we demonstrate the effectiveness of the state dependency of the RNN-based method by comparing its detection results with those of a Multi-Layer Perceptron (MLP).

  2. An interpretable LSTM neural network for autoregressive exogenous model

    OpenAIRE

    Guo, Tian; Lin, Tao; Lu, Yao

    2018-01-01

    In this paper, we propose an interpretable LSTM recurrent neural network, i.e., multi-variable LSTM for time series with exogenous variables. Currently, widely used attention mechanism in recurrent neural networks mostly focuses on the temporal aspect of data and falls short of characterizing variable importance. To this end, our multi-variable LSTM equipped with tensorized hidden states is developed to learn variable specific representations, which give rise to both temporal and variable lev...

  3. Built-in self-repair of VLSI memories employing neural nets

    Science.gov (United States)

    Mazumder, Pinaki

    1998-10-01

    The decades of the Eighties and the Nineties have witnessed the spectacular growth of VLSI technology, with chip size increasing from a few hundred devices to a staggering multi-million transistors. This trend is expected to continue as the CMOS feature size progresses towards the nanometric dimension of 100 nm and less. The SIA roadmap projects that, whereas DRAM chips will integrate over 20 billion devices in the next millennium, future microprocessors may incorporate over 100 million transistors on a single chip. As VLSI chip sizes increase, the limited accessibility of circuit components poses great difficulty for external diagnosis and replacement in the presence of faulty components. For this reason, extensive work has been done on built-in self-test techniques, but little research is known concerning built-in self-repair. Moreover, the extra hardware introduced by conventional fault-tolerance techniques is itself likely to become faulty, rendering the circuit useless. This research demonstrates the feasibility of implementing electronic neural networks as intelligent hardware for memory array repair. Most importantly, we show that the neural network control possesses a robust and gracefully degradable computing capability under various fault conditions. Overall, a yield analysis performed on 64K DRAMs shows that the self-repair design can improve yield from as low as 20 percent to near 99 percent, with an overhead of no more than 7 percent.

  4. Phase transitions and self-organized criticality in networks of stochastic spiking neurons.

    Science.gov (United States)

    Brochini, Ludmila; de Andrade Costa, Ariadne; Abadi, Miguel; Roque, Antônio C; Stolfi, Jorge; Kinouchi, Osame

    2016-11-07

    Phase transitions and critical behavior are crucial issues both in theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks with general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(V) of the membrane potential V, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the continuous transition critical boundary, neuronal avalanches occur whose distributions of size and duration are given by power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce SOC: the use of dynamic neuronal gains - a form of short-term plasticity probably located at the axon initial segment (AIS) - instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call SOSC, in accord with some intuitions of Alan Turing.
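    The firing rule described above is easy to illustrate: each neuron fires stochastically with probability Φ(V) rather than at a sharp threshold. The following toy simulation (a sketch with an assumed clipped-linear Φ and random excitatory weights, not the paper's exact model) shows the basic update loop:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                                    # number of neurons
W = rng.exponential(1.0 / N, size=(N, N))  # random excitatory weights
np.fill_diagonal(W, 0.0)                   # no self-connections

def phi(V, gain=1.0):
    # smooth, saturating firing function Phi(V); the clipped-linear
    # form is an assumption, not the specific Phi used in the paper
    return np.clip(gain * V, 0.0, 1.0)

V = rng.uniform(0.0, 0.5, size=N)
rates = []
for t in range(500):
    spikes = (rng.random(N) < phi(V)).astype(float)  # stochastic firing
    V = W @ spikes            # membrane potential from recurrent input
    rates.append(spikes.mean())

print(f"mean firing rate: {np.mean(rates):.3f}")
```

    Because each row of W sums to roughly one, the toy network sits near the critical average synaptic weight, where activity neither dies out immediately nor saturates.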

  5. Auto-Associative Recurrent Neural Networks and Long Term Dependencies in Novelty Detection for Audio Surveillance Applications

    Science.gov (United States)

    Rossi, A.; Montefoschi, F.; Rizzo, A.; Diligenti, M.; Festucci, C.

    2017-10-01

    Machine Learning applied to Automatic Audio Surveillance has been attracting increasing attention in recent years. In spite of several investigations based on a large number of different approaches, little attention has been paid to the temporal evolution of the environmental input signal. In this work, we propose an exploration in this direction, comparing the temporal correlations extracted at the feature level with those learned by a representational structure. To this aim we analysed the prediction performance of a Recurrent Neural Network architecture while varying the length of the processed input sequence and the size of the time window used in feature extraction. Results corroborated the hypothesis that sequential models work better when dealing with data characterized by temporal order. However, the optimization of the temporal dimension remains an open issue.

  6. Neural Correlates of Biased Responses: The Negative Method Effect in the Rosenberg Self-Esteem Scale Is Associated with Right Amygdala Volume.

    Science.gov (United States)

    Wang, Yinan; Kong, Feng; Huang, Lijie; Liu, Jia

    2016-10-01

    Self-esteem is a widely studied construct in psychology that is typically measured by the Rosenberg Self-Esteem Scale (RSES). However, a series of cross-sectional and longitudinal studies have suggested that a simple and widely used unidimensional factor model does not provide an adequate explanation of RSES responses due to method effects. To identify the neural correlates of the method effect, we sought to determine whether and how method effects were associated with the RSES and investigate the neural basis of these effects. Two hundred and eighty Chinese college students (130 males; mean age = 22.64 years) completed the RSES and underwent magnetic resonance imaging (MRI). Behaviorally, method effects were linked to both positively and negatively worded items in the RSES. Neurally, the right amygdala volume negatively correlated with the negative method factor, while the hippocampal volume positively correlated with the general self-esteem factor in the RSES. The neural dissociation between the general self-esteem factor and negative method factor suggests that there are different neural mechanisms underlying them. The amygdala is involved in modulating negative affectivity; therefore, the current study sheds light on the nature of method effects that are related to self-report with a mix of positively and negatively worded items. © 2015 Wiley Periodicals, Inc.

  7. Artificial neural network simulation of battery performance

    Energy Technology Data Exchange (ETDEWEB)

    O'Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

    Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, the myriad chemical and physical processes, including their interactions, are much more difficult to represent accurately. Within this category are the diffusive and solubility characteristics of individual species; the reaction kinetics and mechanisms of primary chemical species as well as intermediates; and the growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has been only partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been used successfully to develop accurate representations of battery behavior. The connectionist normalized linear spline (CMLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back-propagation network to map the "state" of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.

  8. A Discrete-Time Recurrent Neural Network for Solving Rank-Deficient Matrix Equations With an Application to Output Regulation of Linear Systems.

    Science.gov (United States)

    Liu, Tao; Huang, Jie

    2017-04-17

    This paper presents a discrete-time recurrent neural network approach to solving systems of linear equations with two features. First, the system of linear equations may not have a unique solution. Second, the system matrix is not known precisely, but a sequence of matrices that converges to the unknown system matrix exponentially is known. The problem is motivated from solving the output regulation problem for linear systems. Thus, an application of our main result leads to an online solution to the output regulation problem for linear systems.
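    The idea of solving a linear system with a discrete-time recurrent iteration can be sketched with a plain gradient update, x_{k+1} = x_k − α Aᵀ(A x_k − b), which converges to a least-squares solution even when A is rank-deficient. This is only an illustrative stand-in for the network in the paper, which additionally handles a sequence of matrices converging to the unknown system matrix:

```python
import numpy as np

def solve_iterative(A, b, alpha=0.05, steps=5000):
    """Discrete-time gradient iteration x_{k+1} = x_k - alpha * A^T (A x_k - b).

    From x_0 = 0 the iterates stay in the row space of A, so this
    converges to the minimum-norm least-squares solution even when A
    is rank-deficient (a sketch, not the paper's network).
    """
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - alpha * A.T @ (A @ x - b)
    return x

# rank-deficient system: the second row is a multiple of the first
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])     # consistent right-hand side
x = solve_iterative(A, b)
print(A @ x)                  # close to b
```

    The step size must satisfy α < 2/λ_max(AᵀA) for convergence; here λ_max = 25, so α = 0.05 is safely inside that bound.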

  9. The self-organizing map, a new approach to apprehend the Madden–Julian Oscillation influence on the intraseasonal variability of rainfall in the southern African region

    CSIR Research Space (South Africa)

    Oettli, P

    2013-11-01

    Full Text Available This study applies a non-linear classification method, the self-organizing map (SOM), a type of artificial neural network used to produce a low-dimensional representation of high-dimensional datasets, to capture more accurately the life cycle of the MJO and its global impacts...
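    The SOM mentioned here maps high-dimensional inputs onto a low-dimensional grid by pulling each node's weight vector toward inputs near the best-matching unit. A minimal generic training loop (the grid size, decay schedules, and toy data below are illustrative assumptions, not the study's configuration) looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0):
    """Minimal self-organizing map: each grid node holds a weight vector
    pulled toward inputs near its best-matching unit (a generic sketch)."""
    h, w = grid
    n_nodes, dim = h * w, data.shape[1]
    weights = rng.random((n_nodes, dim))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)              # decaying learning rate
            sigma = sigma0 * (1 - step / n_steps) + 0.5  # shrinking neighbourhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nb = np.exp(-d2 / (2 * sigma ** 2))          # neighbourhood kernel
            weights += lr * nb[:, None] * (x - weights)
            step += 1
    return weights

data = rng.random((200, 3))        # toy high-dimensional inputs
som = train_som(data)
print(som.shape)                    # 36 nodes, each a 3-d prototype
```

    After training, each input maps to its best-matching node, giving the low-dimensional representation the abstract refers to.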

  10. A neural network based seafloor classification using acoustic backscatter

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    This paper presents the results of a study of two Artificial Neural Network (ANN) architectures [the Self-Organizing Map (SOM) and the Multi-Layer Perceptron (MLP)] using single-beam echosounding data. The single-beam echosounder, operable at 12 kHz, has been used...

  11. Distributed Recurrent Neural Forward Models with Neural Control for Complex Locomotion in Walking Robots

    DEFF Research Database (Denmark)

    Dasgupta, Sakyasingha; Goldschmidt, Dennis; Wörgötter, Florentin

    2015-01-01

    Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental... conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanism thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain... here, an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated...

  12. Direct Neural Conversion from Human Fibroblasts Using Self-Regulating and Nonintegrating Viral Vectors

    Directory of Open Access Journals (Sweden)

    Shong Lau

    2014-12-01

    Full Text Available Summary: Recent findings show that human fibroblasts can be directly reprogrammed into functional neurons without passing through a proliferative stem cell intermediate. These findings open up the possibility of generating subtype-specific neurons of human origin for therapeutic use from fetal cells, from patients themselves, or from matched donors. In this study, we present an improved system for direct neural conversion of human fibroblasts. The neural reprogramming genes are regulated by the neuron-specific microRNA, miR-124, such that each cell turns off expression of the reprogramming genes once it has reached a stable neuronal fate. The regulated system can be combined with integrase-deficient vectors, providing a nonintegrative and self-regulated conversion system that avoids the problems associated with the integration of viral transgenes into the host genome. These modifications make the system suitable for clinical use and therefore represent a major step forward in the development of induced neurons for cell therapy. Lau et al. now use miRNA targeting to build a self-regulating neural conversion system. Combined with nonintegrating vectors, this system can efficiently drive conversion of human fibroblasts into functional induced neurons (iNs) suitable for clinical applications.

  13. Organizing artistic activities in a recurrent manner : (on the nature of) entrepreneurship in the performing arts

    NARCIS (Netherlands)

    Bergamini, Michela; Van de Velde, Ward; Van Looy, Bart; Visscher, Klaasjan

    2017-01-01

    A majority of performing arts organizations active in classical music, theatre, and contemporary dance rely on funding from "third parties" in order to organize productions in a recurrent manner. We adopt an entrepreneurial perspective to inform the debate on the economic sustainability of

  14. Modeling and control of magnetorheological fluid dampers using neural networks

    Science.gov (United States)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper online, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  15. Self-learning Monte Carlo with deep neural networks

    Science.gov (United States)

    Shen, Huitao; Liu, Junwei; Fu, Liang

    2018-05-01

    The self-learning Monte Carlo (SLMC) method is a general algorithm to speed up MC simulations. Its efficiency has been demonstrated in various systems by introducing an effective model to propose global moves in the configuration space. In this paper, we show that deep neural networks can be naturally incorporated into SLMC, and without any prior knowledge can learn the original model accurately and efficiently. Demonstrated on quantum impurity models, we reduce the complexity of a local update from O(β²) in the Hirsch-Fye algorithm to O(β ln β), which is a significant speedup, especially for systems at low temperatures.
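    The core of SLMC is a proposal/correction split: a cheap effective model drives a long inner Markov chain to generate a global move, and the exact model enters only once, in an acceptance ratio that corrects for the model mismatch so sampling remains exact. A toy one-dimensional sketch (with a hand-picked quadratic effective model standing in for the learned network) illustrates the rule:

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0

def E_exact(x):      # "expensive" true energy (toy double well)
    return x**4 - 2 * x**2

def E_eff(x):        # cheap effective model; in SLMC this would be
    return x**2      # learned from data, e.g. by a neural network

def slmc_step(x, n_inner=20, step=0.5):
    # inner chain under the effective model produces a global move
    y = x
    for _ in range(n_inner):
        z = y + step * rng.normal()
        if rng.random() < np.exp(-beta * (E_eff(z) - E_eff(y))):
            y = z
    # SLMC acceptance ratio corrects for the effective-model mismatch,
    # so the outer chain samples the exact Boltzmann distribution
    ratio = np.exp(-beta * ((E_exact(y) - E_exact(x))
                            - (E_eff(y) - E_eff(x))))
    return y if rng.random() < min(1.0, ratio) else x

x, samples = 0.0, []
for _ in range(3000):
    x = slmc_step(x)
    samples.append(x)
print(f"mean energy: {np.mean([E_exact(s) for s in samples]):.3f}")
```

    The speedup comes from evaluating the expensive energy only once per global move, while the inner chain touches only the cheap effective model.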

  16. Development of test algorithm for semiconductor package with defects by using probabilistic neural network

    International Nuclear Information System (INIS)

    Kim, Jae Yeol; Sim, Jae Gi; Ko, Myoung Soo; Kim, Chang Hyun; Kim, Hun Cho

    2001-01-01

    In this study, the researchers developed an algorithm for estimating artificial defects in semiconductor packages and applied it using pattern recognition technology. For this purpose, software implementing the estimation algorithm was written in MATLAB. The software consists of several procedures, including ultrasonic image acquisition, equalization filtering, a Self-Organizing Map, and a Probabilistic Neural Network; the Self-Organizing Map and the Probabilistic Neural Network are both neural network methods. The pattern recognition technology was applied to classify three kinds of defect patterns in semiconductor packages. The method estimates a probability density function from a learning sample and determines it automatically. The PNN can distinguish flaws that are very difficult to discriminate, and because it processes in parallel, we confirm that it is a very efficient classifier when applied to large amounts of real data.

  17. Neural representations of the self and the mother for Chinese individuals.

    Directory of Open Access Journals (Sweden)

    Gaowa Wuyun

    Full Text Available An important question in social neuroscience concerns the similarities and differences between the neural representations of the self and close others. Most studies examining this topic have identified the medial prefrontal cortex (MPFC) as the primary region involved in this process. However, several studies have reported conflicting data, making further investigation of this topic very important. In this functional magnetic resonance imaging (fMRI) study, we investigated brain activity in the anterior cingulate cortex (ACC) while Chinese participants passively listened to their self-name (SN), their mother's name (MN), and unknown names (UN). The results showed that, compared with UN recognition, SN perception was associated with robust activation in a widely distributed bilateral network, including the cortical midline structures (the MPFC and ACC), the inferior frontal gyrus, and the middle temporal gyrus. The SN invoked the bilateral superior temporal gyrus in contrast to the MN; MN recognition provoked stronger activation in central and posterior brain regions in contrast to SN recognition. The SN and MN activated overlapping areas, namely the ACC, MPFC, and superior frontal gyrus. These results suggest that Chinese individuals utilize certain common brain regions in processing both the SN and the MN. The present findings provide evidence for the neural basis of the self and close others in Chinese individuals.

  18. Combination of Deep Recurrent Neural Networks and Conditional Random Fields for Extracting Adverse Drug Reactions from User Reviews.

    Science.gov (United States)

    Tutubalina, Elena; Nikolenko, Sergey

    2017-01-01

    Adverse drug reactions (ADRs) are an essential part of the analysis of drug use, measuring drug use benefits, and making policy decisions. Traditional channels for identifying ADRs are reliable but very slow and only produce a small amount of data. Text reviews, either on specialized web sites or in general-purpose social networks, may lead to a data source of unprecedented size, but identifying ADRs in free-form text is a challenging natural language processing problem. In this work, we propose a novel model for this problem, uniting recurrent neural architectures and conditional random fields. We evaluate our model with a comprehensive experimental study, showing improvements over state-of-the-art methods of ADR extraction.

  19. Combination of Deep Recurrent Neural Networks and Conditional Random Fields for Extracting Adverse Drug Reactions from User Reviews

    Directory of Open Access Journals (Sweden)

    Elena Tutubalina

    2017-01-01

    Full Text Available Adverse drug reactions (ADRs) are an essential part of the analysis of drug use, measuring drug use benefits, and making policy decisions. Traditional channels for identifying ADRs are reliable but very slow and only produce a small amount of data. Text reviews, either on specialized web sites or in general-purpose social networks, may lead to a data source of unprecedented size, but identifying ADRs in free-form text is a challenging natural language processing problem. In this work, we propose a novel model for this problem, uniting recurrent neural architectures and conditional random fields. We evaluate our model with a comprehensive experimental study, showing improvements over state-of-the-art methods of ADR extraction.

  20. Applying long short-term memory recurrent neural networks to intrusion detection

    Directory of Open Access Journals (Sweden)

    Ralf C. Staudemeyer

    2015-07-01

    Full Text Available We claim that modelling network traffic as a time series with a supervised learning approach, using known genuine and malicious behaviour, improves intrusion detection. To substantiate this, we trained long short-term memory (LSTM) recurrent neural networks on the training data provided by the DARPA / KDD Cup '99 challenge. To identify suitable LSTM-RNN network parameters and structure we experimented with various network topologies. We found that networks with four memory blocks containing two cells each offer a good compromise between computational cost and detection performance. We applied forget gates and shortcut connections. A learning rate of 0.1 and up to 1,000 epochs showed good results. We tested the performance on all features and on extracted minimal feature sets. We evaluated different feature sets for the detection of all attacks within one network and also trained networks specialised on individual attack classes. Our results show that the LSTM classifier provides superior performance in comparison to previously published results of strong static classifiers. With 93.82% accuracy and 22.13 cost, LSTM outperforms the winning entries of the KDD Cup '99 challenge by far. This is due to the fact that LSTM learns to look back in time and correlate consecutive connection records. For the first time, we have demonstrated the usefulness of LSTM networks for intrusion detection.
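    The reason an LSTM can "look back in time and correlate consecutive connection records" is its gated cell state, which carries information across the sequence. A minimal numpy forward pass (a generic single-layer cell with random weights; the study's specific topology of four memory blocks with two cells each is not reproduced) shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(3)

def lstm_forward(xs, n_hidden=8):
    """Single-layer LSTM forward pass over a sequence of feature vectors
    (a generic sketch with untrained random weights)."""
    n_in = xs.shape[1]
    # one weight matrix per gate: input, forget, output, candidate
    W = rng.normal(0, 0.1, size=(4, n_hidden, n_in + n_hidden))
    b = np.zeros((4, n_hidden))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    for x in xs:
        z = np.concatenate([x, h])
        i = sigmoid(W[0] @ z + b[0])      # input gate
        f = sigmoid(W[1] @ z + b[1])      # forget gate
        o = sigmoid(W[2] @ z + b[2])      # output gate
        g = np.tanh(W[3] @ z + b[3])      # candidate cell state
        c = f * c + i * g                 # cell state carries history
        h = o * np.tanh(c)
    return h                              # summary of the whole sequence

records = rng.random((50, 10))            # toy sequence of connection records
h = lstm_forward(records)
print(h.shape)
```

    In a classifier such as the one in this study, the final hidden state (or the per-step states) would feed a trained output layer that labels the connection as genuine or malicious.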

  1. Enhancement of signal sensitivity in a heterogeneous neural network refined from synaptic plasticity

    Energy Technology Data Exchange (ETDEWEB)

    Li Xiumin; Small, Michael, E-mail: ensmall@polyu.edu.h, E-mail: 07901216r@eie.polyu.edu.h [Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong)

    2010-08-15

    Long-term synaptic plasticity induced by neural activity is of great importance in informing the formation of neural connectivity and the development of the nervous system. It is reasonable to consider self-organized neural networks instead of prior imposition of a specific topology. In this paper, we propose a novel network evolved from two stages of the learning process, which are respectively guided by two experimentally observed synaptic plasticity rules, i.e. the spike-timing-dependent plasticity (STDP) mechanism and the burst-timing-dependent plasticity (BTDP) mechanism. Due to the existence of heterogeneity in neurons that exhibit different degrees of excitability, a two-level hierarchical structure is obtained after the synaptic refinement. This self-organized network shows higher sensitivity to afferent current injection compared with alternative archetypal networks with different neural connectivity. Statistical analysis also demonstrates that it has the small-world properties of small shortest path length and high clustering coefficients. Thus the selectively refined connectivity enhances the ability of neuronal communications and improves the efficiency of signal transmission in the network.

  2. Enhancement of signal sensitivity in a heterogeneous neural network refined from synaptic plasticity

    International Nuclear Information System (INIS)

    Li Xiumin; Small, Michael

    2010-01-01

    Long-term synaptic plasticity induced by neural activity is of great importance in informing the formation of neural connectivity and the development of the nervous system. It is reasonable to consider self-organized neural networks instead of prior imposition of a specific topology. In this paper, we propose a novel network evolved from two stages of the learning process, which are respectively guided by two experimentally observed synaptic plasticity rules, i.e. the spike-timing-dependent plasticity (STDP) mechanism and the burst-timing-dependent plasticity (BTDP) mechanism. Due to the existence of heterogeneity in neurons that exhibit different degrees of excitability, a two-level hierarchical structure is obtained after the synaptic refinement. This self-organized network shows higher sensitivity to afferent current injection compared with alternative archetypal networks with different neural connectivity. Statistical analysis also demonstrates that it has the small-world properties of small shortest path length and high clustering coefficients. Thus the selectively refined connectivity enhances the ability of neuronal communications and improves the efficiency of signal transmission in the network.

  3. Relation Classification via Recurrent Neural Network

    OpenAIRE

    Zhang, Dongxu; Wang, Dong

    2015-01-01

    Deep learning has gained much success in sentence-level relation classification. For example, convolutional neural networks (CNN) have delivered competitive performance without much effort on feature engineering, compared with conventional pattern-based methods. Thus a lot of work has been produced based on CNN structures. However, a key issue that has not been well addressed by CNN-based methods is the lack of capability to learn temporal features, especially long-distance dependency between no...

  4. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    Science.gov (United States)

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.
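    The error-feedback idea, feeding the previous prediction error back in as an extra input, can be illustrated with a simple online one-step-ahead predictor (a linear sketch on a toy noisy sine series, not the ridge polynomial architecture of RPNN-EF):

```python
import numpy as np

rng = np.random.default_rng(4)

# toy series: noisy sine wave
t = np.arange(400)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)

# one-step-ahead predictor whose inputs include the previous prediction
# error (the error-feedback idea in linear form)
w = np.zeros(3)              # weights for [y_{t-1}, previous error, bias]
lr = 0.05
err_prev, sq_errors = 0.0, []
for k in range(1, len(series)):
    x = np.array([series[k - 1], err_prev, 1.0])
    y_hat = w @ x
    err = series[k] - y_hat
    w += lr * err * x        # online gradient step on the squared error
    err_prev = err
    sq_errors.append(err**2)

rmse = np.sqrt(np.mean(sq_errors[-100:]))
print(f"RMSE over last 100 steps: {rmse:.3f}")
```

    Swapping the error input for the previous network output gives the more common output-feedback recurrence the abstract contrasts against.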

  5. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    Directory of Open Access Journals (Sweden)

    Waddah Waheeb

    Full Text Available Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  6. Neural correlates of self-deception and impression-management.

    Science.gov (United States)

    Farrow, Tom F D; Burgess, Jenny; Wilkinson, Iain D; Hunter, Michael D

    2015-01-01

    Self-deception and impression-management comprise two types of deceptive, but generally socially acceptable behaviours, which are common in everyday life as well as being present in a number of psychiatric disorders. We sought to establish and dissociate the 'normal' brain substrates of self-deception and impression-management. Twenty healthy participants underwent fMRI scanning at 3T whilst completing the 'Balanced Inventory of Desirable Responding' test under two conditions: 'fake good', giving the most desirable impression possible and 'fake bad' giving an undesirable impression. Impression-management scores were more malleable to manipulation via 'faking' than self-deception scores. Response times to self-deception questions and 'fake bad' instructions were significantly longer than to impression-management questions and 'fake good' instructions respectively. Self-deception and impression-management manipulation and 'faking bad' were associated with activation of medial prefrontal cortex (mPFC) and left ventrolateral prefrontal cortex (vlPFC). Impression-management manipulation was additionally associated with activation of left dorsolateral prefrontal cortex and left posterior middle temporal gyrus. 'Faking bad' was additionally associated with activation of right vlPFC, left temporo-parietal junction and right cerebellum. There were no supra-threshold activations associated with 'faking good'. Our neuroimaging data suggest that manipulating self-deception and impression-management and more specifically 'faking bad' engages a common network comprising mPFC and left vlPFC. Shorter response times and lack of dissociable neural activations suggests that 'faking good', particularly when it comes to impression-management, may be our most practiced 'default' mode. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Self-tuning control of a nuclear reactor using a Gaussian function neural network

    International Nuclear Information System (INIS)

    Park, M.G.; Cho, N.Z.

    1995-01-01

    A self-tuning control method is described for a nuclear reactor system that requires only a set of input-output measurements. The use of an artificial neural network in nonlinear model-based adaptive control, both as a plant model and a controller, is investigated. A neural network called a Gaussian function network is used for one-step-ahead predictive control to track the desired plant output. The effectiveness of the controller is demonstrated by the application of the method to the power tracking control of the Korea Multipurpose Research Reactor
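    The one-step-ahead predictive control scheme described here can be sketched as: identify a Gaussian-function network model of the plant from input-output data, then at each step choose the control input whose predicted output is closest to the setpoint. The toy first-order plant, centre grid, and setpoint below are illustrative assumptions, not the reactor model from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# toy plant: y_{k+1} = 0.8*y_k + 0.4*u_k (unknown to the controller)
plant = lambda y, u: 0.8 * y + 0.4 * u

# Gaussian function network: fixed centres, linear read-out trained by
# least squares (a sketch of the idea, not the paper's architecture)
centres = np.array([(y, u) for y in np.linspace(-1, 1, 5)
                            for u in np.linspace(-1, 1, 5)])
width = 0.5

def features(y, u):
    d2 = ((centres - [y, u]) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * width**2))

# identification from random input-output measurements
Y, U = rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300)
Phi = np.array([features(y, u) for y, u in zip(Y, U)])
theta, *_ = np.linalg.lstsq(Phi, plant(Y, U), rcond=None)

# one-step-ahead control: pick u driving the predicted output to the setpoint
def control(y, setpoint, grid=np.linspace(-1, 1, 201)):
    preds = np.array([features(y, u) @ theta for u in grid])
    return grid[np.argmin(np.abs(preds - setpoint))]

y, setpoint = 0.0, 0.5
for _ in range(30):
    u = control(y, setpoint)
    y = plant(y, u)
print(f"output after 30 steps: {y:.3f}")
```

    Because only input-output measurements are used, the same loop works when the plant model is unknown, which is the point of the self-tuning approach.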

  8. Navigating recurrent abdominal pain through clinical clues, red flags, and initial testing.

    Science.gov (United States)

    Noe, Joshua D; Li, B U K

    2009-05-01

    Recurrent abdominal pain is a common chronic complaint that presents to your office. The constant challenge is distinguishing the few with organic disease from the majority who have a functional pain disorder, including functional dyspepsia, irritable bowel syndrome, functional abdominal pain, and abdominal migraine. Beginning with a detailed history and physical exam, you can: 1) apply the symptom-based Rome III criteria to positively identify a functional disorder, and 2) filter these findings through the diagnostic clues and red flags that point toward specific organic disease and/or further testing. Once a functional diagnosis has been made or an organic disease is suspected, you can initiate a self-limited empiric therapeutic trial. With this diagnostic approach, you should feel confident navigating the initial evaluation, management, and consultation referral for a child or adolescent with recurrent abdominal pain.

  9. Hierarchical modular structure enhances the robustness of self-organized criticality in neural networks

    International Nuclear Information System (INIS)

    Wang Shengjun; Zhou Changsong

    2012-01-01

    One of the most prominent architectural properties of neural networks in the brain is the hierarchical modular structure. How does this structural property constrain or improve brain function? It is thought that operating near criticality can be beneficial for brain function. Here, we find that networks with modular structure can extend the parameter region of coupling strength over which critical states are reached, compared to non-modular networks. Moreover, we find that one aspect of network function, the dynamical range, is highest in the same parameter region. Thus, hierarchical modularity enhances the robustness of criticality as well as function. However, too much modularity constrains function by preventing the neural networks from reaching critical states, because the modular structure limits the spreading of avalanches. Our results suggest that the brain may take advantage of the hierarchical modular structure to attain criticality and enhanced function. (paper)
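The networks studied in the paper are hierarchical modular; as a minimal caricature of the criticality aspect only, a branching process shows how avalanche sizes grow as the branching ratio approaches the critical value of 1 (the model and parameters here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(4)

def avalanche_size(sigma, cap=10_000):
    """Total activity of one avalanche in a branching process with ratio sigma."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = rng.poisson(active * sigma)  # each active unit excites sigma others on average
    return size

# Mean avalanche size grows sharply as the branching ratio approaches 1
# (for a subcritical branching process it is 1 / (1 - sigma)).
mean_size = {s: np.mean([avalanche_size(s) for _ in range(2000)]) for s in (0.6, 0.95)}
print(mean_size)
```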

  10. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  11. ORGANIC ELECTRODE COATINGS FOR NEXT-GENERATION NEURAL INTERFACES

    Directory of Open Access Journals (Sweden)

    Ulises A Aregueta-Robles

    2014-05-01

    Traditional neuronal interfaces utilize metallic electrodes which in recent years have reached a plateau in terms of the ability to provide safe stimulation at high resolution or rather with high densities of microelectrodes with improved spatial selectivity. To achieve higher resolution it has become clear that reducing the size of electrodes is required to enable higher electrode counts from the implant device. The limitations of interfacing electrodes including low charge injection limits, mechanical mismatch and foreign body response can be addressed through the use of organic electrode coatings which typically provide a softer, more roughened surface to enable both improved charge transfer and lower mechanical mismatch with neural tissue. Coating electrodes with conductive polymers or carbon nanotubes offers a substantial increase in charge transfer area compared to conventional platinum electrodes. These organic conductors provide safe electrical stimulation of tissue while avoiding undesirable chemical reactions and cell damage. However, the mechanical properties of conductive polymers are not ideal, as they are quite brittle. Hydrogel polymers present a versatile coating option for electrodes as they can be chemically modified to provide a soft and conductive scaffold. However, the in vivo chronic inflammatory response of these conductive hydrogels remains unknown. A more recent approach proposes tissue engineering the electrode interface through the use of encapsulated neurons within hydrogel coatings. This approach may provide a method for activating tissue at the cellular scale, however several technological challenges must be addressed to demonstrate feasibility of this innovative idea. The review focuses on the various organic coatings which have been investigated to improve neural interface electrodes.

  12. Recurrent fuzzy neural network backstepping control for the prescribed output tracking performance of nonlinear dynamic systems.

    Science.gov (United States)

    Han, Seong-Ik; Lee, Jang-Myung

    2014-01-01

    This paper proposes a backstepping control system that uses a tracking error constraint and recurrent fuzzy neural networks (RFNNs) to achieve a prescribed tracking performance for a strict-feedback nonlinear dynamic system. A new constraint variable was defined to generate the virtual control that forces the tracking error to fall within prescribed boundaries. An adaptive RFNN was also used to obtain the required improvement in approximation performance in order to avoid calculating the explosive number of terms generated by the recursive steps of traditional backstepping control. The boundedness and convergence of the closed-loop system were confirmed based on Lyapunov stability theory. The prescribed performance of the proposed control scheme was validated by using it to control the prescribed error of a nonlinear system and a robot manipulator. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Recurrent Selection and Participatory Plant Breeding for Improvement of Two Organic Open-Pollinated Sweet Corn (Zea mays L.) Populations

    Directory of Open Access Journals (Sweden)

    Adrienne C. Shelton

    2015-04-01

    Organic growers face unique challenges when raising sweet corn, and benefit from varieties that maintain high eating quality, germinate consistently, deter insect pests, and resist diseases. Genotype by environment rank changes can occur in the performance of cultivars grown on conventional and organic farms, yet few varieties have been bred specifically for organic systems. The objective of this experiment was to evaluate the changes made to open-pollinated sweet corn populations using recurrent selection and a participatory plant breeding (PPB) methodology. From 2008 to 2011, four cycles of two open-pollinated (OP) sweet corn populations were selected on a certified organic farm in Minnesota using a modified ear-to-row recurrent selection scheme. Selections were made in collaboration with an organic farmer, with selection criteria based on traits identified by the farmer. In 2012 and 2013, the population cycles were evaluated in a randomized complete block design in two certified organic locations in Wisconsin, with multiple replications in each environment. Significant linear trends were found among cycles of selection for quantitative and qualitative traits, suggesting the changes were due to the recurrent selection and PPB methodology applied to these populations. However, further improvement is necessary to satisfy the requirements for a useful cultivar for organic growers.

  14. Recurrent fuzzy neural network by using feedback error learning approaches for LFC in interconnected power system

    International Nuclear Information System (INIS)

    Sabahi, Kamel; Teshnehlab, Mohammad; Shoorhedeli, Mahdi Aliyari

    2009-01-01

    In this study, a new adaptive controller based on modified feedback error learning (FEL) approaches is proposed for the load frequency control (LFC) problem. The FEL strategy consists of intelligent and conventional controllers in the feedforward and feedback paths, respectively. In this strategy, a conventional feedback controller (CFC), i.e. a proportional, integral and derivative (PID) controller, is essential to guarantee global asymptotic stability of the overall system; and an intelligent feedforward controller (INFC) is adopted to learn the inverse of the controlled system. Therefore, when the INFC learns the inverse of the controlled system, tracking of the reference signal is achieved properly. Generally, the CFC is designed at nominal operating conditions of the system and, therefore, fails to provide the best control performance as well as global stability over a wide range of changes in the operating conditions of the system. So, in this study a supervised controller (SC), a lookup-table-based controller, is introduced for tuning of the CFC. During abrupt changes of the power system parameters, the SC adjusts the PID parameters according to these operating conditions. Moreover, to improve the performance of the overall system, a recurrent fuzzy neural network (RFNN) is adopted in the INFC instead of the conventional neural network used in past studies. The proposed FEL controller has been compared with the conventional feedback error learning controller (CFEL) and the PID controller through some performance indices.

  15. PLGA nanofibers blended with designer self-assembling peptides for peripheral neural regeneration

    Energy Technology Data Exchange (ETDEWEB)

    Nune, Manasa; Krishnan, Uma Maheswari; Sethuraman, Swaminathan, E-mail: swami@sastra.edu

    2016-05-01

    Electrospun nanofibers are attractive candidates for neural regeneration due to similarity to the extracellular matrix. Several synthetic polymers have been used but they lack in providing the essential biorecognition motifs on their surfaces. Self-assembling peptide nanofiber scaffolds (SAPNFs) like RADA16 and recently, designer SAPs with functional motifs RADA16-I-BMHP1 areexamples, which showed successful spinal cord regeneration. But these peptide nanofiber scaffolds have poor mechanical properties and faster degradation rates that limit their use for larger nerve defects. Hence, we have developed a novel hybrid nanofiber scaffold of polymer poly(L-lactide-co-glycolide) (PLGA) and RADA16-I-BMHP1. The scaffolds were characterized for the presence of peptides both qualitatively and quantitatively using several techniques like SEM, EDX, FTIR, CHN analysis, Circular Dichroism analysis, Confocal and thermal analysis. Peptide self-assembly was retained post-electrospinning and formed rod-like nanostructures on PLGA nanofibers. In vitro cell compatibility was studied using rat Schwann cells and their adhesion, proliferation and gene expression levels on the designed scaffolds were evaluated. Our results have revealed the significant effects of the peptide blended scaffolds on promoting Schwann cell adhesion, extension and phenotypic expression. Neural development markers (SEM3F, NRP2 & PLX1) gene expression levels were significantly upregulated in peptide blended scaffolds compared to the PLGA scaffolds. Thus the hybrid blended novel designer scaffolds seem to be promising candidates for successful and functional regeneration of the peripheral nerve. - Highlights: • A novel blended scaffold of polymer PLGA and designer self-assembling peptide RADA16-I-BMPH1 was designed • The peptide retained the self-assembling features and formed rod like nanostructures on top of PLGA nanofibers • PLGA-peptide scaffolds have promoted the Schwann cell bipolar extension and

  16. Mining e-cigarette adverse events in social media using Bi-LSTM recurrent neural network with word embedding representation.

    Science.gov (United States)

    Xie, Jiaheng; Liu, Xiao; Dajun Zeng, Daniel

    2018-01-01

    Recent years have seen increased worldwide popularity of e-cigarette use. However, the risks of e-cigarettes are underexamined. Most e-cigarette adverse event studies have achieved low detection rates due to limited subject sample sizes in the experiments and surveys. Social media provides a large data repository of consumers' e-cigarette feedback and experiences, which are useful for e-cigarette safety surveillance. However, it is difficult to automatically interpret the informal and nontechnical consumer vocabulary about e-cigarettes in social media. This issue hinders the use of social media content for e-cigarette safety surveillance. Recent developments in deep neural network methods have shown promise for named entity extraction from noisy text. Motivated by these observations, we aimed to design a deep neural network approach to extract e-cigarette safety information in social media. Our deep neural language model utilizes word embedding as the representation of text input and recognizes named entity types with the state-of-the-art Bidirectional Long Short-Term Memory (Bi-LSTM) Recurrent Neural Network. Our Bi-LSTM model achieved the best performance compared to 3 baseline models, with a precision of 94.10%, a recall of 91.80%, and an F-measure of 92.94%. We identified 1591 unique adverse events and 9930 unique e-cigarette components (ie, chemicals, flavors, and devices) from our research testbed. Although the conditional random field baseline model had slightly better precision than our approach, our Bi-LSTM model achieved much higher recall, resulting in the best F-measure. Our method can be generalized to extract medical concepts from social media for other medical applications. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
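As a quick consistency check, the reported F-measure is the harmonic mean of the reported precision and recall:

```python
# F-measure (F1) as the harmonic mean of precision and recall,
# using the figures reported in the abstract above.
precision, recall = 94.10, 91.80
f_measure = 2 * precision * recall / (precision + recall)
print(f"{f_measure:.2f}")  # → 92.94, matching the reported F-measure
```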

  17. Identifying Emotions on the Basis of Neural Activation.

    Science.gov (United States)

    Kassam, Karim S; Markey, Amanda R; Cherkassky, Vladimir L; Loewenstein, George; Just, Marcel Adam

    2013-01-01

    We attempt to determine the discriminability and organization of neural activation corresponding to the experience of specific emotions. Method actors were asked to self-induce nine emotional states (anger, disgust, envy, fear, happiness, lust, pride, sadness, and shame) while in an fMRI scanner. Using a Gaussian Naïve Bayes pooled variance classifier, we demonstrate the ability to identify specific emotions experienced by an individual at well over chance accuracy on the basis of: 1) neural activation of the same individual in other trials, 2) neural activation of other individuals who experienced similar trials, and 3) neural activation of the same individual to a qualitatively different type of emotion induction. Factor analysis identified valence, arousal, sociality, and lust as dimensions underlying the activation patterns. These results suggest a structure for neural representations of emotion and inform theories of emotional processing.
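A pooled-variance Gaussian naive Bayes classifier of the kind described can be sketched in a few lines; the toy "activation patterns" below are synthetic Gaussian clusters, not the study's fMRI data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "activation patterns": 3 classes, 20-dimensional features, Gaussian clusters.
n_classes, dim, n_per = 3, 20, 30
means = rng.normal(0, 2, (n_classes, dim))
X = np.vstack([rng.normal(means[c], 1.0, (n_per, dim)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

# Pooled variance: per-class means, one shared variance per feature across classes.
class_means = np.array([X[y == c].mean(0) for c in range(n_classes)])
pooled_var = np.mean([X[y == c].var(0) for c in range(n_classes)], axis=0)

def predict(x):
    # Log-likelihood under each class: diagonal Gaussian with the pooled variance.
    ll = -0.5 * ((x - class_means) ** 2 / pooled_var).sum(1)
    return ll.argmax()

# Classify held-out samples drawn from the same clusters.
X_test = np.vstack([rng.normal(means[c], 1.0, (10, dim)) for c in range(n_classes)])
y_test = np.repeat(np.arange(n_classes), 10)
preds = np.array([predict(x) for x in X_test])
acc = (preds == y_test).mean()
print(f"accuracy: {acc:.2f}")
```

Sharing one variance estimate across classes (the "pooled" part) stabilizes the classifier when, as in fMRI, there are far more features than examples per class.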

  18. Stimulus-dependent suppression of chaos in recurrent neural networks

    International Nuclear Information System (INIS)

    Rajan, Kanaka; Abbott, L. F.; Sompolinsky, Haim

    2010-01-01

    Neuronal activity arises from an interaction between ongoing firing generated spontaneously by neural circuits and responses driven by external stimuli. Using mean-field analysis, we ask how a neural network that intrinsically generates chaotic patterns of activity can remain sensitive to extrinsic input. We find that inputs not only drive network responses, but they also actively suppress ongoing activity, ultimately leading to a phase transition in which chaos is completely eliminated. The critical input intensity at the phase transition is a nonmonotonic function of stimulus frequency, revealing a 'resonant' frequency at which the input is most effective at suppressing chaos even though the power spectrum of the spontaneous activity peaks at zero and falls exponentially. A prediction of our analysis is that the variance of neural responses should be most strongly suppressed at frequencies matching the range over which many sensory systems operate.
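The paper's analysis is mean-field theory for continuous-time rate networks; a crude discrete-time caricature (all parameters invented) nevertheless reproduces the headline effect, that a strong external drive reduces the rate at which nearby trajectories diverge:

```python
import numpy as np

rng = np.random.default_rng(2)

N, g = 200, 1.5                              # network size, gain (g > 1: chaotic regime)
J = rng.normal(0, g / np.sqrt(N), (N, N))    # random recurrent weights
b = rng.normal(0, 1, N)                      # fixed input direction per unit

def divergence_rate(amp, steps=300, eps=1e-8):
    """Average log growth rate of a small perturbation (crude Lyapunov estimate)."""
    x = rng.normal(0, 1, N)
    xp = x + eps * rng.normal(0, 1, N)
    total = 0.0
    for t in range(steps):
        drive = amp * np.sin(0.2 * t) * b
        x = np.tanh(J @ x + drive)
        xp = np.tanh(J @ xp + drive)
        d = np.linalg.norm(xp - x)
        total += np.log(d / eps)
        xp = x + (eps / d) * (xp - x)        # renormalize the perturbation each step
    return total / steps

lam_free = divergence_rate(0.0)    # spontaneous dynamics
lam_driven = divergence_rate(3.0)  # strong periodic drive
print(f"free: {lam_free:.3f}, driven: {lam_driven:.3f}")
```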

  19. Self-organizing representations

    Energy Technology Data Exchange (ETDEWEB)

    Kohonen, T.

    1983-01-01

    A property which is commonplace in the brain but which has always been ignored in learning machines is the spatial order of the processing units. This order is clearly highly significant, and in nature it develops gradually during the lifetime of the organism. It then serves as the basis for perceptual and cognitive processes, and for memory, too. The spatial order in biological organisms is often believed to be genetically determined. It is therefore intriguing to learn that a meaningful and optimal spatial order is formed in an extremely simple self-organizing process whereby certain feature maps are formed automatically. 8 references.
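A minimal one-dimensional Kohonen map illustrates the point: starting from randomly ordered weights, winner-take-all updates with a shrinking neighborhood produce a spatially ordered map (the learning-rate and neighborhood schedules here are arbitrary choices, not Kohonen's originals):

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D Kohonen map: 20 units with scalar weights, data uniform on [0, 1].
n_units = 20
w = rng.uniform(0, 1, n_units)            # random initial weights (unordered)
positions = np.arange(n_units)

for t in range(4000):
    x = rng.uniform(0, 1)
    winner = np.argmin(np.abs(w - x))     # best-matching unit
    lr = 0.5 * (1 - t / 4000)             # decaying learning rate
    sigma = max(0.5, 4 * (1 - t / 4000))  # shrinking neighborhood radius
    h = np.exp(-((positions - winner) ** 2) / (2 * sigma ** 2))
    w += lr * h * (x - w)                 # pull winner and its neighbors toward x

# After self-organization the weights should be (nearly) monotonic along the map.
ordered = bool(np.all(np.diff(w) > 0) or np.all(np.diff(w) < 0))
print("spatially ordered:", ordered)
```

The only ingredients are competition (the winner) and local cooperation (the neighborhood function), which is the "extremely simple self-organizing process" the abstract refers to.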

  20. Fuzzy-neural network in the automatic detection and volumetry of the spleen on spiral CT scans

    International Nuclear Information System (INIS)

    Heitmann, K.R.; Mainz Univ.; Rueckert, S.; Heussel, C.P.; Thelen, M.; Kauczor, H.U.; Uthmann, T.

    2000-01-01

    Purpose: To assess spleen segmentation and volumetry in spiral CT scans with and without pathological changes of splenic tissue. Methods: The image analysis software HYBRIKON is based on region growing, self-organized neural nets, and fuzzy anatomic rules. The neural nets were trained with spiral CT data from 10 patients and evaluated on spiral CT scans from a different set of 19 patients. An experienced radiologist verified the results. The true positive and false positive areas were compared with the areas marked by the radiologist, and the results were compared with a standard thresholding method. Results: The neural nets achieved a higher accuracy than the thresholding method: correlation coefficient of the fuzzy-neural nets 0.99 (thresholding: 0.63), mean true positive rate 90% (thresholding: 75%), mean false positive rate 5% (thresholding: >100%). Pitfalls were caused by accessory spleens, extreme changes in morphology (tumors, metastases, cysts), and parasplenic masses. Conclusions: Self-organizing neural nets combined with fuzzy rules are ready for use in the automatic detection and volumetry of the spleen in spiral CT scans. (orig.)
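HYBRIKON itself is not publicly described in code, but the region-growing component can be sketched generically; the toy image below (all values invented) shows why connectivity-based growing avoids a pitfall of plain thresholding, namely unconnected bright structures such as parasplenic masses:

```python
import numpy as np

# Toy 2-D "scan": a bright organ (value 100) and a separate bright artifact,
# both above a global threshold of 50.
img = np.zeros((8, 8))
img[2:6, 2:6] = 100          # the organ: a connected 4x4 block
img[0, 7] = 100              # an unconnected bright artifact

def region_grow(img, seed, thresh=50):
    """Collect pixels 4-connected to the seed whose intensity exceeds thresh."""
    grown = np.zeros(img.shape, dtype=bool)
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if grown[r, c] or img[r, c] <= thresh:
            continue
        grown[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]:
                stack.append((rr, cc))
    return grown

mask = region_grow(img, seed=(3, 3))
print(mask.sum(), (img > 50).sum())   # → 16 17: thresholding also picks up the artifact
```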